Ollama is a tool for running LLMs locally. It exposes an interface for downloading and running published models from its library (which seems to be largely Hugging Face backed).
How do I use it?
Currently running it on the Windows laptop that I otherwise use exclusively for gaming & VR, since LLMs benefit from ample memory and a powerful GPU.
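The basic CLI workflow is pull, then run. A minimal sketch (the model name here is just an example; browse the library for what's current):

```shell
# Download a model from the Ollama library
ollama pull llama3.2

# Start an interactive chat session in the terminal
ollama run llama3.2

# Or send a one-off prompt without entering a session
ollama run llama3.2 "Explain what a context window is in one sentence."

# List the models downloaded locally
ollama list
```

Ollama also serves a local HTTP API (on port 11434 by default) while it's running, so other tools can talk to the same models.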
Resources
This article gave a really good summary of how to choose models from the Ollama library: