
Running LLMs Locally

How do I run an LLM locally? (2025-W36)

Main Platforms for Running LLMs Locally

Running LLMs with LM Studio

Running LLMs with Ollama

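As a minimal sketch, assuming a model such as gemma3 and that the server should be reachable from other machines on the LAN (as in the codex example further down), getting Ollama serving looks roughly like this:

```bash
# Pull a model and try it locally; Ollama's API listens on port 11434 by default
ollama pull gemma3
ollama run gemma3 "Say hello"

# To expose the API to other machines on the network, bind to all interfaces
# before starting the server (assumption: LAN access is wanted, as in the
# base_url used in the codex example below)
OLLAMA_HOST=0.0.0.0 ollama serve
```
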
Using Tools with Locally Running LLMs

Tool Support

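A sketch of exercising tool calling against a locally served model, using the OpenAI-compatible endpoint that Ollama exposes under /v1. The host, the get_weather tool schema, and the choice of llama3.1 (a model with tool-calling support) are illustrative assumptions:

```bash
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3.1",
    "messages": [{"role": "user", "content": "What is the weather in Paris?"}],
    "tools": [{
      "type": "function",
      "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
          "type": "object",
          "properties": {"city": {"type": "string"}},
          "required": ["city"]
        }
      }
    }]
  }'
```

If the model decides to use the tool, the assistant message in the response carries a tool_calls entry instead of plain text.
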
Invoking a Locally Running LLM with Existing CLIs

Codex Integration

  • codex exec --config model_provider=ollama --config model_providers.ollama.base_url=http://192.168.1.252:11423/v1 --config model=gemma3 hi
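
The same provider settings can be kept in Codex's configuration file instead of being passed as --config overrides on every invocation. A sketch, assuming the standard ~/.codex/config.toml location and the Ollama server from the command above (Ollama's default API port is 11434, so adjust the base_url if your server uses the default):

```toml
# ~/.codex/config.toml (sketch; adjust host/port to match your Ollama server)
model = "gemma3"
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama"
base_url = "http://192.168.1.252:11423/v1"
# wire_api = "chat"  # may be needed on some Codex versions (assumption)
```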

Claude Code Integration

  • We can use Claude Code Router to point the Claude Code CLI at a locally running LLM instead of the default remote model; a sketch of the router configuration follows below.

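A rough sketch of the router configuration (conventionally ~/.claude-code-router/config.json); the field names and the ccr launch command are assumptions based on the project's documented JSON config and should be checked against its README:

```json
{
  "Providers": [
    {
      "name": "ollama",
      "api_base_url": "http://192.168.1.252:11434/v1/chat/completions",
      "api_key": "ollama",
      "models": ["gemma3"]
    }
  ],
  "Router": {
    "default": "ollama,gemma3"
  }
}
```

With this in place, launching the CLI through the router (typically ccr code) should send Claude Code's requests to the local model.
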
Models and Tool Support

Available Models

Hardware

Clustering Mac Minis

Apple Silicon (M1/M2/M3) Performance

Documentation and Resources

Official Documentation

Community Discussions

Video Tutorials

Network and Diagramming