With the increasing adoption of large language models (LLMs) in software development, running these models locally has become an attractive option for developers seeking lower latency, stronger privacy, and tighter cost control. Two popular solutions have emerged in this space: Ollama, an established framework for managing local LLMs, and Docker Model Runner, a recent