Support for Local AI Model #2531
Description
Describe the feature or problem you'd like to solve
Support for connecting to local AI models.
Proposed solution
Hi Copilot CLI team,
I'd love to see support for connecting to local AI models (e.g., Ollama, LM Studio, llama.cpp) as an alternative backend in addition to the hosted cloud models.
Why this matters:
Privacy — some codebases can't be sent to cloud APIs due to security/compliance policies
Offline use — ability to work without internet access
Cost — reduce premium request quota usage for repetitive tasks
Flexibility — let users choose the best model for their workflow
Suggested approach: A config option to point the CLI at a local OpenAI-compatible endpoint (e.g., http://localhost:11434) would be a great starting point, similar to how other tools support OPENAI_BASE_URL overrides.
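To illustrate the idea, here is a minimal sketch (not an actual Copilot CLI implementation) of how a tool might honor an OPENAI_BASE_URL-style override and build a request against the OpenAI chat-completions wire format, which local servers like Ollama and LM Studio expose. The function name `build_chat_request` and the default model name are hypothetical; Ollama's default OpenAI-compatible base URL is http://localhost:11434/v1.

```python
import json
import os
import urllib.request


def build_chat_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build a chat-completions request against an OpenAI-compatible endpoint.

    Honors an OPENAI_BASE_URL override, falling back to Ollama's default
    local address. (Hypothetical sketch, not the Copilot CLI's actual API.)
    """
    base_url = os.environ.get("OPENAI_BASE_URL", "http://localhost:11434/v1")
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        url=f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Actually sending the request requires a local server to be running:
# with urllib.request.urlopen(build_chat_request("Explain this diff")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

With this pattern, switching between a hosted backend and a local one is just a matter of setting one environment variable, which is exactly the ergonomics this feature request asks for.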
Thank you for building such a great tool — looking forward to seeing it evolve!
Example prompts or workflows
No response
Additional context
No response