Support for Local AI Model #2531

@tnpcmoney

Description

Describe the feature or problem you'd like to solve

Support for connecting to local AI models.

Proposed solution

Hi Copilot CLI team,

I'd love to see support for connecting to local AI models (e.g., Ollama, LM Studio, llama.cpp) as an alternative backend in addition to the hosted cloud models.

Why this matters:

- Privacy: some codebases can't be sent to cloud APIs due to security/compliance policies
- Offline use: the ability to work without internet access
- Cost: reduced premium request quota usage for repetitive tasks
- Flexibility: letting users choose the best model for their workflow

Suggested approach: a config option to point the CLI at a local OpenAI-compatible endpoint (e.g., http://localhost:11434) would be a great starting point, similar to how other tools support OPENAI_BASE_URL overrides.
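As an illustration of what such an override could look like (the COPILOT_BASE_URL and COPILOT_MODEL variable names here are hypothetical placeholders, not real Copilot CLI settings; the Ollama endpoint and the OPENAI_BASE_URL convention are existing practice):

```shell
# Ollama exposes an OpenAI-compatible API under /v1 on port 11434.
ollama serve &

# Hypothetical config: point the CLI at the local endpoint, the same
# way many tools honor an OPENAI_BASE_URL override.
export COPILOT_BASE_URL="http://localhost:11434/v1"
export COPILOT_MODEL="llama3.1"

# Sanity check that the local server speaks the OpenAI chat API:
curl -s "$COPILOT_BASE_URL/chat/completions" \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3.1", "messages": [{"role": "user", "content": "hello"}]}'
```

Because the endpoint follows the OpenAI wire format, the CLI would not need a separate integration per backend; anything that serves /v1/chat/completions (Ollama, LM Studio, llama.cpp's server) would work through the same override.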

Thank you for building such a great tool — looking forward to seeing it evolve!

Example prompts or workflows

No response

Additional context

No response
