A lightweight Go proxy that exposes GitHub Copilot as OpenAI-compatible, Anthropic-compatible, Gemini-compatible, and AmpCode-compatible API endpoints.
- OpenAI API Compatible: `/v1/chat/completions`, `/v1/models`, `/v1/embeddings`, `/v1/responses`
- Embeddings Support: Native OpenAI-compatible `/v1/embeddings` endpoint
- Anthropic API Compatible: `/v1/messages`
- Gemini API Compatible: `/v1beta/models`, `/v1beta/models/{model}:generateContent`, `/v1beta/models/{model}:streamGenerateContent`, `/v1beta/models/{model}:countTokens`
- AmpCode Compatible: `/amp/v1/*` routes for chat, `/api/provider/*` for provider-specific calls, management proxied to ampcode.com
- Streaming Support: Full SSE streaming for both OpenAI and Anthropic formats
- Anthropic Routing: Uses native `/v1/messages` when the model supports it, otherwise routes via `/responses` or `/chat/completions`
- Auto Authentication: GitHub Device Flow OAuth with automatic token refresh
- Usage Monitoring: Built-in `/usage` endpoint for quota tracking
- Models Cache: 5-minute cache for `/v1/models` and Anthropic model capability lookups
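The streaming endpoints emit OpenAI-style SSE: each event arrives as a `data:` line carrying a JSON payload, terminated by a `data: [DONE]` sentinel. As a minimal sketch (the helper and sample payload below are illustrative, not part of copilot2api):

```python
import json

def iter_sse_events(lines):
    """Yield parsed JSON payloads from OpenAI-style SSE lines.

    Ignores blank lines and stops at the "[DONE]" sentinel.
    """
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            return
        yield json.loads(payload)

# A sample stream as it would arrive from /v1/chat/completions with stream=true
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    '',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    'data: [DONE]',
]
text = "".join(e["choices"][0]["delta"]["content"] for e in iter_sse_events(sample))
print(text)  # Hello!
```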
```shell
docker run -it --rm \
  -p 127.0.0.1:7777:7777 \
  -v ~/.config/copilot2api:/root/.config/copilot2api \
  ghcr.io/whtsky/copilot2api:latest
```

The volume mount persists your GitHub credentials across container restarts. The examples publish the port on 127.0.0.1 only so the proxy stays local by default.
Docker Compose

```yaml
services:
  copilot2api:
    image: ghcr.io/whtsky/copilot2api:latest
    ports:
      - "127.0.0.1:7777:7777"
    volumes:
      - ${HOME}/.config/copilot2api:/root/.config/copilot2api
```

Start it with:

```shell
docker compose up
```

Download the asset that matches your platform from GitHub Releases:

```shell
# Example: macOS Apple Silicon
curl -L -o copilot2api \
  https://github.com/whtsky/copilot2api/releases/latest/download/copilot2api-darwin-arm64

# Example: Linux x64
# curl -L -o copilot2api \
#   https://github.com/whtsky/copilot2api/releases/latest/download/copilot2api-linux-amd64

chmod +x copilot2api
./copilot2api
```

Published binaries use names like copilot2api-linux-amd64, copilot2api-linux-arm64, copilot2api-darwin-amd64, copilot2api-darwin-arm64, copilot2api-windows-amd64.exe, and copilot2api-windows-arm64.exe.
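The asset names follow a `copilot2api-{os}-{arch}` convention, so the right one can be derived from the running platform. A small sketch (the helper is illustrative; the asset names themselves come from the list above):

```python
import platform

def asset_name(system=None, machine=None):
    """Map an OS/arch pair to the published release asset name."""
    system = (system or platform.system()).lower()
    machine = (machine or platform.machine()).lower()
    os_part = {"darwin": "darwin", "linux": "linux", "windows": "windows"}[system]
    # Normalize common architecture spellings to Go's amd64/arm64 naming
    arch_part = {"x86_64": "amd64", "amd64": "amd64",
                 "arm64": "arm64", "aarch64": "arm64"}[machine]
    name = f"copilot2api-{os_part}-{arch_part}"
    return name + ".exe" if os_part == "windows" else name

print(asset_name("Darwin", "arm64"))   # copilot2api-darwin-arm64
print(asset_name("Windows", "AMD64"))  # copilot2api-windows-amd64.exe
```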
On first run, both Docker and downloaded binaries prompt for GitHub Device Flow authentication:

```
GitHub Authentication Required
Please visit: https://github.com/login/device
Enter code: XXXX-XXXX
Waiting for authorization...

Authentication successful!
```

The server starts on http://127.0.0.1:7777 by default.
- Does not implement API key validation: any request is accepted
- Do not expose publicly: it becomes an open proxy consuming your Copilot quota
- Credentials are stored in `~/.config/copilot2api/credentials.json`
Add to ~/.claude/settings.json:

```json
{
  "env": {
    "ANTHROPIC_BASE_URL": "http://127.0.0.1:7777",
    "ANTHROPIC_API_KEY": "dummy",
    "ANTHROPIC_MODEL": "claude-opus-4.6",
    "ANTHROPIC_SMALL_FAST_MODEL": "claude-haiku-4.5",
    "CLAUDE_CODE_DISABLE_NONESSENTIAL_TRAFFIC": "1"
  },
  "permissions": {
    "deny": [
      "WebSearch"
    ]
  }
}
```

copilot2api supports Claude 1M context models. When Claude Code sends the `anthropic-beta: context-1m-...` header, the proxy automatically appends `-1m` to the model ID (e.g. claude-opus-4.6 → claude-opus-4.6-1m) so Copilot routes to the 1M variant.

To use it, select the 1M model variant in Claude Code via the /model command (e.g. Opus (1M)). Without this, Claude Code defaults to the standard 200K context window.
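The rewrite rule described above boils down to: if the incoming `anthropic-beta` header contains a `context-1m` entry and the model ID does not already carry the suffix, append `-1m`. An illustrative Python sketch of that rule (not the proxy's actual Go code; the exact header matching may differ, and the beta ID shown is just an example value):

```python
def resolve_model(model, anthropic_beta=""):
    """Append "-1m" when the client opts into the 1M-context beta.

    `anthropic_beta` is the raw header value, which may list several
    comma-separated beta flags.
    """
    wants_1m = any(part.strip().startswith("context-1m")
                   for part in anthropic_beta.split(","))
    if wants_1m and not model.endswith("-1m"):
        return model + "-1m"
    return model

# "context-1m-2025-08-07" is an example beta ID with a date suffix
print(resolve_model("claude-opus-4.6", "context-1m-2025-08-07"))  # claude-opus-4.6-1m
print(resolve_model("claude-opus-4.6"))                           # claude-opus-4.6
```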
Add to ~/.codex/config.toml:

```toml
model = "gpt-5.3-codex"
model_provider = "copilot2api"
model_reasoning_effort = "high"
web_search = "disabled"

[model_providers.copilot2api]
name = "copilot2api"
base_url = "http://127.0.0.1:7777/v1"
wire_api = "responses"
api_key = "dummy"
```

Add to ~/.gemini/.env:

```shell
GOOGLE_GEMINI_BASE_URL=http://127.0.0.1:7777
GEMINI_API_KEY=dummy
GEMINI_MODEL=claude-opus-4.6-1m
```

Set the AMP_URL environment variable to point at copilot2api:

```shell
AMP_URL=http://127.0.0.1:7777/amp amp
```

Or add to ~/.config/amp/settings.json:

```json
{
  "amp.url": "http://127.0.0.1:7777/amp"
}
```

Chat completions, tool calls, and image input all route through the Copilot API. Login and management routes (threads, telemetry) are proxied to ampcode.com; a free amp account is required for authentication.
Usage with curl

```shell
# OpenAI chat completion
curl http://localhost:7777/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"gpt-5.3-codex","messages":[{"role":"user","content":"Hello!"}]}'

# Anthropic message
curl http://localhost:7777/v1/messages \
  -H "Content-Type: application/json" \
  -H "x-api-key: dummy" \
  -d '{"model":"claude-sonnet-4.6","messages":[{"role":"user","content":"Hello!"}],"max_tokens":100}'

# List models
curl http://localhost:7777/v1/models

# Check usage/quota
curl http://localhost:7777/usage
```

Usage with SDKs
```python
import openai

client = openai.OpenAI(
    api_key="dummy",
    base_url="http://127.0.0.1:7777/v1"
)

response = client.chat.completions.create(
    model="gpt-5.3-codex",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

```python
import anthropic

client = anthropic.Anthropic(
    api_key="dummy",
    base_url="http://127.0.0.1:7777"
)

message = client.messages.create(
    model="claude-sonnet-4.6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello!"}]
)
```

| Endpoint | Method | Description |
|---|---|---|
| `/v1/chat/completions` | POST | OpenAI Chat Completions (streaming & non-streaming) |
| `/v1/responses` | POST | OpenAI Responses API |
| `/v1/models` | GET | List available models (5-minute cache) |
| `/v1/embeddings` | POST | Generate embeddings (string or array input) |
| `/v1/messages` | POST | Anthropic Messages API (streaming & non-streaming) |
| `/v1beta/models` | GET | List Gemini-compatible models |
| `/v1beta/models/{model}:generateContent` | POST | Gemini Generate Content |
| `/v1beta/models/{model}:streamGenerateContent` | POST | Gemini Generate Content (streaming SSE) |
| `/v1beta/models/{model}:countTokens` | POST | Gemini token counting (estimate) |
| `/amp/v1/chat/completions` | POST | AmpCode chat completions (via Copilot API) |
| `/amp/v1/models` | GET | AmpCode model listing |
| `/api/provider/*` | POST | AmpCode provider-specific routes |
| `/api/*` | ANY | AmpCode management proxy to ampcode.com |
| `/usage` | GET | Copilot usage and quota info |
```
./copilot2api [options]

  -host string       Server host (default "127.0.0.1")
  -port int          Server port (default 7777)
  -token-dir string  Token storage directory (default ~/.config/copilot2api)
  -debug             Enable debug logging
  -version           Show version and exit
```
Environment variables are used as defaults when flags are not provided:
| Variable | Description | Default |
|---|---|---|
| `COPILOT2API_HOST` | Server host | 127.0.0.1 |
| `COPILOT2API_PORT` | Server port | 7777 |
| `COPILOT2API_TOKEN_DIR` | Token storage directory | ~/.config/copilot2api |
| `COPILOT2API_DEBUG` | Enable debug logging (true/false, 1/0) | false |
CLI flags take precedence over environment variables.
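That precedence (flag, then environment variable, then built-in default) can be sketched as a small resolver; this is illustrative Python, not the actual Go implementation:

```python
import os

def resolve(flag_value, env_var, default):
    """Return the flag value if given, else the env var if set, else the default."""
    if flag_value is not None:
        return flag_value
    return os.environ.get(env_var, default)

os.environ["COPILOT2API_PORT"] = "8888"
print(resolve(None, "COPILOT2API_PORT", "7777"))    # 8888 (env var overrides default)
print(resolve("9999", "COPILOT2API_PORT", "7777"))  # 9999 (flag wins over env var)
```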
- Authenticates with GitHub via Device Flow OAuth
- Exchanges the GitHub token for a Copilot API token (auto-refreshed)
- Proxies OpenAI-format requests directly to the Copilot API
- Routes Anthropic Messages requests by model capabilities (native `/v1/messages`, translated `/responses`, or translated `/chat/completions`)
- Automatically detects the API endpoint from the token (Individual/Business/Enterprise)
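The Anthropic routing step amounts to a capability check with two fallbacks. A sketch of that decision (the `capabilities` record and its field names are hypothetical; copilot2api derives real capabilities from the cached `/v1/models` listing):

```python
def route_anthropic(capabilities):
    """Pick the upstream endpoint for an incoming /v1/messages request."""
    if capabilities.get("messages"):
        return "/v1/messages"       # model supports native Anthropic Messages
    if capabilities.get("responses"):
        return "/responses"         # translate to the Responses API
    return "/chat/completions"      # last resort: translate to Chat Completions

print(route_anthropic({"messages": True}))   # /v1/messages
print(route_anthropic({"responses": True}))  # /responses
print(route_anthropic({}))                   # /chat/completions
```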
```shell
go test ./...                # Run tests
go build -o copilot2api .    # Build
```

MIT