Run Cursor with a Local Model: Privacy-First AI Coding Without a Subscription
If you write code for a living, your IDE is now an AI agent — Cursor, GitHub Copilot, Claude Code, Cline, and the rest. They are spectacular productivity multipliers. They are also potential exfiltration vectors for whatever proprietary code you happen to be working on. If your employer has rules about sending source to external APIs, or if you just like the idea of your code never leaving your machine, running these tools against a local model is a viable, fully-supported path in 2026.
This guide gets you from a working Cursor (or VS Code) install to a setup where every completion, every chat, every refactor request is served by a model running on your own GPU. No subscription. No API keys. No code leaving the box.
What “local” actually buys you
Three things, ranked by how much they probably matter to you:
- Privacy — your code, your prompts, and your in-progress thoughts never enter another company’s logs.
- Cost — local inference has zero per-token cost after the GPU. If you write a lot of code, the math gets compelling fast.
- Offline capability — works on a plane, in a coffee shop with sketchy wifi, in a secure facility.
What it does not buy you, honestly:
- Quality parity with frontier models. A local 8B–32B model is impressive, but it is not GPT-5 or Claude Opus 4.7. For boilerplate, refactors, and well-defined tasks it is more than sufficient. For “design me a novel algorithm,” cloud frontier models still win.
- Speed parity. A 4090 running a quantized 14B model produces completions at 30–60 tokens/sec. Cursor Pro backed by a hosted frontier model is often faster.
If those tradeoffs are acceptable, read on.
The two viable paths
There are a lot of “AI coding assistant” projects in 2026. For local-model use, two are mature and stable:
- Cursor with a “Custom Model” pointing at a local OpenAI-compatible endpoint.
- VS Code + Continue.dev with a local model configured directly.
Cursor is the more polished product but its custom-model support is more limited (chat works, full agent mode against a local model has gaps as of mid-2026). Continue.dev is open source, somewhat scrappier, and has full local-model support including agent mode and codebase indexing.
Pick based on which IDE you already use. Both are below.
The shared infrastructure: a local OpenAI-compatible server
Both Cursor and Continue.dev expect to talk to an HTTP endpoint that mimics OpenAI’s API. The simplest way to provide that locally is Ollama, which we covered in the Ollama / LM Studio / llama.cpp comparison.
Install Ollama from https://ollama.com, then pull a model designed for code:
ollama pull qwen2.5-coder:14b
Why this model: Qwen 2.5 Coder 14B is one of the strongest open-weight code models as of 2026, fits in 12 GB VRAM at Q4_K_M, and was trained heavily on code and code reasoning. Alternatives worth considering:
- qwen2.5-coder:7b — fits in 8 GB VRAM, slightly weaker on logic but very capable.
- deepseek-coder-v2:16b-lite-instruct — strong long-context performance.
- codestral:22b — Mistral’s code model; needs ~14 GB.
- llama3.1:8b — general-purpose, fine for code completion if a coder model is too large.
For VRAM math, see our guide to VRAM for Llama models — the numbers transfer to other model families.
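If you keep more than one of these around, Ollama’s built-in commands show what is downloaded and what is actually resident on the GPU:

ollama list    # everything downloaded, with on-disk size
ollama ps      # currently loaded models, and whether they are running on GPU or CPU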
Verify Ollama is serving:
curl http://localhost:11434/v1/chat/completions \
-H "Content-Type: application/json" \
-d '{"model": "qwen2.5-coder:14b", "messages": [{"role": "user", "content": "say hello"}]}'
If you get a JSON response with "choices", the OpenAI-compatible endpoint is working.
Path A: Cursor with a custom model
Open Cursor → Settings → Models. Find the Custom Models or OpenAI API Key section, and enable Override OpenAI base URL:
- Override URL: http://localhost:11434/v1
- API Key: any non-empty string (Ollama ignores it but Cursor requires the field)
- Model: qwen2.5-coder:14b (or whatever you pulled)
Then in the model dropdown above, your custom model should appear and be selectable. Switch the chat panel to it, ask a question, and verify Ollama’s terminal shows incoming requests.
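If the model never appears or chat errors out, the usual culprit is a model-name mismatch. The endpoint will tell you the exact identifiers it exposes; copy one verbatim into the Model field:

curl http://localhost:11434/v1/models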
What works in Cursor with a local model:
- Inline completions (Cmd+K rewrite, Tab to accept)
- Chat mode (Cmd+L)
- Code explanations
- Single-file refactors
What does not (or works inconsistently) as of mid-2026:
- Cursor’s full “Composer” agent mode that touches multiple files and runs commands — designed around frontier-model capabilities and tends to fail on local models.
- Codebase-wide context retrieval — Cursor’s embedding pipeline expects OpenAI’s embeddings; pointing it at local embeddings is non-trivial.
For full agent mode against local models, Continue.dev is the better path.
Path B: VS Code + Continue.dev with full local mode
Install the Continue extension from the VS Code marketplace. After installation, it creates
a config file at ~/.continue/config.yaml (Mac/Linux) or %USERPROFILE%\.continue\config.yaml
(Windows). Replace the contents with:
name: Local-only setup
models:
  - title: Qwen Coder 14B
    provider: ollama
    model: qwen2.5-coder:14b
    apiBase: http://localhost:11434
  - title: Llama 3.1 8B
    provider: ollama
    model: llama3.1:8b
    apiBase: http://localhost:11434
embeddingsProvider:
  provider: ollama
  model: nomic-embed-text
  apiBase: http://localhost:11434
contextProviders:
  - name: codebase
  - name: open
  - name: terminal
  - name: docs
  - name: diff
Pull the embedding model:
ollama pull nomic-embed-text
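Before letting Continue index a large repo, it is worth confirming the embedding model responds at all; the reply should contain an "embedding" array of floats:

curl http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'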
Now Continue can:
- Use Qwen Coder for chat and inline completions
- Index your codebase using local embeddings (your code never leaves your machine)
- Use the @codebase retrieval provider in chat to find relevant files
- Run “edit” actions on selected code
- Operate in agent mode for multi-step tasks
Agent mode against a 14B local model is noticeably less capable than Cursor running Claude Sonnet 4.6 in the cloud — multi-file refactors that would take 1–2 turns in Cursor often take 5–10 turns locally — but it works, and your code stays on the box.
Performance and what to expect
On an RTX 4090 / 5090 with Qwen 2.5 Coder 14B Q4_K_M:
- Completion latency for short suggestions: 200–400 ms (acceptable for inline)
- Chat throughput: 40–60 tokens/sec
- Agent multi-step task: 10–60 seconds depending on complexity
On a 16 GB card (4060 Ti, 5060 Ti):
- 7B model is comfortable; 14B Q4 fits but no headroom for long context
- Completion latency: 300–600 ms (still usable)
- Throughput: 25–40 tokens/sec
On 12 GB cards: stick to the 7B variants; you will be happier.
The honest experience: completions and chat feel comparable to a cloud model. Multi-step agent tasks feel noticeably slower and less reliable. For 80% of the IDE-AI workflow that is “complete this line, refactor this function, explain this snippet,” local is fine.
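If you want to check these numbers on your own card rather than take ours, Ollama’s native generate endpoint returns eval_count and eval_duration (in nanoseconds) in its final response, which is enough for a rough tokens-per-second figure. A quick sketch (any prompt works; longer outputs give a steadier number):

curl -s http://localhost:11434/api/generate \
  -d '{"model": "qwen2.5-coder:14b", "prompt": "Write a binary search in Python.", "stream": false}' \
  | python3 -c "import sys, json; r = json.load(sys.stdin); print(round(r['eval_count'] / r['eval_duration'] * 1e9, 1), 'tokens/sec')"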
When to keep using cloud anyway
Some tasks where local models still trail:
- Architecting non-trivial new code — designing a system from scratch, picking the right abstractions. Frontier models still see the design space better.
- Large-context refactors across 10+ files — embedding-based retrieval helps but generation quality degrades on local models when context gets very large.
- Tasks that require knowledge past your model’s training cutoff — a local model’s knowledge may be older than the latest cloud model’s.
A hybrid approach is what most professional users settle on: **local for the 80% of work that is boilerplate, refactors, and completions; cloud for the 20% that is high-value design and architecture work.** The tools support this — Cursor and Continue both let you switch models per chat.
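In Continue, that can be one more entry under models: in the config above, which you only pick from the chat dropdown when a task warrants it. A sketch, keeping the same field names as the rest of this config (the provider, model string, and key placeholder are illustrative; check Continue’s docs for the exact schema your version expects):

  - title: Claude Sonnet (cloud, opt-in)
    provider: anthropic
    model: claude-sonnet-4-6     # illustrative; use whatever cloud model you subscribe to
    apiKey: YOUR_ANTHROPIC_KEY   # placeholder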
The privacy bit, with nuance
“Code never leaves the machine” is technically true with this setup. Caveats worth knowing:
- Cursor’s telemetry still sends usage events (button clicks, errors, performance metrics) even when you use a custom model endpoint. Disable telemetry in settings if this matters.
- Continue.dev is open source and you can audit its outbound traffic.
- Ollama does not phone home for inference itself, but does check for model updates at startup.
If you need defensible privacy (i.e., proof for an audit), running on an air-gapped machine with telemetry fully disabled is the only watertight path. If you just want reasonable privacy from your IDE vendor, the local-model setup above is sufficient.
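On the first caveat: Cursor is a VS Code fork, so the standard telemetry setting applies in its settings.json, and Cursor also exposes its own privacy toggle in its settings UI (the exact Cursor-side toggles move between versions). The minimal starting point is one line in settings.json:

"telemetry.telemetryLevel": "off"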
No GPU yet? Rent one for a few dollars
If you are still on a laptop iGPU or a 4 GB card and the local-model path looks blocked
on hardware, the practical bridge is a cloud GPU you spin up on demand. A 4090 on
RunPod is around $0.34/hour on Community Cloud, which
means you can run qwen2.5-coder:32b from a remote Ollama endpoint and point Continue.dev
at its IP. Total cost for a coding session is typically under $1, and the model quality
is meaningfully better than what fits on 8–12 GB locally. When you decide whether to
buy hardware, see our GPU buying guide and the
VRAM-tier model matrix to size up the upgrade.
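A minimal way to wire that up, assuming Ollama is already installed on the rented box and you have SSH access; the Ollama port carries no authentication, so an SSH tunnel is safer than exposing it publicly:

ollama pull qwen2.5-coder:32b                  # on the pod; roughly 20 GB of VRAM at Q4
ssh -N -L 11435:localhost:11434 user@POD_IP    # on your laptop; forwards the pod's Ollama port

Then add a model entry in Continue with apiBase: http://localhost:11435, and the remote 32B behaves exactly like a local model while any local Ollama keeps port 11434.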
The setup that works
For most professional programmers in 2026, that means:
- Continue.dev in VS Code as the primary AI coding interface.
- Ollama with qwen2.5-coder:14b as the local model for routine work.
- Cursor or cloud Claude kept as a parallel option for harder tasks.
- Codebase embedding handled locally via nomic-embed-text.
That gives you privacy, productivity, and zero ongoing cost — at the price of some friction on the most ambitious tasks. For anyone with code their employer cares about, or anyone allergic to per-month subscriptions, that price is worth paying.
If you also use AI at home for image generation, language model chat, or other things beyond coding, the same hardware that runs Qwen Coder will run those workloads in parallel — see our guide on picking models per VRAM tier for how to split a 16 GB or 24 GB card across multiple uses.