LOCAL-FIRST AI

OLLAMA · VLLM · LLAMA.CPP · LM STUDIO

Your models.
Your machine.
Full power.

uplnk is an open-source TUI coding assistant that runs entirely on your hardware. Claude Code quality — pointed at your local Ollama server. No cloud, no subscription, no data leaving your machine.

Works with Ollama · vLLM · LM Studio · llama.cpp

The situation

You've been here before.

01

It's 11pm. You're debugging a production issue. You paste your auth middleware into the chat window. The response is exactly what you needed. Then you remember: you just sent your session token schema to a vendor's inference server. You close the tab and figure it out yourself.

◆ With uplnk: same conversation. Same response quality. The inference runs on your machine. The tokens don't leave.

02

You installed Aider because it supports Ollama. It does — sort of. The streaming drops out halfway through a long response. The edit format confuses the 8B model. You spend 20 minutes reading error output instead of writing code. You paste the function into Jan.ai and keep going.

◆ With uplnk: the streaming layer handles local model latency. When a response stalls, you see why. When it succeeds, the output is syntax-highlighted and ready to use.

03

Your manager asks you to stop using Cursor for work. The security review found that source code is transmitted to third-party inference servers during normal use. You need an alternative that passes infosec review — and doesn't require explaining to the CISO what an ngrok tunnel is.

◆ With uplnk: local inference, no network egress, MIT-licensed source you can audit. The answer to your security team is two words: "runs locally."

Capabilities

Everything you need.
Nothing you don't.

TUI-Native

Runs inside your existing terminal. No browser, no Electron, no separate window stealing focus.

Local-First

Talk to Ollama, vLLM, LM Studio, or any OpenAI-compatible server. Your models, your rules, your hardware.

Streaming That Doesn't Drop

Built for local model latency. uplnk handles timeouts, backpressure, and slow inference without freezing — you see output as it arrives, not after it fails.

File Context, Done Right

Drag a file in or paste a snippet. uplnk attaches it as context so your LLM answers about your actual code. Full codebase indexing → v0.5

SQLite History

Every conversation saved locally at ~/.uplnk/db.sqlite. Portable, queryable, forever yours.
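Because the history file is plain SQLite, any client can read it. A quick, schema-agnostic way to peek inside, using only the stock sqlite3 CLI (no uplnk table or column names are assumed; sqlite_master is queried for whatever actually exists):

```shell
# Inspect uplnk's local history with the stock sqlite3 shell.
# Nothing uplnk-specific is hard-coded: sqlite_master lists the
# tables that are really there, and .schema prints their definitions.
DB="$HOME/.uplnk/db.sqlite"
if [ -f "$DB" ]; then
  sqlite3 "$DB" "SELECT name FROM sqlite_master WHERE type='table';"
  sqlite3 "$DB" ".schema"
fi
```

From there, any SQL client, BI tool, or scripting language with a SQLite driver can query your conversation history directly.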

Zero Cloud

No API keys sent to our servers. No telemetry without consent. No subscription required. Ever.

Get started in 60 seconds

Three commands.
Running in under a minute.

shell

# Install uplnk globally

$ npm install -g uplnk

# Connect your local LLM

$ uplnk config --provider ollama --url http://localhost:11434

# Start coding with AI

$ uplnk

◆ uplnk v0.1.0 — Connected: Ollama (llama3:8b)

Requires Node.js 20+ and a running Ollama/vLLM instance. Get Ollama →
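vLLM and LM Studio expose OpenAI-compatible endpoints on their own default ports, so pointing uplnk at them should mirror the Ollama example above. A sketch, assuming the same `--provider`/`--url` flags (the provider names below are illustrative; the ports are each project's documented defaults):

```shell
# vLLM's OpenAI-compatible server defaults to port 8000
# (provider name "vllm" is assumed, by analogy with the Ollama example)
uplnk config --provider vllm --url http://localhost:8000

# LM Studio's local server defaults to port 1234
# (provider name "lmstudio" is likewise an assumption)
uplnk config --provider lmstudio --url http://localhost:1234

# Before launching uplnk, sanity-check that a server is listening;
# Ollama, for example, answers /api/tags with the models you have pulled
curl -s http://localhost:11434/api/tags
```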

How it stacks up

The honest comparison.

Legend: supported · not supported · ~ partial / cloud-only

Feature · uplnk (open source) · Aider · Continue.dev · Open WebUI · Cursor

Terminal UI (TUI): uplnk runs inside your existing terminal; Continue.dev is an IDE extension, Open WebUI a web app, Cursor an IDE app.
Local LLM native: Ollama, vLLM, llama.cpp first-class.
MCP tool calling: Model Context Protocol, file browse + tools (v0.5) · ~ · ~
Codebase-aware context: full project indexing + .gitignore (v0.5) · ~
Portable local history: SQLite at ~/.uplnk/db.sqlite, yours forever · ~ · ~
Open source
No subscription
Runs fully offline / air-gapped · ~

For uplnk, ~ = planned on the public roadmap (v0.5 or v1.0). For the others, ~ = partial or requires a workaround. Based on public documentation, April 2026.

Local LLMs deserve
better tooling.

Open source. MIT licensed. No tracking. No paywalls. Install it once and it's yours — no subscription, no vendor lock-in, no account required.

MIT License · Open source forever