Kelpie is an LLM-first browser for macOS, iOS, Android, and Linux. Language models discover and orchestrate real devices on the local network — no emulators, no persistent scripts, no desktop required.
Test exactly how Safari renders — same engine, same quirks, same performance characteristics. Ideal for validating iOS-equivalent rendering behaviour without leaving your desk. Every WebKit-specific CSS feature, font rendering decision, and scroll behaviour is faithfully reproduced.
The complete Chrome DevTools Protocol surface — request interception, performance timelines, JS coverage, cross-origin iframe access. Kelpie embeds a real Chromium engine via CEF, not a headless binary. Switch renderers on the fly.
kelpie_set_renderer engine=chromium · Cookies migrate automatically. No reload required.
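Over MCP, a renderer switch is an ordinary `tools/call` request. A minimal sketch of the JSON-RPC 2.0 message an MCP client would send — the tool name `kelpie_set_renderer` and the `engine` argument come from the text above, but the exact argument schema is an assumption:

```python
import json

def make_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Serialise an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Ask the running Kelpie instance to switch engines; cookies carry over,
# no reload required.
payload = make_tool_call("kelpie_set_renderer", {"engine": "chromium"})
print(payload)
```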
Playwright controls browsers through CDP over a WebSocket. Kelpie calls iOS WebKit and Android WebView APIs directly in-process. No external process, no protocol hop, no serialisation round-trip.
Running on an actual iPhone or Android phone means real GPU rendering, real network stack, real iOS fonts and scroll physics. Playwright on desktop cannot replicate mobile platform behaviour.
Click, swipe, type, scroll, fill forms, take screenshots, read the DOM, intercept network, observe mutations — every interaction a human user can perform is exposed as a first-class MCP tool. If you can tap it, an LLM can too.
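Because each interaction is a first-class MCP tool, a model can express a whole workflow as an ordered list of tool calls. A sketch of a login-and-screenshot sequence — the tool names and argument fields here are hypothetical placeholders modelled on the interactions listed above, not Kelpie's documented schema:

```python
import json

# Hypothetical tool names and argument shapes for illustration only.
steps = [
    ("navigate",   {"url": "https://example.com/login"}),
    ("fill",       {"selector": "#email", "value": "user@example.com"}),
    ("click",      {"selector": "button[type=submit]"}),
    ("screenshot", {"full_page": True}),
]

# Each step becomes one JSON-RPC 2.0 tools/call message with its own id.
messages = [
    json.dumps({
        "jsonrpc": "2.0",
        "id": i,
        "method": "tools/call",
        "params": {"name": name, "arguments": args},
    })
    for i, (name, args) in enumerate(steps, start=1)
]
```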
Install a small local model directly in the browser. Kelpie ships five AI endpoints — load, unload, infer, status, and audio recording — available on macOS, iOS, and Android via the same HTTP and MCP interface.
Extract and evaluate page content without touching a cloud API. Use a local model to parse a page, classify elements, or summarise content — all inference stays on the device. Perfect for high-volume scraping or privacy-sensitive content.
Switch seamlessly between local and remote. Use the local model for cheap extraction tasks, then hand off to a larger cloud model only when reasoning quality matters. Mix strategies per task, per device, via a single API.
Every platform has a native AI path. macOS runs GGUF models locally or routes through Ollama. iOS uses Core ML and Android uses NNAPI — hardware-accelerated, zero-dependency, no model download required. Gemma 4 adds multimodal support: pass a screenshot directly to the model to describe, extract, or classify what's visible.
ai-status, ai-load, ai-unload, ai-infer, and ai-record are available via HTTP and MCP on every platform.
Browsers advertise _kelpie._tcp on the local network. The CLI finds every device — no IPs, no manual config, no pairing.
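A DNS-SD browse for `_kelpie._tcp` is just a multicast DNS PTR question. A minimal sketch of the wire-format query the CLI would send to the mDNS group (224.0.0.251:5353), using plain RFC 1035 encoding — transport and response parsing are omitted:

```python
import struct

def build_ptr_query(service: str, txn_id: int = 0) -> bytes:
    """Encode a DNS PTR question for an mDNS service browse (RFC 1035)."""
    # Header: id, flags=0, 1 question, no answer/authority/additional records.
    header = struct.pack("!6H", txn_id, 0, 1, 0, 0, 0)
    # QNAME: each dot-separated label is length-prefixed, terminated by 0x00.
    qname = b"".join(
        bytes([len(label)]) + label.encode()
        for label in service.strip(".").split(".")
    ) + b"\x00"
    # QTYPE=PTR (12), QCLASS=IN (1).
    return header + qname + struct.pack("!2H", 12, 1)

query = build_ptr_query("_kelpie._tcp.local.")
# Multicast `query` to 224.0.0.251:5353; each Kelpie device answers with
# its service instance, from which SRV/TXT records yield host and port.
```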
Navigate, click, and fill forms across an entire fleet simultaneously. One instruction, every device, in parallel — with per-device result aggregation.
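Fleet-wide fan-out with per-device aggregation can be pictured as a concurrent map over devices. A sketch using `asyncio` — `run_on_device` is a hypothetical stand-in for one MCP round-trip, not Kelpie's client API:

```python
import asyncio

async def run_on_device(device: str, command: str) -> tuple[str, str]:
    """Hypothetical stand-in for issuing one command to one device."""
    await asyncio.sleep(0)  # real version: an HTTP/MCP round-trip
    return device, f"ok: {command}"

async def run_on_fleet(devices: list[str], command: str) -> dict[str, str]:
    """Fan one instruction out to every device; aggregate results per device."""
    results = await asyncio.gather(
        *(run_on_device(d, command) for d in devices)
    )
    return dict(results)

fleet = ["iphone-15", "pixel-8", "macbook"]
print(asyncio.run(run_on_fleet(fleet, "navigate https://example.com")))
```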
Full Model Context Protocol in both the browser and CLI. Any MCP client connects directly — no translation layer, no proxy overhead.
Screenshots, DOM trees, console logs, network timelines — all via platform WebView APIs and CDP. No content scripts, no CSP conflicts.
findButton("Submit") returns only the devices where that button exists. Let the model decide what to do with the ones that don't.
Commands like scroll2 adapt to each device's viewport. One instruction works correctly across every screen size in a heterogeneous fleet.
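Viewport-relative scrolling can be pictured as a pure function from viewport height to pixel delta. The overlap heuristic below is an illustration of the idea, not Kelpie's actual formula:

```python
def scroll_delta(viewport_height: int, pages: float = 1.0,
                 overlap: float = 0.1) -> int:
    """Translate a device-independent 'scroll N pages' instruction into
    pixels, keeping a small overlap so no content is skipped between steps."""
    return round(viewport_height * pages * (1 - overlap))

# The same instruction yields a different pixel delta on each device.
phone_step  = scroll_delta(844)   # a phone-sized viewport
laptop_step = scroll_delta(1080)  # a laptop-sized viewport
```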
A self-hosted management layer for a WireGuard overlay network. Device enrollment, session brokering, STUN/TURN NAT traversal, and a clean admin UI — your private network, your infrastructure, no cloud required.
A voice-first personal AI agent that lives on your Mac. Talk to it, it talks back. Spawn sub-agents for deep research. Inject text into any active application. Ambient, always-on, always ready.
Install the CLI, launch Kelpie on a device, start issuing commands.