Runs on your hardware. Not ours.

Your AI. Your Machine.
Your Rules.

We're just helping you feed it the data it's craving.

Mac, VPS, or Raspberry Pi. Free on unlimited gateways. A friend's onboarded in 5 minutes.

VPS / Linux / Raspberry Pi · layers on top of OpenClaw
1 curl -fsSL https://openclaw.ai/install.sh | bash
2 curl -fsSL https://carapace.info/install.sh | bash
Two commands · ~5 min · non-destructive on existing OpenClaw

Why CARAPACE

Not another chatbot wrapper.
An operating layer.

Headless Linux + Mac. Seamless.

Context tracking means you can switch between your VPS and your Mac and the AI doesn't get lost. It stays on task, tracks projects, remembers decisions. Work on your server, review on your laptop — same brain, zero re-explaining.

Runs on a VPS. Runs on a Pi. Runs on whatever you've got.

No monthly AI subscription. No enterprise per-seat tier. A VPS, a Raspberry Pi, an old Mac Mini in a closet — whatever hardware you already trust. A free Google API key gets you web search, browser control, code execution, cron jobs, and multi-agent orchestration.

Vision + Hearing — Your AI's Eyes and Ears

Point your phone at anything and talk. Tap to peel a subject off the scene as a focus sticker. Sweep an area in scan mode to give your AI a panorama. The mic is listening too — Apple's on-device classifier picks up music, water running, your dog barking, and feeds those signals into the conversation. All perception runs on the iPhone; only the labels reach your gateway.

Not a substitute for professional medical, legal, or financial advice. Always verify what the AI says before acting on it.

Works for Everyone

Power users get headless Linux on a VPS with SSH and cron jobs. Your parents get a Mac app with a guided setup wizard that installs everything automatically. Same platform, different entry points.

The Full OpenClaw Toolbox

This isn't a chat wrapper. It's a full agent runtime with real capabilities.

Web search
Browser control
Code execution
Cron jobs
Persistent memory
Multi-agent orchestration
Image generation
File management

Vision Mode

Point. Talk. Get an answer.
All on-device perception.

Camera + microphone + location, fused into one tap. Your iPhone does the seeing, hearing, and locating; only the distilled labels reach your gateway.

Point and ask

Tap the camera, speak naturally. The frame and the question ship together — no shutter, no upload, no waiting.

Tap-to-peel focus stickers

Tap a subject in frame to peel it off as a labeled sticker. Your AI sees the original scene plus a labeled cutout — "what's the dosage on this?" actually means this.

SCAN mode — 15-second sweep

Tap SCAN, sweep your phone across a fridge, a shelf, a workspace. Your AI gets a temporal contact-sheet of the area — perfect for inventory questions, "what's missing?", or panoramas.

Ambient hearing

Apple's on-device sound classifier feeds music, water running, dog barking, keyboard typing — ~300 categories — into the conversation. Audio never leaves the phone; only the labels do.

Location-aware

A reverse-geocoded place name ("San Clemente, CA") rides with every turn so questions like "where am I" or "what's the nearest…" answer themselves without an extra round-trip.

Hey Claw + cross-mode memory

Wake-word listening means you can leave the camera open without holding a button. Whatever you say in vision, voice, or chat threads into one shared session — no context loss when you switch surfaces.

On-device perception. Frames stay on your iPhone — only distilled labels reach your gateway.

Cognition

A brain, not a history buffer.
Modeled on actual brain regions.

Most "AI memory" is a vector database of past chats with cosine similarity slapped on top. Yours runs deeper — a layered cognitive model that takes what your iPhone perceives, files it the way a brain would, and surfaces the right context the moment it's relevant.

🧠

Hippocampus

Episodic memory of specific moments — the wine you photographed in Sonoma, the rash you asked about Tuesday morning. Retrieved by time, place, or topic.

🗺️

Place Cells & Grid Cells

Learns the shape of rooms, blocks, and routes you visit. Knows the difference between "your kitchen" and "a kitchen" without being told.

❤️

Amygdala

Tags emotional weight. Notices when you correct an answer or push back on a take, and quietly weights future recall so the same mistake doesn't repeat.

🌙

Default Mode Network

Consolidates raw experience into stable schemas in the background — like sleep does for you. The bridge gets quietly smarter between turns, not just during them.

👂

Auditory Cortex

Every chat and voice turn flows in here. Not just the message — the cadence, the corrections, the things you didn't have to say twice.

Thalamus

Per-turn relay. Pulls only the memories that actually matter for what you just asked, packs them into the prompt, and gets out of the way. Nothing irrelevant ever reaches your AI provider.

What this gets you

Context that compounds

Ask "where did I see that?" three weeks later and get an actual answer. The bridge gets sharper at recognizing your patterns the longer you run it.

Grounded answers

Replies are anchored in where you were, what you saw, and what you've cared about — not just the last few messages in the buffer.

Privacy by architecture

Every memory lives on your bridge. There's no "shared brain" across users, no cloud index, nothing for us to subpoena or accidentally leak. Single-tenant by construction.

Built and refined over thousands of real-world turns. Toggle in Settings → Rumination on iPhone — off the moment you want vanilla AI back.

Install

Pick your device.
One brain across all of them.

Each guide is a 3-to-5 step walkthrough. Send the link to a friend — they'll handle the rest.

🖥️

macOS

Ventura and later

  • Menu bar app — always one click away
  • Guided setup wizard — installs OpenClaw + Tailscale
  • Tailscale integration for secure remote access
  • Project tracking, agent grid, cron monitoring
  • Free — unlimited devices, no login
Download DMG — v1.9.37 · Full install guide →
🐧

Linux

Headless / VPS

  • One-liner install — paste and go
  • Runs on any VPS — or a Raspberry Pi
  • Always-on headless daemon
  • Free — no license required for headless
Two commands — Carapace layers on top of OpenClaw:
1 curl -fsSL https://openclaw.ai/install.sh | bash
2 curl -fsSL https://carapace.info/install.sh | bash
Non-destructive on existing OpenClaw setups — your chats, keys, and identity files are preserved.
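Belt-and-suspenders types can snapshot their existing state before layering anything on top. A minimal sketch, assuming OpenClaw keeps its data under `~/.openclaw` (an assumption — check where your install actually stores its state and adjust the path):

```shell
# Optional: snapshot existing OpenClaw state before installing Carapace.
# NOTE: ~/.openclaw is an assumed state directory -- adjust to your install.
STATE_DIR="$HOME/.openclaw"
if [ -d "$STATE_DIR" ]; then
  # Archive the whole state directory, dated, into your home folder.
  tar -czf "$HOME/openclaw-backup-$(date +%Y%m%d).tar.gz" -C "$HOME" .openclaw
  echo "backup written to $HOME"
else
  echo "no OpenClaw state at $STATE_DIR; nothing to back up"
fi
```

Restoring is the reverse: `tar -xzf` the archive from your home directory.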
Full install guide →
📱

iOS

iPhone companion

  • Chat, voice, and photo input
  • Live camera vision — AI sees what you see
  • CarPlay + Bluetooth device support
  • Control any gateway from anywhere
App Store — Free · Full install guide →

What's under the shell

Not just a chat wrapper.

Real project tracking. Live agent orchestration. Status, in your pocket.

CARAPACE — Projects view · Force-press to dive in

Project Board

Every project your agents are touching, with live progress and per-workstream status. Force-press any project to drill into workstreams — granular progress, active blockers, who touched what last.

CARAPACE — Agents view

Spinal Map

Live view of your running agents — main orchestrator, subagents, what they're chewing on. When something breaks, you see it. When something finishes, you know.

CARAPACE — Scheduled Jobs view · Force-press to open payload

Cron Tracker

Every scheduled job in one list — next-run ETAs, last-run results, RED / YELLOW / GREEN health at a glance. Force-press a job to open its payload, edit the schedule, or fire it now.

Coming soon · Limited beta

Your AI.
On your face.

Meta Ray-Ban glasses + CARAPACE. Hands-free voice, live vision, every direction you look — all routed through your own gateway.

Targeting later this year: the first 500 CARAPACE supporters on Power or Corporate will be first in line for TestFlight access — no waitlist, no extra cost. Glasses support depends on Meta opening third-party SDK access for Ray-Ban; we have no partnership with Meta and can't guarantee that SDK opens on any specific timeline. The Power / Corporate IAPs stand on their own (gateway pairing + device cap) whether or not glasses support ships.

Lock your beta spot
Unlock Power — $7.99
Already on Power or Corporate? You're in.

What you can do with it

Real use cases.
Not demos.

Voice + camera + your own AI's memory. Here's what people actually do with it once it's set up.

Kitchen

"What can I make for dinner?"

Open the fridge, ask. Carapace scans what's there, cross-references your pantry log, and names three things you can cook right now — not 40-minute recipes that assume you have saffron.

Vision Memory Voice
Thrifting

Good-Will Hunting, priced in real time

Walk the thrift-store aisle with Carapace live. Point at anything interesting — instant eBay sold-listing average, Mercari range, resale spread. "That midcentury lamp? $12 here, $180 sold last week."

Live vision Web search
Business

Inventory that updates itself

Walk your shop or warehouse naming items out loud. Your Mac keeps the spreadsheet live, flags low stock, and drafts reorder emails when a SKU dips below threshold. No clipboards.

Voice Long memory Agents
Everyday

"What am I looking at?"

Landmark in a new city. Plant at the nursery. Error code on a dishwasher. Mushroom in the woods. Engine noise you can hear but can't describe. Point, ask, get a real answer with sources.

Vision Audio
DIY

Home-repair co-pilot

Point at the leak. Carapace walks you through the fix while watching your progress. It remembers your house — "that's the same water heater we looked at in March; the pilot light's on the lower-left panel."

Vision Long memory Guided voice
Travel

Language in the wild

Menu abroad, product label, traffic sign. Translation with context: "this is a regional dialect for…," "this dish is usually served…," "this warning means stop within 50m." Context that flat Google Translate can't give you.

Vision Voice

Six examples. Infinite more. Your AI knows what you look at, what you said yesterday, and what's in your tools. Build whatever routine fits your life — it's already in the box.

Pricing

Mac and Linux: always free. iPhone scales with you.

Run CARAPACE on as many Macs, VPSs, and Raspberry Pis as you want — zero cost. The iOS tiers below unlock how many of those gateways a single iPhone can connect to. One-time purchase, no subscriptions.

Solo

Free forever

$0

iPhone pairs with 1 gateway. Mac & Linux always unlimited.

All apps, all features
Voice, vision, chat
Your own hardware
Get iPhone app — Free

Power

Heavy homelab

$7.99

iPhone pairs with up to 10 gateways.

Everything in Solo
Pair iPhone with 10 gateways
VPS fleet, Raspberry Pis
One-time, forever
Meta Ray-Ban beta — first 500, later this year
In-app purchase — $7.99

Corporate

Small team

$24.99

iPhone pairs with up to 50 gateways.

Everything in Power
Pair iPhone with 50 gateways
Dev, QA, production fleets
Still no subscription
Meta Ray-Ban beta — first 500, later this year
In-app purchase — $24.99
Mac & Linux apps are free. Tiers are iPhone-side in-app purchases.
CARAPACE

Your turn.
Five minutes, start to finish.

Pick your flavor. Each install guide is a short picture-by-picture walkthrough you can send to a non-technical friend.

Linux / VPS / Raspberry Pi — Carapace layers on top of OpenClaw
1 curl -fsSL https://openclaw.ai/install.sh | bash
2 curl -fsSL https://carapace.info/install.sh | bash
Two commands · ~5 min · safe to layer onto an existing OpenClaw
Encrypted transport (WireGuard) · No CARAPACE cloud · Your data stays on your hardware