Your AI's shadow memory
ShadowCTX captures the pages you read, the notes you take, and the decisions you make — then makes all of it queryable by any AI assistant.
The Problem
AI agents are context-starved.
They don't know what you were reading an hour ago, what decisions led here, or what that article said.
You've been copying and pasting context into every chat. That's the wrong approach.
Your AI should already know — because it was watching.
How It Works
Three steps. Zero cloud.
Browse & save
The Chrome extension captures any page with one click. Or use the CLI to save URLs from your terminal.
Pages are indexed
Content is extracted, embedded, and stored locally in SQLite. Your data never leaves your machine.
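The local store is just a SQLite file, so it can be inspected with ordinary tools. Below is a minimal sketch of the idea; the table and column names are illustrative assumptions, not ShadowCTX's actual schema.

```shell
# Toy model of a single-file page store. The schema here is hypothetical,
# for illustration only; ShadowCTX's real schema may differ.
DB="$(mktemp -d)/shadowctx-demo.db"
sqlite3 "$DB" <<'SQL'
CREATE TABLE pages (
  url      TEXT PRIMARY KEY,
  title    TEXT,
  body     TEXT,
  saved_at TEXT
);
INSERT INTO pages VALUES (
  'https://arxiv.org/abs/1706.03762',
  'Attention Is All You Need',
  'The dominant sequence transduction models...',
  '2024-01-01T12:00:00Z'
);
SQL
# One file, plain SQL, no server process.
TITLE="$(sqlite3 "$DB" "SELECT title FROM pages WHERE url LIKE '%1706.03762%';")"
echo "$TITLE"
```

Embeddings for semantic search would live alongside rows like these (for example in a separate vectors table), but the single-file property is what keeps your data local and portable.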
Ask any AI
Claude, Cursor, and any MCP-compatible assistant can now search your saved context with a natural-language query.
Three Surfaces
Use it however you work.
CLI
Terminal-first workflow
# Install globally
$ npm install -g shadowctx
# Save a page with tags
$ ctx save https://arxiv.org/abs/1706.03762 --tag ai --note "attention is all you need"
# Semantic search
$ ctx search "transformer architecture"
→ Attention Is All You Need — arxiv.org · 2h ago
MCP Server
Claude · Cursor · any MCP client
// claude_desktop_config.json
{
"mcpServers": {
"shadowctx": {
"command": "npx",
"args": ["-y", "@shadowctx/mcp"],
"env": { "SHADOWCTX_API_URL": "http://localhost:3000" }
}
}
}
Chrome Extension
One-click page capture while you browse
1. Build: cd packages/extension && pnpm build
2. Open chrome://extensions → Load unpacked → select packages/extension/dist
3. Click the extension icon on any page to save it to your local ShadowCTX instance.
Open Source · Self-hosted
Your data stays yours.
MIT license. No accounts. No telemetry. Runs entirely on your machine with Docker or bare Node.
MIT License
Use it, fork it, ship it.
SQLite storage
One file. No database server.
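Because the store is a single file, backup and sync are plain file operations. A sketch, assuming the default ./data/shadowctx.db path shown in the Docker section; the placeholder file below only exists so the snippet runs standalone.

```shell
# Single-file storage: backing up is one copy command.
# The touch below stands in for a real store so this sketch runs on its own.
mkdir -p data backups
[ -f data/shadowctx.db ] || touch data/shadowctx.db
BACKUP="backups/shadowctx-$(date +%Y%m%d).db"
cp data/shadowctx.db "$BACKUP"
echo "backed up to $BACKUP"
```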
Docker one-liner
Up in seconds.
$ docker compose up
# API available at http://localhost:3000
# Data persisted to ./data/shadowctx.db
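The repository ships its own compose file; the sketch below only illustrates the shape such a file takes, and the service name, build context, and volume mapping are assumptions rather than the project's actual configuration.

```yaml
services:
  shadowctx:
    build: .
    ports:
      - "3000:3000"      # matches the API URL used in the MCP config
    volumes:
      - ./data:/app/data # keeps shadowctx.db on the host
```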