Your AI tools should
share a brain.
Claude doesn't know what you told ChatGPT. Copilot ignores what you researched in Perplexity. Sulci captures context from every AI interaction and makes it available everywhere.
sul·ci /ˈsʌlsaɪ/ — the folds and grooves of the cerebral cortex. Where memory lives.
Every conversation
starts from zero.
You use multiple AI tools every day. None of them talk to each other. Here's what a typical week looks like:
We're building Atlas, a customer analytics platform. We need PostgreSQL 16 with pgvector, FastAPI for the gateway, and RabbitMQ for messaging. Sarah is tech lead, Marcus handles DevOps.
Excellent choices! PostgreSQL 16 with pgvector gives you ACID transactions plus vector similarity search. FastAPI is perfect for async operations. Let me help you design the service boundaries...
Can you help me debug the auth service? Users are getting 429 errors.
I'd be happy to help! A few questions first — What framework is the auth service built with? What database are you using? What's the project called?
OK so the project is Atlas, it's a customer analytics platform, we use PostgreSQL 16, FastAPI for the gateway, JWT with 15-min access tokens...
What messaging system did we decide on for Atlas?
I don't have access to previous conversations. Could you remind me about the project and the messaging requirements?
You explained your project setup four times this week. Across four tools. Decisions scattered. Context evaporated.
But wasted time is just the start.
Without a shared context layer, three deeper problems emerge — and they compound over time.
Recall Drift
When you re-explain from memory, details go wrong. Small errors compound into bad decisions.
from memory
The consequence: Claude now debugs your auth service assuming 30-minute tokens and Redis queues. It suggests fixes for a system that doesn't exist. You spend 20 minutes on a dead-end before realizing the mismatch.
Contradictory Ground Truth
Decisions evolve, but each tool is frozen at the moment you last talked to it. There's no single source of truth.
Based on our discussion, your stack uses RabbitMQ for async messaging between services.
Got it — you're using Kafka for event streaming. I'd recommend KRaft mode for the new cluster.
The consequence: Two weeks later, a teammate asks ChatGPT to review the messaging architecture. ChatGPT confidently recommends RabbitMQ-specific patterns for a system that's now running Kafka. The PR gets merged. Production breaks.
Blind Spots
Each tool only sees its own conversations. It gives confident advice while missing critical context from other tools.
I'd recommend adding an in-memory cache layer with Redis and increasing your rate limits to 500 req/min. You could also batch database queries to reduce round trips.
What Claude doesn't know
The consequence: Claude's advice is technically correct in isolation — but wrong for your system. You follow it, introduce Redis, violate compliance rate limits, and waste two days before discovering the real bottleneck was JWT verification all along.
Without shared context, you get recall drift, contradictory ground truth, and blind spots, all compounding week after week.
Same week.
With Sulci.
Now every tool has access to everything you've discussed, decided, and preferred. Ask anything — Sulci finds the relevant context, regardless of where it originated.
Not a chatbot. Not another AI tool. An infrastructure layer that makes every AI tool you already use dramatically more effective by giving them shared, persistent memory.
“I'm building Atlas, a customer analytics platform. We use PostgreSQL 16, FastAPI, RabbitMQ for messaging. Sarah is tech lead...”
Repeated every. single. session.
“Help me optimize the RabbitMQ consumer throughput.”
The AI already knows Atlas, your stack, Sarah's role, and every decision you've made. You just build.
The sulci are the grooves and folds of the cerebral cortex — the structures that give the brain its capacity for memory. More folds, more surface area, more capability. Sulci adds folds to your AI.
Four stages.
One pipeline.
Capture
Conversations flow in via MCP server, API proxy, or REST API. Every interaction across every tool is recorded and queued for processing.
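In code, the capture stage could look something like this minimal sketch: a raw interaction is wrapped in a record and placed on a processing queue. The field names here are illustrative assumptions, not Sulci's actual ingestion schema.

```python
import queue
import time
import uuid

# In-memory stand-in for the real ingestion queue (hypothetical schema).
processing_queue: "queue.Queue[dict]" = queue.Queue()

def capture_interaction(source: str, prompt: str, response: str) -> dict:
    """Wrap one AI interaction in a record and enqueue it for extraction."""
    record = {
        "id": str(uuid.uuid4()),
        "source": source,            # e.g. "mcp", "api-proxy", "rest"
        "prompt": prompt,
        "response": response,
        "captured_at": time.time(),
    }
    processing_queue.put(record)
    return record

rec = capture_interaction("api-proxy", "What queue does Atlas use?", "RabbitMQ.")
```

Whatever the transport (MCP, proxy, or REST), everything funnels into the same queue, so the extraction stage never needs to know where a conversation came from.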
Extract
An LLM analyzes each conversation and extracts structured knowledge atoms — facts, decisions, preferences, entities, relationships, context, and instructions.
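A sketch of the extraction step's output handling, under the assumption that the LLM returns a JSON array of candidate atoms: malformed items and unknown types are dropped rather than stored. The type names come from the seven categories described below; the field layout is a guess.

```python
import json

# The seven knowledge-atom types Sulci describes.
ATOM_TYPES = {"fact", "decision", "preference", "entity",
              "relationship", "context", "instruction"}

def parse_atoms(llm_output: str) -> list[dict]:
    """Parse the extraction LLM's JSON output, keeping only well-formed atoms."""
    atoms = []
    for item in json.loads(llm_output):
        # Discard anything outside the known taxonomy or missing content.
        if item.get("type") in ATOM_TYPES and item.get("content"):
            atoms.append({
                "type": item["type"],
                "content": item["content"],
                "confidence": float(item.get("confidence", 0.5)),
            })
    return atoms

raw = ('[{"type": "decision", "content": "Use RabbitMQ for messaging", '
       '"confidence": 0.9}, {"type": "gossip", "content": "ignored"}]')
atoms = parse_atoms(raw)
```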
Store
Knowledge is dual-indexed: a relational database for structured queries and a vector store for semantic search. Both work together to find what matters.
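The dual index can be pictured with this toy sketch: SQLite stands in for the relational database and a plain dict stands in for the vector store, so a single write lands in both indexes. The real system would use a production database and a proper embedding index; this only shows the dual-write shape.

```python
import sqlite3

# Relational index (stand-in for the real database).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE atoms (id INTEGER PRIMARY KEY, type TEXT, content TEXT)")

# Vector index (stand-in for a real embedding store).
vector_index: dict[int, list[float]] = {}

def store_atom(atom_type: str, content: str, embedding: list[float]) -> int:
    """Write the atom to the relational index and its embedding to the vector index."""
    cur = conn.execute("INSERT INTO atoms (type, content) VALUES (?, ?)",
                       (atom_type, content))
    atom_id = cur.lastrowid
    vector_index[atom_id] = embedding
    return atom_id

atom_id = store_atom("fact", "PostgreSQL 16 is the primary database", [0.1, 0.7, 0.2])
row = conn.execute("SELECT type, content FROM atoms WHERE id = ?", (atom_id,)).fetchone()
```

Structured queries ("all decisions about messaging") hit the relational side; fuzzy queries ("anything about queues") hit the vector side, and both resolve to the same atom IDs.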
Serve
When you start a new conversation, relevant context flows out automatically — ranked by semantic similarity, freshness, confidence, and usage frequency.
Seven types of
structured knowledge.
Every piece of information extracted from your conversations is classified, structured, and scored. Not raw text. Structured knowledge that compounds over time.
Objective information discussed or established
"PostgreSQL 16 is the primary database with pgvector"
Choices made or conclusions reached
"Using SQLAlchemy 2.0 with async support instead of raw SQL"
Likes, dislikes, and style choices
"Prefers Python for backend, TypeScript for frontend"
People, projects, tools, or organizations
"Sarah Chen is the tech lead for architecture decisions"
Connections and dependencies between entities
"The frontend consumes the API gateway endpoints"
Situational information about current work
"Migrating from REST to GraphQL for analytics queries"
Guidelines, rules, and conventions to follow
"All API endpoints must include OpenAPI documentation"
Connects to
everything you use.
MCP Server
Claude Desktop & Claude Code
Native integration via Model Context Protocol. Claude can query your knowledge, store new information, and extract from conversations — all through natural language.
Available Tools
API Proxy
OpenAI & Anthropic APIs
Transparent proxy that intercepts API calls, injects relevant context into the system prompt, and captures interactions — zero code changes required.
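The injection step can be sketched as a pure function over the familiar chat-completion message list: retrieved atoms are folded into the system prompt before the call is forwarded. The message shape follows the common OpenAI-style convention; this is not a documented Sulci interface.

```python
def inject_context(messages: list[dict], context_atoms: list[str]) -> list[dict]:
    """Prepend retrieved context to the system prompt before forwarding the call."""
    context_block = "Relevant context from earlier sessions:\n" + \
        "\n".join(f"- {atom}" for atom in context_atoms)
    if messages and messages[0].get("role") == "system":
        # Extend the caller's existing system prompt rather than replacing it.
        head = dict(messages[0])
        head["content"] = head["content"] + "\n\n" + context_block
        return [head] + messages[1:]
    # No system prompt present: add one carrying only the context.
    return [{"role": "system", "content": context_block}] + messages

out = inject_context(
    [{"role": "user", "content": "Help me optimize the RabbitMQ consumer."}],
    ["Atlas uses RabbitMQ for messaging", "Sarah Chen is tech lead"],
)
```

The application never sees this step: it sends the same request it always did, and the proxy returns the provider's response unchanged.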
REST API
Any Application
Full HTTP API for knowledge CRUD, context queries, interaction ingestion, data export, and privacy controls.
Dashboard
Visual Management
React web UI for browsing knowledge, viewing entity relationship graphs, inspecting injection audit logs, and managing data exports.
Built on convictions,
not compromises.
Privacy-First
Your knowledge stays on your machine by default. No cloud dependency. No data leaving your control. Export or purge everything at any time.
Provider Agnostic
Sulci doesn't care which AI you prefer. It works across providers, across tools, across workflows. Your context belongs to you, not to a platform.
Intelligent Decay
Not all knowledge ages equally. Sulci understands relevance as a function of time, frequency, and confidence. Old decisions fade. Current context shines.
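"Relevance as a function of time, frequency, and confidence" could be expressed with a half-life model like this one, where frequently used knowledge decays more slowly. The formula and constants are illustrative, not Sulci's published scoring.

```python
import math

def decayed_relevance(confidence: float, age_days: float, uses: int,
                      half_life_days: float = 45.0) -> float:
    """Illustrative decay: confidence halves every effective half-life,
    and heavy usage stretches that half-life out."""
    effective_half_life = half_life_days * (1 + math.log1p(uses))
    return confidence * 0.5 ** (age_days / effective_half_life)

# A week-old, frequently-used decision stays bright...
fresh = decayed_relevance(confidence=0.9, age_days=7, uses=10)
# ...while a six-month-old, never-reused note fades.
stale = decayed_relevance(confidence=0.9, age_days=180, uses=0)
```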
Fully Transparent
Every piece of injected context is logged and auditable. You can see exactly what Sulci told your AI, when, and why. No black boxes.
Stop re-explaining yourself.
Sulci is currently in limited beta. Reach out to get early access and help shape the future of AI context.
hello@lopez.fi
Why I'm Building This
I spend my days moving between Claude, ChatGPT, and Copilot. The friction of re-establishing context became the bottleneck, not the AI itself. Sulci started as a personal tool to solve my own workflow pain. It became something bigger when I realized every developer, every team, every organization using multiple AI tools faces the same invisible tax.
Let's build something
worth talking about.
I take on a limited number of advisory and fractional engagements. Only projects where I can make a real difference. If you're navigating growth, AI, or revenue challenges in a technical B2B environment, let's talk.