Repeated context is expensive
Large prompts, full repositories, tool schemas, and growing chat history consume the context window before the agent can make real progress.
OCP AI is an experimental open context layer for AI agents. It helps agents retrieve only the context they need instead of repeatedly loading entire codebases into prompts.
Most agent workflows repeatedly rediscover the same project knowledge. They scan files again, ask for rules again, and lose useful state when the session ends.
Context should be scoped by project, user, task, and team. That makes retrieval safer, easier to audit, and easier to reuse.
Teams should not have to recreate the same memory layer separately for every IDE, model, agent, or workflow tool.
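One way to picture that scoping is a key attached to every stored context block. The sketch below is illustrative only: the field names and the wildcard-matching rule are assumptions, not the OCP schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class ContextScope:
    """Illustrative scope key attached to every stored context block."""
    project: str
    team: Optional[str] = None
    user: Optional[str] = None
    task: Optional[str] = None

    def permits(self, query: "ContextScope") -> bool:
        """A block is visible to a query when every field the block sets
        matches the query; unset fields on the block act as wildcards."""
        for field in ("project", "team", "user", "task"):
            block_value = getattr(self, field)
            if block_value is not None and block_value != getattr(query, field):
                return False
        return True

# A team-wide block is visible to any query from that project and team,
# but not to queries from a different project:
block = ContextScope(project="billing", team="payments")
query = ContextScope(project="billing", team="payments", user="dana", task="refund-bug")
assert block.permits(query)
assert not block.permits(ContextScope(project="search", team="payments"))
```

Scoping retrieval through a key like this is also what makes access auditable: every lookup carries the project, user, task, and team it was made for.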
Instead of sending full repositories into every prompt, OCP lets agents retrieve only the relevant context for each task.
The first release direction is simple: index project knowledge, chunk it by meaning, retrieve what matters, and keep useful state available for future agent steps.
- Detect file and document changes without repeatedly reprocessing everything.
- Understand code structure at the level of symbols, modules, and useful boundaries.
- Create small context blocks that can be reused across agent steps.
- Represent meaning so agents can search by task intent, not just keywords.
- Keep vectors, metadata, permissions, and freshness state together.
- Return the smallest useful context for the current question.
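The change-detection and chunking steps above can be sketched with a content-hash guard. Everything here is an illustrative stand-in, not the OCP implementation: blank-line splitting replaces meaning-aware chunking, and the record shape is assumed.

```python
import hashlib

def file_digest(text: str) -> str:
    """Content hash used to skip files that have not changed."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def chunk_by_blank_lines(text: str, max_chars: int = 400) -> list[str]:
    """Stand-in for meaning-aware chunking: split on blank lines, cap size."""
    blocks, current = [], []
    for line in text.splitlines():
        if not line.strip() and current:
            blocks.append("\n".join(current))
            current = []
        elif line.strip():
            current.append(line)
    if current:
        blocks.append("\n".join(current))
    return [b[:max_chars] for b in blocks]

def reindex(files: dict, index: dict) -> dict:
    """Reprocess only files whose digest changed; chunk records keep the
    text and its metadata together so later retrieval can filter on them."""
    for path, text in files.items():
        digest = file_digest(text)
        entry = index.get(path)
        if entry and entry["digest"] == digest:
            continue  # unchanged file: skip chunking entirely
        index[path] = {
            "digest": digest,
            "chunks": [{"source": path, "text": c}
                       for c in chunk_by_blank_lines(text)],
        }
    return index
```

Re-running `reindex` over an unchanged tree is then a cheap pass of hash comparisons, which is what lets the indexing cost be paid once and reused.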
These are representative estimates for explaining the economics. Final benchmarks will be published with the self-hosted reference implementation.
- Agents retrieve focused context instead of receiving the full codebase at every step.
- Reducing repeated input leaves more room for multi-step planning and execution.
- The indexing cost is amortised across many future sessions and agents.
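A minimal sketch of that budgeted retrieval: rank chunks against the query and stop at a token budget, so the agent gets the smallest useful context. Lexical overlap here is only a stand-in for the semantic search described above, and the function names are illustrative.

```python
def score(chunk: str, query: str) -> float:
    """Lexical overlap as a stand-in for embedding similarity:
    the fraction of query terms that appear in the chunk."""
    q = set(query.lower().split())
    c = set(chunk.lower().split())
    return len(q & c) / len(q) if q else 0.0

def retrieve(chunks: list[str], query: str, token_budget: int = 300) -> list[str]:
    """Return the highest-scoring chunks that fit the budget, so the agent
    receives focused context instead of the whole corpus."""
    ranked = sorted(chunks, key=lambda c: score(c, query), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        cost = len(chunk.split())  # crude per-chunk token estimate
        if score(chunk, query) == 0 or used + cost > token_budget:
            continue  # irrelevant, or would blow the budget
        picked.append(chunk)
        used += cost
    return picked

chunks = [
    "payment retries use exponential backoff",
    "the logo is blue",
]
print(retrieve(chunks, "how do payment retries work"))
```

The budget cap is the important part: it bounds the per-step context cost regardless of repository size, which is what drives the savings in the table below.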
A medium codebase, one agent step. Output tokens not included; costs assume $3 per million input tokens.
| Component | Without OCP | With OCP | Notes |
|---|---|---|---|
| Codebase dump | 65,000 | 0 | Replaced by retrieved chunks |
| Retrieved chunks | 0 | ~1,800 | Fetched on demand per step |
| Tool schemas | 12,000 | 600 | Only called tools included |
| System prompt | 2,000 | 400 | No code context needed in prompt |
| Session summary | 0 | 300 | Replaces growing history |
| Conversation history | 3,000 | 600 | Offloaded to session.save |
| Total per step | 82,000 | 3,700 | −95.5% |
| Cost per 100 steps | $24.60 | $1.11 | $23.49 saved |
| Steps before 200k limit | 2 | 54 | 27× more |
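The derived rows follow directly from the per-step totals, under the $3-per-million-input-tokens price the table implies. A quick check of the arithmetic:

```python
PRICE_PER_TOKEN = 3 / 1_000_000   # assumed $3 per million input tokens
CONTEXT_LIMIT = 200_000           # model context window in tokens

def economics(tokens_per_step: int, steps: int = 100) -> tuple[float, int]:
    """Cost for `steps` agent steps, and how many whole steps fit in the
    context limit at this per-step token count."""
    cost = round(tokens_per_step * steps * PRICE_PER_TOKEN, 2)
    steps_before_limit = CONTEXT_LIMIT // tokens_per_step
    return cost, steps_before_limit

print(economics(82_000))  # without OCP
print(economics(3_700))   # with OCP
```

These are the table's $24.60 versus $1.11 per 100 steps, and 2 versus 54 steps before hitting a 200k-token limit.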
OCP AI is intended to be a practical open context layer for agent workflows, not a closed memory silo.
- A documented protocol surface that can be implemented by different tools and teams.
- A self-hosted version first, so teams can inspect, run, and adapt it.
- Designed to work across hosted models, local models, IDEs, and MCP-compatible clients.
The planned paid version can add AI governance assistance: continuous checks, EU AI Act risk flags, documentation support, and alerts for compliance risks across regions.
- Minimal, vendor-neutral context protocol direction.
- Free self-hosted reference implementation.
- MCP-compatible IDE and agent workflow support.
- Paid governance assistance to flag EU AI Act and regional compliance risks before they become costly.