RAGFlow is a leading open-source Retrieval-Augmented Generation (RAG) engine that fuses cutting-edge RAG with Agent capabilities to create a superior context layer for LLMs
Updated May 6, 2026 - Python
Enterprise-grade (40M+ LOC) codebase intelligence: zero-setup, local, and private, as a Plugin/Skill/Extension or via MCP. Hybrid semantic search, polyglot dependency graphs, symbol-level impact analysis and call flow, interactive HTML viewer, cross-project and branch-aware search, DB/API/infra knowledge. 61% fewer tokens, 84% fewer calls, 37x faster. Cloud in beta.
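Several of these engines advertise "hybrid semantic search," which typically means blending a lexical (keyword) score with an embedding-similarity score. A minimal sketch of that blend, with a token-overlap score standing in for BM25 and hand-supplied vectors standing in for real embeddings (none of this reflects any listed repo's actual implementation):

```python
import math
from collections import Counter

def keyword_score(query: str, doc: str) -> float:
    """Overlap-based lexical score (a crude stand-in for BM25)."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum(min(q[t], d[t]) for t in q) / max(len(query.split()), 1)

def cosine(a, b) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query, doc, q_vec, d_vec, alpha=0.5) -> float:
    """Blend lexical and semantic relevance; alpha weights the lexical side."""
    return alpha * keyword_score(query, doc) + (1 - alpha) * cosine(q_vec, d_vec)
```

In practice the two score lists are produced by separate indexes and fused (e.g. by weighted sum or reciprocal rank fusion) before reranking.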
Context-Engine MCP - Agentic Context Compression Suite
Meteor extracts structured context from across your systems and delivers it to power your organization's context graph and AI agents.
Open-source context infrastructure for AI agents. Auto-captures and shares your agents' context everywhere.
A 100% Rust implementation of code GraphRAG with blazing-fast AST+FastML parsing, a SurrealDB backend, and advanced agentic code-analysis tools exposed via MCP for efficient code-agent context management.
Compass is a context engine that builds a knowledge graph of your organization's metadata, capturing entities, relationships, and lineage across systems and time, making it discoverable and queryable for both humans and AI agents.
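A metadata knowledge graph like the one described boils down to entities connected by typed relationships, with lineage answered as a graph traversal. A hypothetical minimal sketch (this is not Compass's actual data model; the class and method names are invented for illustration):

```python
from collections import defaultdict

class MetadataGraph:
    """Toy entity/relationship store with downstream-lineage traversal."""

    def __init__(self):
        self.edges = defaultdict(set)  # src entity -> {(relation, dst entity)}

    def relate(self, src: str, rel: str, dst: str) -> None:
        """Record a typed relationship between two entities."""
        self.edges[src].add((rel, dst))

    def lineage(self, entity: str) -> set:
        """Return every entity reachable downstream of `entity`."""
        seen, stack = set(), [entity]
        while stack:
            node = stack.pop()
            for _, dst in self.edges.get(node, ()):
                if dst not in seen:
                    seen.add(dst)
                    stack.append(dst)
        return seen
```

A production system would also index edges by relation type and timestamp so lineage can be queried "across systems and time," as the description puts it.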
Local-native Context Engine MCP server for codebase indexing, semantic retrieval, planning, and review workflows.
MCP server that provides code context and analysis for AI assistants. Extracts directory structure and code symbols using WebAssembly Tree-sitter parsers with zero native dependencies.
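Symbol extraction of this kind means parsing source into a syntax tree and listing the named declarations. As a sketch of the idea, here is the same operation using Python's standard-library `ast` module as a stand-in for Tree-sitter (the actual server parses many languages via WebAssembly grammars; this only handles Python):

```python
import ast

def extract_symbols(source: str):
    """Return (kind, name) pairs for top-level functions and classes."""
    tree = ast.parse(source)
    return [
        (type(node).__name__, node.name)
        for node in tree.body
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef))
    ]
```

Tree-sitter generalizes this: one query API over uniform concrete syntax trees, regardless of the source language.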
Database Freedom Platform - Mathematical search optimization for whatever database you already have. 27,000x faster than vector databases with SMT-powered search across 8+ database types. One-time 9-2999 vs 00-500/month recurring.
Universal Memory & Context Engine (MCP Server) to give Long-Term Memory to AI Agents.
Give one OpenClaw session durable memory, topic-aware continuity, and bounded token growth.
Persistent local memory engine for OpenClaw that replaces the default memory system with a full context lifecycle: hybrid vector recall, automatic compaction, and domain-adaptive gating over LibraVDB.
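"Bounded token growth" and "automatic compaction" usually mean keeping conversation history under a token budget by retaining recent turns and evicting (or summarizing) older ones. A minimal sketch under that assumption, with a whitespace word count standing in for a real tokenizer (not the actual algorithm of either OpenClaw plugin listed here):

```python
def compact(history, budget, count_tokens=lambda s: len(s.split())):
    """Keep the most recent messages whose combined token count fits `budget`.

    Older messages are simply dropped; a real engine would summarize
    them into a compressed memory entry instead of discarding them.
    """
    kept, total = [], 0
    for msg in reversed(history):        # walk newest-first
        cost = count_tokens(msg)
        if total + cost > budget:
            break                        # budget exhausted: evict the rest
        kept.append(msg)
        total += cost
    return list(reversed(kept))          # restore chronological order
```

The budget check is what makes token growth bounded: no matter how long the session runs, the context handed to the model never exceeds `budget` tokens.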
A production-ready TypeScript MCP server that provides comprehensive project analysis, intelligent code search, dependency tracking, and coordinated multi-file editing capabilities.
Self-hosted adaptive code search: micro-chunk precision, hybrid rerank, VS Code/MCP-ready
The Physics Engine for AI Coding Agents.
Your operating system for thinking: a context and decision engine that filters signals and tells you what actually matters.
A tool for experimenting with context (aka prompts) for LLMs. Provides context template editor, websocket streaming API, drafting, testing, and analytics.
Autonomous context engine plugin for OpenClaw — pre-search memory retrieval with salience scoring, QMD integration, and cross-agent memory support
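"Salience scoring" for memory retrieval typically blends how relevant a memory is to the current query with how recently and how often it has been used. A hypothetical scoring function along those lines (the weights, half-life, and saturation curve are illustrative assumptions, not this plugin's actual formula):

```python
import math

def salience(relevance, age_seconds, use_count,
             half_life=86_400.0, w_rel=0.6, w_rec=0.3, w_freq=0.1):
    """Score a memory in [0, 1] from relevance, recency, and usage.

    - relevance: query-similarity in [0, 1]
    - recency: exponential decay with the given half-life (default 1 day)
    - frequency: saturating curve so heavy reuse doesn't dominate
    """
    recency = math.exp(-age_seconds * math.log(2) / half_life)
    frequency = use_count / (use_count + 5)
    return w_rel * relevance + w_rec * recency + w_freq * frequency
```

Retrieval then returns the top-k memories by this score before the agent searches anywhere else, which is what "pre-search memory retrieval" suggests.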