topoteretes / cognee
Knowledge Engine for AI Agent Memory in 6 lines of code
Cognee - Build AI memory with a Knowledge Engine that learns
<p align="center"> <a href="https://www.youtube.com/watch?v=1bezuvLwJmw&t=2s">Demo</a> · <a href="https://docs.cognee.ai/">Docs</a> · <a href="https://cognee.ai">Learn More</a> · <a href="https://discord.gg/NQPKmU5CCg">Join Discord</a> · <a href="https://www.reddit.com/r/AIMemory/">Join r/AIMemory</a> · <a href="https://github.com/topoteretes/cognee-community">Community Plugins & Add-ons</a> </p>
<a href="https://github.com/sponsors/topoteretes"><img src="https://img.shields.io/badge/Sponsor-❤️-ff69b4.svg" alt="Sponsor"></a>
Use our knowledge engine to build personalized and dynamic memory for AI Agents.
<p align="center"> 🌐 Available Languages: <!-- Keep these links. Translations will automatically update with the README. --> <a href="https://www.readme-i18n.com/topoteretes/cognee?lang=de">Deutsch</a> | <a href="https://www.readme-i18n.com/topoteretes/cognee?lang=es">Español</a> | <a href="https://www.readme-i18n.com/topoteretes/cognee?lang=fr">Français</a> | <a href="https://www.readme-i18n.com/topoteretes/cognee?lang=ja">日本語</a> | <a href="README_ko.md">한국어</a> | <a href="https://www.readme-i18n.com/topoteretes/cognee?lang=pt">Português</a> | <a href="https://www.readme-i18n.com/topoteretes/cognee?lang=ru">Русский</a> | <a href="https://www.readme-i18n.com/topoteretes/cognee?lang=zh">中文</a> </p> <div style="text-align: center"> <img src="https://raw.githubusercontent.com/topoteretes/cognee/refs/heads/main/assets/cognee_benefits.png" alt="Why cognee?" width="50%" /> </div> </div>
About Cognee
Cognee is an open-source knowledge engine that transforms your raw data into persistent, dynamic AI memory for Agents. It combines vector search, graph databases, and self-improvement to make your documents both searchable by meaning and connected by relationships as they change and evolve.
Cognee ships with default knowledge creation and search, which we describe below, but you can also build your own modular knowledge blocks.
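To make the vector-plus-graph idea concrete, here is a deliberately tiny, self-contained sketch. This is not Cognee's actual implementation: the hand-made embeddings, the `graph` dict, and the `search` helper are all invented for illustration. Vector similarity finds the closest document; graph edges then pull in related ones.

```python
import math

# Toy "memory": each document gets a hand-made 2-D embedding plus edges in a
# relationship graph. Cognee's real engine uses actual vector stores and graph
# databases; this only illustrates combining the two retrieval steps.
embeddings = {
    "doc_a": (1.0, 0.1),
    "doc_b": (0.9, 0.2),
    "doc_c": (0.0, 1.0),
}
graph = {"doc_a": ["doc_c"], "doc_b": [], "doc_c": []}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def search(query_vec, top_k=1):
    # 1) Vector step: rank documents by semantic similarity to the query.
    ranked = sorted(embeddings, key=lambda d: cosine(embeddings[d], query_vec), reverse=True)
    hits = ranked[:top_k]
    # 2) Graph step: expand each hit with its related neighbors.
    expanded = []
    for doc in hits:
        expanded.append(doc)
        expanded.extend(graph[doc])
    return expanded

print(search((1.0, 0.0)))  # doc_a is the closest match; doc_c is pulled in via the graph
```

The graph step is what plain vector search misses: `doc_c` is semantically far from the query but still surfaces because a relationship connects it to the top hit.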
:star: Help us reach more developers and grow the cognee community. Star this repo!
Cognee Open Source:
- Interconnects any type of data — including past conversations, files, images, and audio transcriptions
- Replaces traditional database lookups with a unified knowledge engine built with graphs and vectors
- Reduces developer effort and infrastructure cost while improving quality and precision
- Provides Pythonic data pipelines for ingestion from 30+ data sources
- Offers high customizability through user-defined tasks, modular pipelines, and built-in search endpoints
Basic Usage & Feature Guide
To learn more, check out this short, end-to-end Colab walkthrough of Cognee's core features.
Quickstart
Let’s try Cognee in just a few lines of code. For detailed setup and configuration, see the Cognee Docs.
Prerequisites
- Python 3.10 to 3.13
Step 1: Install Cognee
You can install Cognee with pip, poetry, uv, or your preferred Python package manager.
uv pip install cognee
Step 2: Configure the LLM
import os
os.environ["LLM_API_KEY"] = "YOUR_OPENAI_API_KEY"
Alternatively, create a .env file using our template.
To integrate other LLM providers, see our LLM Provider Documentation.
Step 3: Run the Pipeline
Cognee will ingest your documents, generate a knowledge graph from them, and then answer queries by combining semantic search with the graph's relationships.
Now, run a minimal pipeline:
import asyncio
from pprint import pprint

import cognee

async def main():
    # Add text to cognee
    await cognee.add("Cognee turns documents into AI memory.")
    # Generate the knowledge graph
    await cognee.cognify()
    # Add memory algorithms to the graph
    await cognee.memify()
    # Query the knowledge graph
    results = await cognee.search("What does Cognee do?")
    # Display the results
    for result in results:
        pprint(result)

if __name__ == "__main__":
    asyncio.run(main())
As you can see, the output is generated from the document we previously stored in Cognee:
Cognee turns documents into AI memory.
Use the Cognee CLI
As an alternative, you can get started with these essential commands:
cognee-cli add "Cognee turns documents into AI memory."
cognee-cli cognify
cognee-cli search "What does Cognee do?"
cognee-cli delete --all
To open the local UI, run:
cognee-cli -ui
Demos & Examples
See Cognee in action:
Persistent Agent Memory
Cognee Memory for LangGraph Agents
Simple GraphRAG
Cognee with Ollama
Community & Support
Contributing
We welcome contributions from the community! Your input helps make Cognee better for everyone. See CONTRIBUTING.md to get started.
Code of Conduct
We're committed to fostering an inclusive and respectful community. Read our Code of Conduct for guidelines.
Research & Citation
We recently published a research paper on optimizing knowledge graphs for LLM reasoning:
@misc{markovic2025optimizinginterfaceknowledgegraphs,
      title={Optimizing the Interface Between Knowledge Graphs and LLMs for Complex Reasoning},
      author={Vasilije Markovic and Lazar Obradovic and Laszlo Hajdu and Jovan Pavlovic},
      year={2025},
      eprint={2505.24478},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.24478},
}