
langchain-ai / deepagents

Deep Agents is an agent harness built on LangChain and LangGraph. Deep Agents are equipped with a planning tool, a filesystem backend, and the ability to spawn sub-agents, making them well-suited to complex agentic tasks.

9,468 stars
1,515 forks
198 issues
Languages: Python, Makefile, Shell

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing langchain-ai/deepagents in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind-ai.vercel.app/repo/langchain-ai/deepagents)

Repository Summary (README)

Preview
<div align="center"> <a href="https://docs.langchain.com/oss/python/deepagents/overview#deep-agents-overview"> <picture> <source media="(prefers-color-scheme: light)" srcset=".github/images/logo-dark.svg"> <source media="(prefers-color-scheme: dark)" srcset=".github/images/logo-light.svg"> <img alt="Deep Agents Logo" src=".github/images/logo-dark.svg" width="80%"> </picture> </a> </div> <div align="center"> <h3>The batteries-included agent harness.</h3> </div> <div align="center"> <a href="https://opensource.org/licenses/MIT" target="_blank"><img src="https://img.shields.io/pypi/l/deepagents" alt="PyPI - License"></a> <a href="https://pypistats.org/packages/deepagents" target="_blank"><img src="https://img.shields.io/pepy/dt/deepagents" alt="PyPI - Downloads"></a> <a href="https://pypi.org/project/deepagents/#history" target="_blank"><img src="https://img.shields.io/pypi/v/deepagents?label=%20" alt="Version"></a> <a href="https://x.com/langchain" target="_blank"><img src="https://img.shields.io/twitter/url/https/twitter.com/langchain.svg?style=social&label=Follow%20%40LangChain" alt="Twitter / X"></a> </div> <br>

Deep Agents is an agent harness: an opinionated, ready-to-run agent out of the box. Instead of wiring up prompts, tools, and context management yourself, you get a working agent immediately and customize what you need.

What's included:

  • Planning — write_todos for task breakdown and progress tracking
  • Filesystem — read_file, write_file, edit_file, ls, glob, grep for reading and writing context
  • Shell access — execute for running commands (with sandboxing)
  • Sub-agents — task for delegating work with isolated context windows
  • Smart defaults — Prompts that teach the model how to use these tools effectively
  • Context management — Auto-summarization when conversations get long, large outputs saved to files
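The last bullet, saving large tool outputs to files, can be sketched with a small stdlib-only helper. The function name, threshold, and file path here are illustrative, not deepagents' actual API:

```python
import tempfile
from pathlib import Path

# Illustrative sketch of the "large outputs saved to files" idea: if a tool
# result is too big to keep inline in the conversation, persist it to the
# agent's filesystem and return a short pointer instead.
MAX_INLINE_CHARS = 2000  # hypothetical threshold

def compact_tool_output(output: str, workdir: Path) -> str:
    """Return the output inline if small, else save it and return a pointer."""
    if len(output) <= MAX_INLINE_CHARS:
        return output
    path = workdir / "tool_output.txt"
    path.write_text(output)
    preview = output[:200]
    return f"Output too large ({len(output)} chars); saved to {path}. Preview:\n{preview}"

workdir = Path(tempfile.mkdtemp())
print(compact_tool_output("short result", workdir))     # returned inline
print(compact_tool_output("x" * 10_000, workdir)[:30])  # pointer to file
```

The agent can later read_file the saved output selectively, keeping the conversation window small.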

> [!NOTE]
> Looking for the JS/TS library? Check out deepagents.js.

Quickstart

```shell
pip install deepagents
# or
uv add deepagents
```

```python
from deepagents import create_deep_agent

agent = create_deep_agent()
result = agent.invoke({"messages": [{"role": "user", "content": "Research LangGraph and write a summary"}]})
```

The agent can plan, read/write files, and manage its own context. Add tools, customize prompts, or swap models as needed.
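The invoke() result above is a LangGraph-style dict with a "messages" list, so the final answer is the content of the last message. A shape-only sketch using plain dicts (real runs return LangChain message objects, accessed via .content rather than subscripting):

```python
# Plain-dict stand-in for an invoke() result; the assistant text is illustrative.
result = {
    "messages": [
        {"role": "user", "content": "Research LangGraph and write a summary"},
        {"role": "assistant", "content": "LangGraph is a low-level agent runtime..."},
    ]
}

def final_text(result: dict) -> str:
    """Pull the content of the last message out of an invoke() result."""
    return result["messages"][-1]["content"]

print(final_text(result))
```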

Customization

Add your own tools, swap models, customize prompts, configure sub-agents, and more. See the documentation for full details.

```python
from deepagents import create_deep_agent
from langchain.chat_models import init_chat_model

agent = create_deep_agent(
    model=init_chat_model("openai:gpt-4o"),
    tools=[my_custom_tool],
    system_prompt="You are a research assistant.",
)
```
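The my_custom_tool above is a placeholder. Tools can be plain Python functions with type hints and a docstring; the tool below is a hypothetical example, shown standalone with stdlib only:

```python
# A hypothetical custom tool: a plain function whose docstring tells the
# model what it does. The name and behavior are illustrative.
def word_count(text: str) -> int:
    """Count the number of whitespace-separated words in the given text."""
    return len(text.split())

# Would be passed as tools=[word_count]; it is also directly callable:
print(word_count("Deep Agents is an agent harness"))  # 6
```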

MCP is supported via langchain-mcp-adapters.

Deep Agents CLI

Try Deep Agents instantly from the terminal:

```shell
uv tool install deepagents-cli
deepagents
```

The CLI adds conversation resume, web search, remote sandboxes (Modal, Runloop, Daytona, & more), persistent memory, custom skills, headless mode, and human-in-the-loop approval. See the CLI documentation for more.

LangGraph Native

create_deep_agent returns a compiled LangGraph graph. Use it with streaming, Studio, checkpointers, or any LangGraph feature.

FAQ

Why should I use this?

  • 100% open source — MIT licensed, fully extensible
  • Provider agnostic — Works with Claude, OpenAI, Google, or any LangChain-compatible model
  • Built on LangGraph — Production-ready runtime with streaming, persistence, and checkpointing
  • Batteries included — Planning, file access, sub-agents, and context management work out of the box
  • Get started in seconds — pip install deepagents or uv add deepagents and you have a working agent
  • Customize in minutes — Add tools, swap models, tune prompts when you need to

Documentation

Discussions: Visit the LangChain Forum to connect with the community and share technical questions, ideas, and feedback.

Additional resources

  • Examples — Working agents and patterns
  • API Reference — Detailed reference on navigating base packages and integrations for LangChain.
  • Contributing Guide — Learn how to contribute to LangChain projects and find good first issues.
  • Code of Conduct — Our community guidelines and standards for participation.

Packages

This is a monorepo containing all Deep Agents packages:

| Package | PyPI | Description |
| --- | --- | --- |
| deepagents | Version | Core SDK — create_deep_agent, middleware, backends |
| deepagents-cli | Version | Interactive terminal interface with TUI, web search, and sandboxes |
| deepagents-acp | Version | Agent Client Protocol integration for editors like Zed |
| deepagents-harbor | - | Harbor evaluation and benchmark framework |
| langchain-daytona | Version | Daytona sandbox integration |
| langchain-modal | Version | Modal sandbox integration |
| langchain-runloop | Version | Runloop sandbox integration |

Acknowledgements

This project was primarily inspired by Claude Code. It began as an attempt to understand what makes Claude Code general-purpose, and to push that generality further.

Security

Deep Agents follows a "trust the LLM" model. The agent can do anything its tools allow. Enforce boundaries at the tool/sandbox level, not by expecting the model to self-police.
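One way to enforce such a boundary at the tool level is to make the execute tool itself refuse anything off an explicit allowlist. A stdlib-only sketch; the allowlist, function name, and timeout are illustrative choices, not part of deepagents:

```python
import shlex
import subprocess

# Illustrative tool-level boundary: the tool rejects commands not on an
# allowlist, rather than trusting the model to self-police.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "echo"}

def safe_execute(command: str) -> str:
    """Run a shell command only if its program is on the allowlist."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED_COMMANDS:
        return f"Refused: '{argv[0] if argv else ''}' is not an allowed command."
    completed = subprocess.run(argv, capture_output=True, text=True, timeout=30)
    return completed.stdout or completed.stderr

print(safe_execute("echo hello"))
print(safe_execute("rm -rf /"))  # Refused: 'rm' is not an allowed command.
```

The same principle applies to filesystem backends (restrict to a working directory) and remote sandboxes (isolate the whole process).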