Understanding Honcho’s core concepts and data model.
Honcho has 3 main components that work together to manage agent identity and context.
Below, we’ll dive into each of these areas, discussing the data primitives, the flow of data through the system, the artifacts Honcho produces, and how to use them.
Honcho has a hierarchical data model centered around the entities below.
There are Workspaces at the top, which contain Peers and Sessions. A Peer can be part of many Sessions, and a Session can have many Peers. Both Sessions and Peers can have Messages.
Workspaces are the top-level containers in Honcho. They act as namespaces, providing complete isolation between different applications, workloads, or environments.
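As a rough sketch of how this plays out in the Python SDK (the constructor argument and workspace names below are illustrative assumptions, not a definitive reference), the workspace is chosen once when the client is created and everything else is scoped to it:

```python
from honcho import Honcho

# Assumption: the client accepts a workspace identifier. Every peer, session,
# and message created through this client is then isolated to "my-app".
honcho = Honcho(workspace_id="my-app")

# A separate client pointed at a different workspace sees none of that data.
staging = Honcho(workspace_id="my-app-staging")
```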
Honcho has a Peer-Centric Architecture: Peers are the most important entities within Honcho, and everything revolves around Peers and their representations.
Peers represent individual users, agents, or entities in a workspace. They are the primary subjects for memory and context management. Treating humans and agents the same lets us support arbitrary combinations of Peers for multi-agent or group chat scenarios.
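For example, in a hedged sketch of the Python SDK (the peer identifiers and workspace argument are assumptions for illustration), a human user and an agent are created in exactly the same way:

```python
from honcho import Honcho

honcho = Honcho(workspace_id="my-app")  # assumed constructor argument

# Humans and agents are both just Peers, identified by a stable string ID.
alice = honcho.peer("alice")          # a human user
assistant = honcho.peer("assistant")  # an LLM agent
```

Because both are the same primitive, a group chat with three humans and two agents is modeled the same way as a one-on-one conversation.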
Sessions represent individual conversation threads or interaction contexts between peers.
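Continuing the same sketch (method names here are assumptions about the SDK shape), a session is addressed by ID and peers are attached to it:

```python
from honcho import Honcho

honcho = Honcho(workspace_id="my-app")  # assumed constructor argument
alice = honcho.peer("alice")
assistant = honcho.peer("assistant")

# A session is a named interaction context; any number of peers can join it.
session = honcho.session("support-thread-42")
session.add_peers([alice, assistant])
```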
Messages are the fundamental units of interaction within sessions. They can also be used at the peer level to ingest information that isn’t tied to a specific interaction but provides important context for a peer (emails, documents, files, etc.).
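A sketch of both levels in the Python SDK (the peer-level add_messages call and other names below are assumptions for illustration):

```python
from honcho import Honcho

honcho = Honcho(workspace_id="my-app")  # assumed constructor argument
alice = honcho.peer("alice")
assistant = honcho.peer("assistant")
session = honcho.session("support-thread-42")

# Session-level messages: the conversation itself, attributed to whoever wrote each one.
session.add_messages([
    alice.message("I'm training for a marathon in June."),
    assistant.message("Great goal! How many miles are you running per week?"),
])

# Peer-level messages: context about Alice that isn't part of any conversation,
# e.g. an onboarding form, an email, or a document.
# (Assumes a peer-level add_messages mirroring the session-level one.)
alice.add_messages([
    alice.message("Onboarding note: Alice prefers metric units and concise answers."),
])
```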
At the core of developing representations of Peers is the Deriver: a set of processes in Honcho that enqueues new messages sent by Peers and reasons over them to extract facts, insights, and context.
Depending on the configuration of a Peer or Session, the deriver will behave differently and update different representations.
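As an illustration only (the SessionPeerConfig object and its flags below are assumptions about the SDK, not confirmed API), that configuration might look like turning observation on or off for a given peer in a given session:

```python
from honcho import Honcho
# Hypothetical import; the config object and flag names are assumptions.
from honcho import SessionPeerConfig

honcho = Honcho(workspace_id="my-app")  # assumed constructor argument
alice = honcho.peer("alice")
assistant = honcho.peer("assistant")
session = honcho.session("support-thread-42")

# Sketch: skip building a representation of the assistant itself, while still
# letting it observe (and form local representations of) the other peers.
session.add_peers([
    alice,
    (assistant, SessionPeerConfig(observe_me=False, observe_others=True)),
])
```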
Facts derived by the deriver are used in the Dialectic chat endpoint to generate context-aware responses that can correctly reference both concrete facts extracted from messages and social insights deduced from facts, tone, and opinion.
Deriver tasks are processed in parallel, but tasks affecting the same peer representation will always be processed serially in order of message creation, so as to properly understand their cumulative effect.
The deriver currently performs two types of tasks, described below: representation tasks and summary tasks.
Peer representations are more of an abstract concept, as they are made up of various pieces of data stored throughout Honcho. There are, however, multiple types of representations that Honcho can produce.
Honcho maintains both local and global representations of Peers: a local representation captures a single Peer’s view of another Peer, while a global representation is built from every message a Peer has ever produced.
Everything is framed in terms of perspective. Alice owns her own global representation, but she also maintains a local representation of Bob based on what she observes; similarly, Bob has a global representation of himself and a local representation of Alice. So when Alice sends a message to Bob, it triggers an update to both Alice’s global representation of herself and Bob’s local representation of Alice.
If Alice then had a conversation with a different Peer, Nico, and sent them a message, that would trigger an update to Alice’s global representation and to Nico’s local representation of Alice. Bob’s local representation of Alice would not change, since Bob never receives that message.
Depending on the use case, a developer may choose to use only global representations, only local representations, or a combination of both.
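A sketch of how this distinction surfaces when querying (the target parameter is an assumption about the chat method’s signature):

```python
from honcho import Honcho

honcho = Honcho(workspace_id="my-app")  # assumed constructor argument
alice = honcho.peer("alice")
bob = honcho.peer("bob")

# Global: draws on everything Alice has ever said, in any session.
global_view = alice.chat("What is Alice currently training for?")

# Local: draws only on what Bob has actually observed about Alice.
# (Assumes chat accepts a target peer to scope the query to a local representation.)
local_view = bob.chat("What does Bob know about Alice's training?", target=alice)
```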
Summary tasks create conversation summaries. As messages are added, a “short” summary is periodically created for each session, every 20 messages by default. “Long” summaries are created every 60 messages by default and maintain a total overview of the session by recursively including the previous summary. These summaries are returned by the get_context endpoint along with recent messages, allowing developers to easily fetch everything needed to generate the next LLM completion for an agent.
The system defaults are also the checkpoints used on the managed version of Honcho hosted at https://api.honcho.dev.
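A minimal sketch of using that context to drive the next completion (the fields on the returned object and the role mapping below are illustrative assumptions; only get_context itself is named above):

```python
from honcho import Honcho

honcho = Honcho(workspace_id="my-app")  # assumed constructor argument
alice = honcho.peer("alice")
session = honcho.session("support-thread-42")

# Returns summaries plus recent messages, sized to fit in a prompt.
context = session.get_context()

# Assumed fields: context.summary, context.messages, message.peer_id, message.content.
prompt_messages = [{"role": "system", "content": f"Conversation so far: {context.summary}"}]
for message in context.messages:
    role = "user" if message.peer_id == alice.id else "assistant"
    prompt_messages.append({"role": role, "content": message.content})

# prompt_messages can now be passed to your LLM client of choice.
```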
The Dialectic API is one of the most integral components of Honcho and is the main way to leverage Peer Representations. Using the /chat endpoint, developers can talk to Honcho directly about any Peer in a workspace to get insights into the psychology of a Peer and help steer their behavior.
This lets one endpoint serve a wide variety of use cases: model steering, personalization, hydrating a prompt, and more. And because the endpoint works through natural language, a developer can let an agent backchannel directly with Honcho via MCP or a direct API call.
Developers should frame Dialectic queries as talking to an expert on the Peer rather than addressing the Peer itself.
Think of Dialectic Chat as an assisting agent that your main agent can consult for contextual information about any actor in your application.
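For example (a sketch under the same assumed SDK shape as the earlier snippets), the query is phrased the way you would ask an expert who has been observing Alice, not the way you would message Alice herself:

```python
from honcho import Honcho

honcho = Honcho(workspace_id="my-app")  # assumed constructor argument
alice = honcho.peer("alice")

# Good: ask *about* the peer, as if consulting an expert on her.
insight = alice.chat("Does Alice respond better to detailed explanations or quick summaries?")

# Not: "Hey Alice, do you want more detail?" That is a message *to* the peer,
# which belongs in a session, not in a Dialectic query.
print(insight)
```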
Learn how to use the SDK to interact with the data model
Reference for all technical terms and concepts
Detailed API documentation and examples
Get started with your first integration