The get_context() method retrieves formatted conversation context from sessions, making it easy to integrate with LLM providers like OpenAI and Anthropic. This guide covers everything you need to know about working with session context.
By default, the context includes a blend of summary and recent messages that covers the entire history of the session. Summaries are generated automatically at intervals, and the number of recent messages included depends on the context's token budget. You can specify any token limit you want, and you can disable summaries to fill that limit entirely with recent messages.
Basic Usage
The get_context() method is available on all Session objects and returns a SessionContext that contains the formatted conversation history.
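A minimal sketch, assuming the Honcho Python SDK (the client setup shown in the comments and the helper name are illustrative, not part of the documented API):

```python
# Typical setup with the Honcho Python SDK (illustrative):
#   from honcho import Honcho
#   honcho = Honcho()
#   session = honcho.session("support-chat-1")

def fetch_context(session):
    """Return the SessionContext for a session's full history."""
    # By default this blends an auto-generated summary with as many
    # recent messages as the default token budget allows.
    return session.get_context()
```

With a live client, the returned SessionContext is ready to be converted into an LLM-specific message format.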
Context Parameters
The get_context() method accepts several optional parameters to customize the retrieved context:
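As a sketch, the two parameters described below might be passed like this (the parameter names `tokens` and `summary` are assumptions inferred from this guide's description, not confirmed API):

```python
def fetch_bounded_context(session, tokens=1500, summary=True):
    """Retrieve context under an explicit token budget.

    tokens  - maximum size of the returned context (assumed name)
    summary - when False, the budget is filled entirely with recent
              messages instead of a summary blend (assumed name)
    """
    return session.get_context(tokens=tokens, summary=summary)
```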
Token Limits
Control the size of the context by setting a maximum token count.
Summary Mode
Enable summary mode (on by default) to get a condensed version of the conversation.
Converting to LLM Formats
The SessionContext object provides methods to convert the context into formats compatible with popular LLM APIs. When converting to OpenAI format, you must specify the assistant peer so that the context is formatted in a way the LLM can understand.
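A sketch of the conversion step for both providers covered below. The method names `to_openai()` and `to_anthropic()` are assumptions; the assistant-peer requirement is from this guide:

```python
def to_provider_messages(context, assistant, provider="openai"):
    """Convert a SessionContext into a provider-specific message list.

    The assistant peer tells the converter which messages should be
    assigned the "assistant" role in the resulting payload.
    """
    if provider == "openai":
        return context.to_openai(assistant=assistant)
    if provider == "anthropic":
        return context.to_anthropic(assistant=assistant)
    raise ValueError(f"unsupported provider: {provider}")
```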
OpenAI Format
Convert context to OpenAI’s chat completion format.
Anthropic Format
Convert context to Anthropic’s Claude format.
Complete LLM Integration Examples
Using with OpenAI
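A sketch of a single completion call, assuming the Honcho Python SDK and the official `openai` package (client setup in the comments; `get_context` parameter and `to_openai` method names are assumptions consistent with the earlier sections):

```python
# Assumes: from openai import OpenAI; client = OpenAI()
# and a Honcho session plus an "assistant" peer already created.

def respond(session, assistant, openai_client, model="gpt-4o"):
    """Generate an assistant reply from the session's context."""
    context = session.get_context(tokens=3000)          # bound the prompt size
    messages = context.to_openai(assistant=assistant)   # OpenAI chat format
    response = openai_client.chat.completions.create(
        model=model,
        messages=messages,
    )
    return response.choices[0].message.content
```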
Multi-Turn Conversation Loop
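One way to sketch the loop: persist each user message, respond from freshly retrieved context, then persist the reply. The write helpers `session.add_messages()` and `peer.message()` are assumed SDK surface, not confirmed by this guide:

```python
def chat_loop(session, user, assistant, openai_client, get_input, send_output):
    """Run a multi-turn conversation against a Honcho session."""
    while True:
        text = get_input()
        if text is None:            # caller signals end of conversation
            break
        # Persist the user's turn (add_messages/peer.message are
        # assumed helpers for writing to the session).
        session.add_messages([user.message(text)])
        # Rebuild context every turn so summaries and new messages
        # are always reflected in the prompt.
        context = session.get_context(tokens=3000)
        messages = context.to_openai(assistant=assistant)
        reply = openai_client.chat.completions.create(
            model="gpt-4o", messages=messages,
        ).choices[0].message.content
        session.add_messages([assistant.message(reply)])
        send_output(reply)
```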
Advanced Context Usage
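The two patterns below (summaries for long histories, per-assistant formatting) can be sketched as follows, assuming the same SDK surface as the earlier examples:

```python
def long_conversation_context(session, tokens=2000):
    """Lean on summaries so a long history fits a modest budget."""
    return session.get_context(summary=True, tokens=tokens)

def context_per_assistant(session, assistants):
    """Format one session's context for several assistant peers,
    e.g. a support bot and an escalation bot sharing a session."""
    context = session.get_context()
    return {name: context.to_openai(assistant=peer)
            for name, peer in assistants.items()}
```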
Context with Summaries for Long Conversations
For very long conversations, use summaries to maintain context while controlling token usage.
Context for Different Assistant Types
You can get context formatted for different types of assistants in the same session.
Best Practices
1. Token Management
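As a sketch, a small helper can derive a budget from the model's context window while leaving room for the reply. The window sizes and the helper itself are illustrative, not part of Honcho; check your provider's documentation:

```python
# Illustrative context-window sizes; verify against provider docs.
MODEL_CONTEXT_WINDOW = {
    "gpt-4o": 128_000,
    "claude-sonnet": 200_000,
}

def context_budget(model, reserve_for_response=1_000, cap=8_000):
    """Pick a token limit for get_context() that leaves room for the
    model's reply and never exceeds a cost cap."""
    window = MODEL_CONTEXT_WINDOW.get(model, 8_000)
    return min(cap, window - reserve_for_response)
```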
Always set appropriate token limits to control costs and ensure context fits within LLM limits.
2. Context Caching
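A minimal TTL cache sketched in plain Python (this is not a built-in Honcho feature; invalidate whenever new messages are written so cached context never goes stale):

```python
import time

class ContextCache:
    """Briefly cache SessionContext objects to avoid re-fetching
    on every request."""

    def __init__(self, ttl_seconds=30):
        self.ttl = ttl_seconds
        self._entries = {}   # (session_id, tokens) -> (expires_at, context)

    def get(self, session, session_id, tokens=2000):
        key = (session_id, tokens)
        entry = self._entries.get(key)
        now = time.monotonic()
        if entry and entry[0] > now:
            return entry[1]                       # fresh hit
        context = session.get_context(tokens=tokens)
        self._entries[key] = (now + self.ttl, context)
        return context

    def invalidate(self, session_id):
        """Drop cached entries after writing new messages."""
        self._entries = {k: v for k, v in self._entries.items()
                         if k[0] != session_id}
```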
For applications with frequent context retrieval, consider caching context when appropriate.
3. Error Handling
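One defensive pattern, assuming only that get_context() may raise; Honcho's specific exception types are not assumed, so narrow the `except` clause in real code:

```python
import time

def get_context_safely(session, tokens=2000, retries=2, backoff=0.5):
    """Fetch context, retrying transient failures, and fall back to
    None so the caller can degrade gracefully."""
    for attempt in range(retries + 1):
        try:
            return session.get_context(tokens=tokens)
        except Exception as exc:               # narrow this in real code
            if attempt == retries:
                print(f"context retrieval failed: {exc}")
                return None
            time.sleep(backoff * (2 ** attempt))
```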
Always handle potential errors when working with context.
Conclusion
The get_context() method is essential for integrating Honcho sessions with LLMs. By understanding how to:
- Retrieve context with appropriate parameters
- Convert context to LLM-specific formats
- Manage token limits and summaries
- Handle multi-turn conversations

you can build reliable LLM integrations on top of Honcho.