80% Reduction
Typical compression from 2000 tokens to 400
Intelligent Context Compression
Preserve semantic meaning while dramatically reducing token count. Compress conversation history from 2000 tokens to 400 with all key decisions intact.
Capabilities
beacon_compress intelligently reduces token count in conversation history while preserving all semantic meaning. It identifies key decisions, code snippets, error messages, and user preferences—keeping what matters, discarding verbose exploration.
Typical compression ratios: 70-80% reduction with no loss of key information.
Semantic Preservation
Keeps all meaningful information, removes verbose exploration
Decision Tracking
Preserves key decisions and their rationale
Code & Error Retention
Code blocks and error messages always preserved verbatim
Compression Example
// Before (2000 tokens)
"Let me help you with that authentication issue. First, I'll check the code... [500 tokens of exploration]"
"I see the problem now. The issue is... [300 tokens of explanation]"
"Here's the fix: [code block]... Let me know if this works [200 tokens]"
// After (400 tokens) - 80% reduction
Decision: Auth issue identified in token validation
Fix: [code block preserved]
Status: Awaiting user confirmation
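The before/after example can be sketched as a simple heuristic: keep code blocks and decision lines verbatim, drop exploratory prose. This is an illustrative toy, not beacon_compress's actual implementation; the `compress` function, its regex patterns, and its keyword list are all assumptions for the sketch.

```python
import re

def compress(history: str) -> str:
    """Toy compressor (NOT the real beacon_compress logic).

    Preserves fenced code blocks verbatim, keeps lines that record
    decisions, fixes, errors, or status, and discards everything else.
    """
    # Preserve code blocks exactly as written.
    code_blocks = re.findall(r"```.*?```", history, re.DOTALL)
    # Remove them from the prose before scanning line by line.
    prose = re.sub(r"```.*?```", "", history, flags=re.DOTALL)
    # Keep only lines that look like decisions, fixes, errors, or status.
    kept = [line.strip() for line in prose.splitlines()
            if line.strip().lower().startswith(
                ("decision:", "fix:", "error", "status:"))]
    return "\n".join(kept + code_blocks)
```

Run against a transcript like the one above, the exploratory sentences are dropped while the decision, status, and code lines survive intact.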
Use Cases
Keep conversation history under context limits without losing critical information.
Pass compressed context between agents efficiently.
Reduce token usage by 80% while preserving all meaningful content.
80% token reduction without losing key information, for $0.02 per call.
Get Started