PURGR.DEV [BETA ACTIVE]
Registry Status: Verified | System Epoch: 2026.04

Up and running in 60 seconds.

SDK
Proxy
Claude Desktop
npm install purgr

import { Purgr } from 'purgr'

const purgr = new Purgr({ activeWindow: 8, anchorCount: 3 })
const result = purgr.compress(conversationHistory)

// Pass compressed messages to your LLM
const response = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: result.messages
})

// Access the cryptographic receipt
const receipt = purgr.getSignedReceipt()
npm install -g purgr
purgr proxy
# Now point your app at http://localhost:3000 instead of https://api.anthropic.com
npm install -g purgr
purgr proxy
purgr setup claude-desktop
# Restart Claude Desktop
npm install purgr
Requirements: Node.js 18+. Zero runtime dependencies. Works on Windows, macOS, Linux. No Python, no PyTorch, no GPU required.
Package             Version    Required
@anthropic-ai/sdk   >=0.20.0   Optional
openai              >=4.0.0    Optional
tiktoken            >=1.0.0    Optional (higher accuracy)
Active Window

The most recent N messages are never compressed. They always pass through to the LLM unchanged. Everything older than the active window is eligible for compression. Default: 8 messages.
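
A minimal sketch of how these zones split a history (illustrative only; `partition` and its defaults are hypothetical, not part of the Purgr API):

```typescript
interface Message { role: string; content: string }

// Split a conversation into the three zones described above:
// a protected head, a compressible middle, and the active-window tail.
function partition(messages: Message[], activeWindow = 8, protectedLead = 5) {
  const tailStart = Math.max(messages.length - activeWindow, protectedLead)
  const headEnd = Math.min(protectedLead, tailStart)
  return {
    head: messages.slice(0, headEnd),           // never compressed (system prompts)
    middle: messages.slice(headEnd, tailStart), // eligible for compression
    tail: messages.slice(tailStart)             // active window, always passed through
  }
}
```

For a 20-message history with the defaults, the last 8 messages pass through untouched and only the 7 messages between the protected head and the tail are compression candidates.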

Phase 1: Momentum Scoring

Every message receives a momentum score based on EWMA Jaccard overlap with recent messages. Messages that stay topically relevant maintain high momentum and survive compression. Messages that drift from current topics decay toward compression candidates. O(N) time complexity.
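
The scoring idea can be sketched in a few lines (a toy version for intuition; Purgr's actual EWMA weighting and tokenization are not shown here):

```typescript
// Tokenize into a set of lowercase words.
function tokens(text: string): Set<string> {
  return new Set(text.toLowerCase().split(/\W+/).filter(Boolean))
}

// Jaccard overlap: |A ∩ B| / |A ∪ B|
function jaccard(a: Set<string>, b: Set<string>): number {
  const inter = [...a].filter(t => b.has(t)).length
  const union = new Set([...a, ...b]).size
  return union === 0 ? 0 : inter / union
}

// Fold per-message overlap into an exponentially weighted moving average.
// High momentum = still topically relevant; low = compression candidate.
function momentum(message: string, newer: string[], alpha = 0.3): number {
  const base = tokens(message)
  let score = 0
  for (const m of newer) {
    score = alpha * jaccard(base, tokens(m)) + (1 - alpha) * score
  }
  return score
}
```

Each message is scored once per pass against the messages that follow it, which is what keeps the phase at O(N).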

Phase 2: Koopman DMD

After sufficient conversation history accumulates, the AutoScorer transitions to Dynamic Mode Decomposition. The Koopman operator models conversation dynamics as an evolving linear system, identifying structural topics that persist across topic changes. Achieves 91.4% TRR at 1M tokens.
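
For intuition, the core DMD step fits a linear operator A such that x_{k+1} ≈ A x_k over a sequence of state snapshots, via least squares: A = Y Xᵀ (X Xᵀ)⁻¹. A toy 2-D version (illustrative only; Purgr's observables and how scores are derived from the fitted modes are not shown):

```typescript
type Vec2 = [number, number]
type Mat2 = [[number, number], [number, number]]

// Fit the best linear operator A with x_{k+1} ≈ A x_k over snapshot pairs.
function fitOperator(snapshots: Vec2[]): Mat2 {
  // Accumulate Y Xᵀ and X Xᵀ over consecutive pairs.
  const yx: Mat2 = [[0, 0], [0, 0]]
  const xx: Mat2 = [[0, 0], [0, 0]]
  for (let k = 0; k + 1 < snapshots.length; k++) {
    const x = snapshots[k], y = snapshots[k + 1]
    for (let i = 0; i < 2; i++)
      for (let j = 0; j < 2; j++) {
        yx[i][j] += y[i] * x[j]
        xx[i][j] += x[i] * x[j]
      }
  }
  // Invert the 2x2 matrix X Xᵀ.
  const det = xx[0][0] * xx[1][1] - xx[0][1] * xx[1][0]
  const inv: Mat2 = [
    [xx[1][1] / det, -xx[0][1] / det],
    [-xx[1][0] / det, xx[0][0] / det]
  ]
  // A = (Y Xᵀ) (X Xᵀ)⁻¹
  const A: Mat2 = [[0, 0], [0, 0]]
  for (let i = 0; i < 2; i++)
    for (let j = 0; j < 2; j++)
      for (let k = 0; k < 2; k++) A[i][j] += yx[i][k] * inv[k][j]
  return A
}
```

The eigenstructure of the fitted A is what identifies modes (topics) that persist versus decay as the conversation evolves.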

Cryptographic Receipts

Every compression produces an Ed25519-signed receipt containing a Merkle root over all compression decisions, input hash, query binding, and chain link to the previous receipt. Receipts are independently verifiable at purgr.dev/verify without sending data to any server.
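
The Merkle commitment follows the standard construction: hash each decision into a leaf, then hash pairs upward until one root remains. A generic sketch (Purgr's exact leaf encoding and odd-level padding are assumptions, not documented here):

```typescript
import { createHash } from 'node:crypto'

const sha256 = (data: string | Buffer): Buffer =>
  createHash('sha256').update(data).digest()

// Pairwise-hash leaves upward until a single 32-byte root remains.
// This sketch duplicates the last node on odd levels; schemes vary here.
function merkleRoot(leaves: Buffer[]): Buffer {
  if (leaves.length === 0) return sha256('')
  let level = leaves
  while (level.length > 1) {
    const next: Buffer[] = []
    for (let i = 0; i < level.length; i += 2) {
      const right = level[i + 1] ?? level[i]
      next.push(sha256(Buffer.concat([level[i], right])))
    }
    level = next
  }
  return level[0]
}
```

Changing any single decision changes its leaf, and therefore the root, which is what makes the receipt tamper-evident.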

Option                  Type     Default         Description
activeWindow            number   8               Messages protected from compression at the tail
anchorCount             number   3               Maximum anchor summaries injected per compression pass
scorerMode              string   'auto'          Phase 1 only ('momentum'), Phase 2 only ('dmd'), or automatic transition
protectedLeadMessages   number   5               Messages at the head permanently protected (system prompts)
tailProtectMessages     number   3               Messages just outside the active window, protected from fallback compression
anchorPosition          string   'after-system'  Where anchors are injected ('before', 'after-system', 'end')
debug                   boolean  false           Verbose compression logging to console
Place purgr.config.json in your project root. The proxy reads it automatically on startup. Fields are identical to the constructor options plus target, port, threshold, and receipts.
{
  "activeWindow": 12,
  "anchorCount": 4,
  "scorerMode": "dmd",
  "threshold": 3000,
  "target": "https://api.anthropic.com",
  "port": 3000,
  "receipts": true,
  "debug": false
}
Basic compression
import { Purgr } from 'purgr'
import type { Message } from 'purgr'

const purgr = new Purgr({
  activeWindow: 8,
  anchorCount: 3,
  scorerMode: 'auto'
})

const messages: Message[] = [
  { role: 'user', content: 'Hello' },
  { role: 'assistant', content: 'Hi there' },
  // ... conversation history
]

const result = purgr.compress(messages)
// result.messages — compressed array ready for LLM
// result.stats — compression statistics
With query binding
const result = purgr.compress(messages, {
  query: "What was the approved budget?",
  response: "The approved budget is $2.4M"
})
// Receipt will bind to this specific query/response pair
Accessing stats
console.log(result.stats.originalTokens)    // tokens before
console.log(result.stats.compressedTokens)  // tokens after
console.log(result.stats.reductionPct)      // 0.0 to 1.0
console.log(result.stats.latencyMs)         // compression time
console.log(result.stats.anchorsInjected)   // anchors added
Accessing the receipt
const receipt = purgr.getSignedReceipt()
// receipt.payload.merkleRoot — Merkle root over all decisions
// receipt.payload.inputHash — SHA-256 of original input
// receipt.signature — Ed25519 signature
// receipt.publicKey — public key for independent verification

Wrap any async function that takes messages as its first argument. Compression happens automatically before every call. Zero changes to your existing code structure.

import { Purgr, withCompression } from 'purgr'
import Anthropic from '@anthropic-ai/sdk'

const client = new Anthropic()
const purgr = new Purgr({ activeWindow: 8 })

const compressed = withCompression(purgr, async (messages) => {
  return client.messages.create({
    model: 'claude-opus-4-6',
    max_tokens: 1024,
    messages
  })
})

// Use exactly like the original function
const response = await compressed(conversationHistory)

// Receipt available after each call
const receipt = purgr.getSignedReceipt()
withCompression works with any function that accepts a messages array as its first argument — OpenAI, Anthropic, local LLM clients, or custom wrappers. The return type is preserved exactly.

The Purgr proxy is a local HTTP server that intercepts LLM API traffic, compresses messages, and forwards requests transparently. No code changes required in your application.

npm install -g purgr
purgr proxy
Output example
┌─ Purgr Proxy ─────────────────────────────────┐
│  Listening on  http://localhost:3000          │
│  Target        https://api.anthropic.com      │
│  Scorer        momentum                       │
│  Active window 8 messages                     │
│  Receipts      ~/.purgr/receipts/             │
└───────────────────────────────────────────────┘
Point your SDK at the proxy
// Anthropic
const client = new Anthropic({ baseURL: 'http://localhost:3000' })

// OpenAI
const client = new OpenAI({ baseURL: 'http://localhost:3000/v1' })
Supported endpoints: /v1/messages (Anthropic) and /v1/chat/completions (OpenAI). Both streaming and non-streaming responses are supported. Receipts are written to ~/.purgr/receipts/session-YYYY-MM-DD.jsonl.

Point the proxy at any local LLM server that serves an OpenAI-compatible API. No code changes — just change the target in purgr.config.json.

{
  "target": "http://localhost:11434",
  "scorerMode": "dmd"
}
Tool               Default Port   Format
Ollama             11434          OpenAI-compatible
LM Studio          1234           OpenAI-compatible
llama.cpp server   8080           OpenAI-compatible
Any server         Custom         /v1/chat/completions
Purgr compresses before sending to the local model. For long-context local models like Llama 3.1 70B, compression reduces prefill time and memory pressure — making large context workloads practical on consumer hardware.

compressDocument() automatically detects academic papers, legal filings, compliance documents, and technical reports. Section types (Abstract, Introduction, Results, Conclusion, References) are detected and scored independently.

const result = purgr.compressDocument(documentText, {
  targetTokens: 4000
})

// result.messages — compressed document chunks
// result.stats.sections — per-section breakdown
Protected sections: Abstract, Results, Conclusion, and References sections are unconditionally protected at 100% fidelity regardless of compression ratio. Dollar amounts, dates, filing deadlines, and penalty thresholds are fact-protected across all section types.
Document                 Tokens   TRR     Numeric Facts
Bank Secrecy Act         20,034   68.8%   100% preserved
USA PATRIOT Act §5318    21,579   72.0%   100% preserved
OCC AML Guidance         ~3,000   18.3%   100% preserved
FinCEN BSA Regulations   ~2,000   16.0%   100% preserved

Every compression pass produces an Ed25519-signed cryptographic receipt. Receipts form a tamper-evident chain — each receipt commits to the hash of the previous receipt.

Receipt anatomy
{
  "payload": {
    "version": "1.0.2",
    "timestamp": "2026-04-20T23:14:02.441Z",
    "inputHash": "8829f3e1a7c2d4b6e9f0a1c3d5e7f2b4",
    "merkleRoot": "a3f8c91d0e724b5f6e8d9c2a1b3e4f7a",
    "queryHash": "c7d2e4f6a8b0c1d3e5f7a9b2c4d6e8f0",
    "responseCommitment": "d4e6f8a0b2c4d6e8f0a2b4c6d8e0f2a4",
    "previousReceiptHash": "f1a3b5c7d9e0f2a4b6c8d0e2f4a6b8c0",
    "statsHash": "b2c4d6e8f0a2b4c6d8e0f2a4b6c8d0e2",
    "decisions": [
      {
        "id": "msg-001",
        "outcome": "compressed",
        "evidence": {
          "momentumScore": 0.08,
          "decayStreak": 4,
          "normalizedPosition": 0.12
        }
      }
    ]
  },
  "signature": "7/Ly01j7cZKK...base64...",
  "publicKey": "MCowBQYDK2VdAyEA...base64 SPKI..."
}
Field                 Description
inputHash             SHA-256 of the original uncompressed input
merkleRoot            Merkle root committing to all compression decisions
queryHash             SHA-256 of the query string, if provided
responseCommitment    Binds a specific LLM response to this compression
previousReceiptHash   Links to the prior receipt, forming a tamper-evident chain
signature             Ed25519 signature over the payload
publicKey             SPKI DER public key for independent verification
Receipts are stored locally at ~/.purgr/receipts/session-YYYY-MM-DD.jsonl. Purgr Infrastructure never receives copies of your receipts. Verification happens entirely in your browser at purgr.dev/verify.

Any Purgr receipt can be independently verified at purgr.dev/verify. The verifier performs Ed25519 signature validation and Merkle tree reconstruction entirely in your browser. No data is transmitted to any server.

What verification proves
  • [✓] The compression decisions were produced by an unmodified Purgr instance
  • [✓] The decision record has not been tampered with since signing
  • [✓] The Merkle root correctly commits to all recorded decisions
  • [✓] The receipt is cryptographically linked to the specific input that was compressed
A valid receipt does not guarantee that no important content was lost or that the compression output is factually complete. It proves the integrity of the compression process, not the quality of the output. See our Terms of Service for full attestation scope.

Purgr exposes an MCP (Model Context Protocol) server for integration with Claude Desktop and other MCP-compatible tools.

purgr mcp
Claude Desktop integration
{
  "mcpServers": {
    "purgr": {
      "command": "purgr",
      "args": ["mcp"]
    }
  }
}
The MCP server exposes compression as a tool that Claude can call directly during conversations. This enables Claude to manage its own context window by requesting compression when conversations grow long.
Command                            Description
purgr proxy                        Start the local compression proxy on port 3000
purgr proxy --port 4000            Start on a custom port
purgr proxy --scorer dmd           Force the Phase 2 DMD scorer
purgr proxy --debug                Enable verbose compression logging
purgr setup claude-desktop         Configure Claude Desktop to use the proxy
purgr setup revert                 Restore Claude Desktop's original config
purgr mcp                          Start the MCP server
purgr receipts                     View today's receipt log
purgr receipts --verify            Verify all receipts in today's log
purgr receipts --date 2026-04-20   View receipts for a specific date

Purgr is engineered for speed and cryptographic integrity. That focus imposes specific tradeoffs.

Semantic NIAH

Purgr uses lexical and dynamical scoring. Paraphrase detection without embeddings is an unsolved problem. Category 2 semantic NIAH (conceptual similarity without keyword overlap) scores 0% without an embedding model.

Topic Resurrection

When a topic is discussed early, drops away for 100+ turns, and is referenced again late, Phase 2 DMD may compress the original discussion before the resurrection signal arrives.

General Conversational Detail

Fact protection triggers on high-specificity signals: currency values, precise dates, version strings, named persons with titles. General conversational details (pet names, informal references) do not trigger protection.

Heuristic Tokenization

The default token counter uses a 4 chars ≈ 1 token heuristic. For exact token counts install the optional tiktoken peer dependency.
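
The fallback estimate is simple enough to state exactly:

```typescript
// The documented fallback: roughly 4 characters per token. Installing the
// optional tiktoken peer dependency replaces this with an exact BPE count.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4)
}
```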

Why not just use Claude's 1M token context window?
Cost and exhaustion. At Opus 4.6 pricing, sending 1M tokens costs $5 per request. In long-running agent workflows, even 1M token windows eventually fill. Purgr reduces your token spend by 80-90% and ensures your pipeline never hits a ceiling regardless of model context limits.
Does Purgr store my conversation data?
No. Purgr runs entirely on your machine. Conversation content never leaves your infrastructure. Receipts are stored locally at ~/.purgr/receipts/. The verification portal runs entirely in your browser with no server-side component.
What happens if compression fails?
The proxy fails safe — if compression throws an error, the original uncompressed messages are forwarded to the LLM unchanged. Your application never sees the error. In SDK mode, compress() will throw and you can catch it.
Is Purgr compatible with streaming responses?
Yes. The proxy handles both streaming (SSE) and non-streaming responses. Compression happens on the request side only — responses are piped through unchanged.
Can I tune how aggressive the compression is?
Yes. Lower anchorCount for less compression, higher for more. Increase activeWindow to protect more recent messages. Set threshold in purgr.config.json to prevent compression on short conversations. The scorerMode: 'momentum' setting is more conservative than 'dmd'.
Does Purgr work with fine-tuned or self-hosted models?
Yes. The Purgr proxy supports any model endpoint that accepts a messages array — both Anthropic format (/v1/messages) and OpenAI format (/v1/chat/completions). This includes self-hosted models via Ollama, LM Studio, and llama.cpp, cloud providers like Together AI, Fireworks, Groq, and Mistral, and any custom inference server that follows either message format. Set target in purgr.config.json to your endpoint's base URL.
What does the receipt prove in a compliance context?
A valid receipt proves the integrity of the compression process — that decisions were made by unmodified Purgr software and have not been altered since signing. It does not constitute legal evidence of content completeness. Consult qualified legal counsel before relying on receipts in formal proceedings.