Getting started

Quick start guide

Registration, API key, and integration — everything in 3 simple steps

1. Registration

Click "Start Free" or "Sign In" in the top menu. You can sign in with your GitHub or Google account β€” no separate registration form needed. After sign-in, you'll be taken straight to your dashboard.
Go to the Pricing page and select a plan that suits your needs. The Free plan gives you 50 compressions per day β€” perfect for trying things out. Payment is processed securely via Stripe.
Your plan is activated immediately. You'll be redirected to the dashboard where you can create API keys and start using TokenCompress right away.
2. Creating an API Key

Go to Dashboard → API Keys (https://tokencompress.com/dashboard/api-keys) and click "Create new key". Give it a descriptive name, e.g. "Continue IDE" or "LangChain prod".
The key has the format: ak_live_... Copy it immediately after creation — it will only be shown once! Store it securely and never commit it to version control.
Go to Dashboard → Compression Statistics (https://tokencompress.com/dashboard/compression) to see your overall usage and savings. You can also see which keys are used most and how many tokens you've saved.
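One common way to keep the key out of version control is to load it from an environment variable at runtime. A minimal sketch (the variable name TOKENCOMPRESS_API_KEY is an illustrative convention, not an official one):

```python
import os

def load_tokencompress_key() -> str:
    """Read the TokenCompress key from the environment instead of
    hardcoding it in source or config files."""
    key = os.environ.get("TOKENCOMPRESS_API_KEY", "")
    # All dashboard-issued keys start with the ak_live_ prefix,
    # so a quick sanity check catches an empty or wrong variable.
    if not key.startswith("ak_live_"):
        raise RuntimeError("TOKENCOMPRESS_API_KEY is missing or malformed")
    return key
```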
3. Integration

Add this to your ~/.continue/config.yaml file. The apiKey consists of two parts: your TokenCompress key (created in the dashboard) and your LLM provider key, joined with :: (double colon). Set apiBase to https://tokencompress.com/v1/{provider} where {provider} matches the table below:

# ~/.continue/config.yaml
models:
  - name: TokenCompress - DeepSeek
    provider: openai
    model: deepseek-chat
    apiKey: ak_live_xxx...xxx::sk-your-provider-key
    apiBase: https://tokencompress.com/v1/deepseek
    roles:
      - chat
      - edit
      - apply
    defaultCompletionOptions:
      stream: true

apiKey = ak_live_...::provider-api-key

The apiKey field is a composite key consisting of two parts separated by double colons (::). The first part (ak_live_...) is created in your TokenCompress dashboard. The second part is your LLM provider's own API key (e.g. sk-... for DeepSeek or OpenAI). Example: ak_live_abc123::sk-your-provider-key
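The assembly and splitting of the composite key is plain string handling on the :: separator. A quick sketch (helper names are illustrative, not part of any TokenCompress SDK):

```python
def make_composite_key(tc_key: str, provider_key: str) -> str:
    """Join a TokenCompress key and a provider key with '::'."""
    return f"{tc_key}::{provider_key}"

def split_composite_key(composite: str) -> tuple[str, str]:
    """Split a composite key back into its two parts.

    partition splits on the first '::', so a provider key that itself
    contains '::' later in the string would survive intact.
    """
    tc_key, _, provider_key = composite.partition("::")
    return tc_key, provider_key
```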

After saving the config, restart Continue. The model "TokenCompress - DeepSeek" will appear in the model list. All your requests will be automatically compressed, saving up to 87% on tokens.

TokenCompress is compatible with the OpenAI API. Set base_url to https://tokencompress.com/v1/{provider} and api_key to your composite key (TokenCompress key :: provider key):

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://tokencompress.com/v1/anthropic",
    api_key="ak_live_xxx...xxx::sk-ant-your-anthropic-key",
    model="claude-sonnet-4-20250514",
)

response = llm.invoke("Analyze this code...")
print(response.content)

Install the package first: pip install langchain-openai. All standard LangChain features work — chains, agents, tools, and output parsers.

Use the same ChatOpenAI client inside your LangGraph nodes. Same composite api_key and base_url format as LangChain:

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, MessagesState

llm = ChatOpenAI(
    base_url="https://tokencompress.com/v1/anthropic",
    api_key="ak_live_xxx...xxx::sk-ant-your-anthropic-key",
    model="claude-sonnet-4-20250514",
)

def chatbot(state: MessagesState):
    return {"messages": [llm.invoke(state["messages"])]}

graph = StateGraph(MessagesState)
graph.add_node("chatbot", chatbot)
graph.set_entry_point("chatbot")
app = graph.compile()

Install: pip install langchain-openai langgraph. TokenCompress compresses context transparently — your graph logic stays exactly the same.

Supported Providers

The apiBase URL must end with the provider name matching one of the supported providers listed below. Always set provider to openai in your config β€” TokenCompress uses an OpenAI-compatible API format.

| LLM Provider | provider field | apiBase URL | Note |
| --- | --- | --- | --- |
| OpenAI | openai | https://tokencompress.com/v1/openai | |
| Anthropic | openai | https://tokencompress.com/v1/anthropic | |
| DeepSeek | openai | https://tokencompress.com/v1/deepseek | |
| Mistral | openai | https://tokencompress.com/v1/mistral | |
| Qwen | openai | https://tokencompress.com/v1/qwen | |
| OpenRouter | openai | https://tokencompress.com/v1/openrouter | |
| LM Studio | openai | https://tokencompress.com/v1/lm-studio | Enterprise |
| Ollama | openai | https://tokencompress.com/v1/ollama | Enterprise |
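Since every apiBase URL follows the same pattern, building it from a provider slug is a one-liner. A small sketch using the slugs from the table above:

```python
# Provider slugs from the supported-providers table
# (lm-studio and ollama require an Enterprise plan).
SUPPORTED_PROVIDERS = {
    "openai", "anthropic", "deepseek", "mistral",
    "qwen", "openrouter", "lm-studio", "ollama",
}

def api_base(provider: str) -> str:
    """Return the apiBase URL for a supported provider slug."""
    if provider not in SUPPORTED_PROVIDERS:
        raise ValueError(f"unsupported provider: {provider}")
    return f"https://tokencompress.com/v1/{provider}"
```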

💡 model field: Use any model name supported by your chosen LLM provider — TokenCompress places no restrictions on this value. For example: deepseek-chat, claude-sonnet-4-20250514, gpt-4o, etc.

Need a provider not listed here? Contact us and we'll add it promptly.

Ready to start?

Choose a plan and set up integration in under 5 minutes