Use all models inside Chat, Assistants, and Agents, or via the API.
Pricing applies only to our API product. Chat and Assistants plans that include AI models have no usage-based cost component. Langdock charges 15% on top of the model provider's price. Model prices originate from the model providers in USD.
Model | API pricing: input tokens | API pricing: output tokens | Region |
---|---|---|---|
OpenAI | |||
GPT-4 Turbo | 10.41€ / 1M tokens | 31.23€ / 1M tokens | |
GPT-4o | 5.20€ / 1M tokens | 10.41€ / 1M tokens | |
GPT-4o mini | 0.16€ / 1M tokens | 0.62€ / 1M tokens | |
Anthropic | |||
Claude 3.5 Sonnet | 3.12€ / 1M tokens | 15.61€ / 1M tokens | |
Claude 3 Haiku | 0.26€ / 1M tokens | 1.30€ / 1M tokens | |
Claude 3 Opus | 15.61€ / 1M tokens | 78.06€ / 1M tokens | |
Google | |||
Gemini 1.5 Pro | 7.81€ / 1M tokens | 21.86€ / 1M tokens | |
Gemini 1.5 Flash | 0.16€ / 1M tokens | 0.62€ / 1M tokens | |
Meta | |||
Llama 3.1 70B | 2.79€ / 1M tokens | 3.68€ / 1M tokens | |
Llama 3.1 8B | 0.31€ / 1M tokens | 0.63€ / 1M tokens | |
Mistral | |||
Mistral Large 2 | 3.12€ / 1M tokens | 9.37€ / 1M tokens | |
Mistral Nemo | 3.12€ / 1M tokens | 3.12€ / 1M tokens | |
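As a worked example of how these per-token rates translate into a request cost, the sketch below multiplies token counts by the listed EUR prices. It assumes the prices in the table above are the final billed per-1M-token rates; the function name and the sample token counts are illustrative, not part of any API.

```python
# Estimate the cost of one API request, assuming the listed EUR
# prices per 1M tokens are the final billed rates (illustrative helper).
def request_cost_eur(input_tokens: int, output_tokens: int,
                     input_price_per_1m: float,
                     output_price_per_1m: float) -> float:
    return (input_tokens * input_price_per_1m
            + output_tokens * output_price_per_1m) / 1_000_000

# Example: GPT-4o rates from the table (5.20€ input, 10.41€ output),
# for a request with 2,000 input tokens and 500 output tokens.
cost = request_cost_eur(2_000, 500, 5.20, 10.41)
print(f"{cost:.4f}€")  # → 0.0156€
```

The same function works for any row of the table; only the two per-1M rates change per model.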
Advanced language models process text using tokens, which are common sequences of characters in text. These models learn the statistical relationships between tokens to predict the next one in a sequence.
Tokenization is crucial for how these models interpret and generate text. It breaks down input text into smaller units (tokens) that the model can process.
The tokenization process can vary between different models. Newer models may use different tokenizers than older ones, potentially producing different tokens for the same input text. This can affect how the model processes text and impact token counts.
Understanding tokenization is helpful when working with these models, especially when considering input length limitations or optimizing text processing efficiency.
Note: Simple character- or word-based estimates may not match the exact token count a given model uses. For precise tokenization, use model-specific tokenizers.
For typical English text, one token often equals about 4 characters or ¾ of a word. As a rough estimate, 100 tokens ≈ 75 words.
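The rule of thumb above (one token ≈ 4 characters ≈ ¾ of a word) can be expressed as a quick estimator. This is only the heuristic from the text, not a real tokenizer, and the function names are illustrative:

```python
# Rough token estimates for typical English text, using the
# ~4 characters per token / ~0.75 words per token heuristic.
# These are approximations only, not exact model token counts.
def estimate_tokens_from_chars(text: str) -> int:
    return max(1, round(len(text) / 4))

def estimate_tokens_from_words(text: str) -> int:
    return max(1, round(len(text.split()) / 0.75))

sample = "Advanced language models process text using tokens."
print(estimate_tokens_from_chars(sample))  # → 13
print(estimate_tokens_from_words(sample))  # → 9
```

For anything where the count matters (billing, context-window limits), use a model-specific tokenizer instead of these estimates.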
For precise tokenization, developers can use programming libraries. In Python, the tiktoken package tokenizes text for OpenAI models; for JavaScript, the community-supported @dqbd/tiktoken package offers similar functionality. These tools give accurate token counts for billing estimates and context-window checks.