Understanding how tokens work and how billing is calculated in mixus.
Documentation Index
Fetch the complete documentation index at: https://docs.mixus.ai/llms.txt
Use this file to discover all available pages before exploring further.
What Are Tokens?
Tokens are the fundamental unit of measurement for AI model usage in mixus. Every interaction with an AI model consumes tokens based on:
- Input tokens: The text you send to the model
- Output tokens: The response generated by the model
- Processing complexity: Some operations require additional computational resources
Key Concepts
Token Counting
- Tokens are roughly equivalent to words, but vary by language and model
- English text averages ~1.3 tokens per word
- Code and structured data may have different token ratios
- Each model has its own tokenization method
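For a quick back-of-the-envelope estimate, the ~1.3 tokens-per-word average above can be turned into a tiny helper. This is a rough heuristic only: actual counts depend on each model's tokenizer, the language, and whether the text is prose or code.

```python
def estimate_tokens(text: str, tokens_per_word: float = 1.3) -> int:
    """Rough token estimate for English prose.

    Uses the ~1.3 tokens-per-word average; real counts vary by
    model tokenizer and content type (code tokenizes differently).
    """
    words = text.split()
    return round(len(words) * tokens_per_word)

print(estimate_tokens("Tokens are the fundamental unit of AI usage."))
```

Use this for budgeting and sanity checks, not for billing reconciliation; the model's own tokenizer is the source of truth.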
Billing Structure
mixus uses a transparent, usage-based billing model:
- Pay-per-token: Only pay for what you use
- No hidden fees: Clear pricing for all features
- Monthly billing: Consolidated invoices
- Usage tracking: Real-time monitoring
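Pay-per-token billing amounts to a simple linear calculation over input and output usage. The per-1k-token prices below are placeholders for illustration, not actual mixus rates; consult the pricing page for real numbers.

```python
def monthly_cost(input_tokens: int, output_tokens: int,
                 price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Usage-based bill: tokens consumed times per-1k-token rates.

    Prices are hypothetical examples; input and output tokens are
    typically priced differently.
    """
    return (input_tokens / 1000 * price_in_per_1k
            + output_tokens / 1000 * price_out_per_1k)

# Example with placeholder rates of $0.01/1k input and $0.03/1k output:
print(f"${monthly_cost(50_000, 20_000, 0.01, 0.03):.2f}")  # $1.10
```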
Token Types
Input Tokens
Tokens from your prompts, uploaded files, and context:
- Chat messages
- File content
- System prompts
- Memory context
Output Tokens
Tokens generated by the AI model:
- Chat responses
- Generated code
- Analysis results
- Tool outputs
Processing Tokens
Additional tokens for special operations:
- Web search queries
- Document analysis
- Image processing
- Agent execution
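The three token categories above can be tracked together in one usage record. This is an illustrative sketch of how you might aggregate them client-side, not a mixus API.

```python
from dataclasses import dataclass

@dataclass
class TokenUsage:
    """Aggregates the three token types described above."""
    input_tokens: int = 0       # prompts, file content, system prompts, memory context
    output_tokens: int = 0      # responses, generated code, analysis, tool outputs
    processing_tokens: int = 0  # web search, document analysis, image processing, agents

    def total(self) -> int:
        return self.input_tokens + self.output_tokens + self.processing_tokens

usage = TokenUsage(input_tokens=1_200, output_tokens=800, processing_tokens=150)
print(usage.total())  # 2150
```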
Pricing Tiers
Free Tier
- 1,000 tokens per month
- Access to basic models
- Standard support
Pro Tier
- $20/month base
- 100,000 included tokens
- Premium model access
- Priority support
Team Tier
- $50/month base
- 500,000 included tokens
- Advanced collaboration features
- Team management tools
Enterprise
- Custom pricing
- Unlimited tokens
- Dedicated support
- Custom integrations
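Given the included-token allowances above, a small lookup can suggest the cheapest tier that covers an expected monthly volume. Overage rates are not listed here, so this sketch only checks whether usage fits within a tier's included tokens.

```python
# (name, base price in USD/month, included tokens) — figures from the tier list above.
TIERS = [
    ("Free", 0, 1_000),
    ("Pro", 20, 100_000),
    ("Team", 50, 500_000),
]

def cheapest_tier(monthly_tokens: int) -> str:
    """Lowest-cost tier whose included tokens cover the usage.

    Falls back to Enterprise (custom pricing, unlimited tokens)
    when no fixed tier is large enough. Ignores overage pricing,
    which is not specified here.
    """
    for name, _base_price, included in TIERS:
        if monthly_tokens <= included:
            return name
    return "Enterprise"

print(cheapest_tier(150_000))  # Team
```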
Cost Optimization
Best Practices
- Use appropriate models: Choose the right model for your task
- Optimize prompts: Shorter, clearer prompts use fewer tokens
- Manage context: Remove unnecessary conversation history
- Batch operations: Group similar tasks together
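Context management is the most mechanical of these practices: keep the most recent messages that fit a token budget and drop the rest. A minimal sketch, using the rough words-times-1.3 estimate from earlier as the default counter:

```python
def trim_history(messages, budget, estimate=lambda m: round(len(m.split()) * 1.3)):
    """Keep the most recent messages that fit within a token budget.

    Walks the history newest-first and stops once the budget would be
    exceeded, so older context is dropped first. `estimate` is a rough
    per-message token count; swap in a real tokenizer for accuracy.
    """
    kept, used = [], 0
    for msg in reversed(messages):
        cost = estimate(msg)
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))
```

More sophisticated strategies (summarizing dropped turns, pinning the system prompt) build on the same idea: spend the token budget on what the model actually needs.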
Model Efficiency
- GPT-4o mini: Most cost-effective for simple tasks
- GPT-4o: Best balance of capability and cost
- Claude 3.5 Sonnet: Excellent for analysis and reasoning
- o1-preview: Use for complex problem-solving only
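The guidance above can be encoded as a simple routing table. The task categories here are hypothetical labels, not a mixus feature; the model names come from the list above.

```python
# Hypothetical task-to-model routing based on the efficiency notes above.
MODEL_FOR_TASK = {
    "simple": "GPT-4o mini",          # most cost-effective for simple tasks
    "general": "GPT-4o",              # balance of capability and cost
    "analysis": "Claude 3.5 Sonnet",  # analysis and reasoning
    "complex": "o1-preview",          # reserve for complex problem-solving
}

def pick_model(task_type: str) -> str:
    """Route a task to a model, defaulting to the general-purpose choice."""
    return MODEL_FOR_TASK.get(task_type, "GPT-4o")

print(pick_model("simple"))  # GPT-4o mini
```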

