Hydra App – Credits Usage Rates

Hydra credits are bought in packages through the in-app purchase screen. The purchase price per credit depends on the package acquired. Check the app stores for the current price of the packages in your currency/region.

The purchased credits are loaded into your user account and are consumed on an ongoing basis as you use the app functions.

The tables below show the cost in credits of the different app functions, depending on the AI model selected:

Each function execution has a cost of 0.4 Hydra credits plus the applicable function cost.

Hydra credits usage on Chat functions:

Chat usage is charged based on the input tokens, the output tokens and the selected chat model. The table below presents the chat usage in Hydra credits per 1000 tokens for the different models; the consumption is calculated on the actual number of tokens used and is not rounded up to 1000.

| AI Chat model selected | Hydra credits per 1000 input tokens | Hydra credits per 1000 output tokens |
| --- | --- | --- |
| Open AI – GPT 3.5 Turbo | 0.25 | 0.75 |
| Open AI – GPT 4 | 15 | 30 |
| Open AI – GPT 4 Turbo | 5 | 15 |
| Open AI – GPT 4o | 2.5 | 7.5 |
| Open AI – GPT 4o mini | 0.075 | 0.3 |
| Open AI – o1 mini | 1.5 | 6 |
| Open AI – o1 Preview | 7.5 | 30 |
| Google – Gemini 1.5 Pro | 1.25 | 0.5 |
| Google – Gemini 1.5 Flash | 0.075 | 3 |
| Google – Gemini 1.0 Pro | 0.25 | 0.75 |
| Google – Gemini 2.0 Flash experimental | 2 | 1 |
| Mistral – Large | 1 | 3 |
| Mistral – Open 7b | 0.125 | 0.125 |
| Mistral – Open 8x7b | 0.35 | 0.35 |
| Mistral – Open 8x22b | 1 | 3 |
| Mistral – Small | 0.1 | 0.3 |
| Mistral – Nemo | 0.075 | 0.075 |
| Mistral – Codestral | 0.1 | 0.3 |
| Mistral – Pixtral 12b | 0.075 | 0.075 |
| Mistral – Pixtral Large | 1 | 3 |
| Anthropic – Claude 3 Opus | 7.5 | 37.5 |
| Anthropic – Claude 3 Sonnet | 1.5 | 7.5 |
| Anthropic – Claude 3 Haiku | 0.125 | 0.625 |
| Anthropic – Claude 3.5 Sonnet | 1.5 | 7.5 |
| Anthropic – Claude 3.5 Haiku | 0.4 | 2 |
| Meta Llama 3 – 70b | 0.295 | 0.395 |
| Meta Llama 3 – 8b | 0.12 | 0.12 |
| Meta Llama 3.1 – 405b | 1.5 | 1.5 |
| Meta Llama 3.1 – 70b | 0.45 | 0.45 |
| Meta Llama 3.1 – 8b | 0.1 | 0.1 |
| DeepSeek – R1 | 4 | 4 |
| DeepSeek – V3 | 0.45 | 0.45 |
| Qwen – 2.5 Coder 32b | 0.45 | 0.45 |
| Qwen – QWQ 32b | 0.45 | 0.45 |
| Qwen – 2.5 72b | 0.45 | 0.45 |
| Qwen – 2 VL 72b | 0.45 | 0.45 |
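As a worked example, the billing rule described above (per-token rates plus the flat 0.4-credit execution fee) can be sketched as follows. This is an illustrative sketch only; the function name and the rate excerpt are for demonstration and the rates are copied from the table above.

```python
BASE_FEE = 0.4  # flat Hydra credits charged per function execution

# (input rate, output rate) in Hydra credits per 1000 tokens,
# excerpt copied from the chat rates table above
CHAT_RATES = {
    "Open AI – GPT 4o": (2.5, 7.5),
    "Anthropic – Claude 3.5 Sonnet": (1.5, 7.5),
    "Meta Llama 3.1 – 8b": (0.1, 0.1),
}

def chat_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Hydra credits for one chat call (hypothetical helper).

    Consumption is pro-rated to the exact token counts rather than
    rounded up to the next 1000 tokens.
    """
    rate_in, rate_out = CHAT_RATES[model]
    return BASE_FEE + input_tokens / 1000 * rate_in + output_tokens / 1000 * rate_out

# Example: 250 input and 500 output tokens with GPT 4o:
# 0.4 + 0.25 * 2.5 + 0.5 * 7.5 = 4.775 credits
print(chat_cost("Open AI – GPT 4o", 250, 500))
```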
Understanding Tokens in LLM Chat Models:

Tokens are the basic units that Large Language Models (LLMs) such as ChatGPT use to understand and generate text. Tokens determine how the model processes text and how much you are charged for its use.

What are Tokens?

Tokens are pieces of text. They can be as small as a single letter or as large as a whole word. For example, the word “unbelievable” might be split into multiple tokens: [“un”, “believ”, “able”].
A phrase like “New York” could be a single token: [“New York”].

Input Tokens

Input Tokens are the tokens you provide to the model. If you type in the chat, “How’s the weather in New York?” it might break down into 7 tokens: [“How”, “‘s”, “the”, “weather”, “in”, “New York”, “?”].
The model uses these tokens to understand your question.

Output Tokens

Output Tokens are the tokens the model generates in response. If it replies, “It’s sunny in San Francisco,” it could be tokenized as: [“It”, “‘s”, “sunny”, “in”, “San Francisco”].
These tokens form the model’s response to your input.
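To make the idea concrete, here is a deliberately simplified tokenizer. It is a toy illustration only: real LLM tokenizers use learned byte-pair encodings and will split the same text differently (for instance, merging “New York” into one token, as in the example above).

```python
import re

def toy_tokenize(text: str) -> list[str]:
    """Toy word-and-punctuation splitter for illustration only.

    Splits runs of word characters, apostrophe suffixes like 's,
    and individual punctuation marks. Real LLM tokenizers (BPE-based)
    produce different, subword-level splits.
    """
    return re.findall(r"\w+|'\w+|[^\w\s]", text)

# This toy splitter keeps "New" and "York" separate;
# a real model's tokenizer may merge or split pieces differently.
print(toy_tokenize("How's the weather in New York?"))
# → ['How', "'s", 'the', 'weather', 'in', 'New', 'York', '?']
```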

Hydra credits usage on Image generation functions:

When using the Image generation or Image variant function, the usage cost is calculated per image generated. The table below presents the amount of Hydra credits charged for each image generated, depending on the model selected.

| AI Image model selected | Hydra credits per image generated |
| --- | --- |
| dall-e-2@256×256 | 5100 |
| dall-e-2@512×512 | 5725 |
| dall-e-2@1024×1024 | 6350 |
| dall-e-3@1024×1024 | 12600 |
| dall-e-3@1792×1024 | 25100 |
| dall-e-3@1024×1792 | 25100 |
| dall-e-3@1024x1024hd | 25100 |
| dall-e-3@1024x1792hd | 37600 |
| dall-e-3@1792x1024hd | 37600 |
| Stability Ai Stable Ultra | 4100 |
| Stability Ai SD3 Large | 3350 |
| Stability Ai SD3 Large Turbo | 2100 |
| Stability Ai SD3 Medium | 1850 |
The Image variant function uses the dall-e-2@1024×1024 model by default.
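The per-image billing can be sketched the same way as the chat billing. This is an illustrative sketch with a hypothetical helper function; the rate excerpt is copied from the image table above, and it assumes the flat 0.4-credit execution fee also applies here, as stated for function executions earlier in this document.

```python
BASE_FEE = 0.4  # flat Hydra credits charged per function execution

# Hydra credits per generated image, excerpt from the table above
IMAGE_RATES = {
    "dall-e-2@1024×1024": 6350,
    "dall-e-3@1024×1024": 12600,
    "Stability Ai SD3 Medium": 1850,
}

def image_cost(model: str, n_images: int) -> float:
    """Hydra credits for one image-generation execution (hypothetical helper)."""
    return BASE_FEE + n_images * IMAGE_RATES[model]

# Two dall-e-3 1024×1024 images: 0.4 + 2 * 12600 = 25200.4 credits
print(image_cost("dall-e-3@1024×1024", 2))
```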