Hydra App – Credits Usage Rates

Hydra credits are bought in packages through the in-app purchase screen. The purchase price per credit depends on the package acquired. Check the app stores for the current package prices in your currency/region.

Purchased credits are added to your user account and are consumed on an ongoing basis as you use the app functions.

The tables below show the cost in credits for the different app functions, depending on the AI model selected.

In addition, each function execution carries a base fee of 0.4 Hydra credits on top of the applicable function cost.

Hydra credits usage on Chat functions:

Chat usage is charged based on the input tokens, the output tokens and the selected chat model. The table below presents the chat usage cost in Hydra credits per 1,000 tokens for the different models; consumption is calculated on the actual number of tokens used and is not rounded to the nearest 1,000. A worked example follows the table.

AI Chat model selected | Hydra credits per 1,000 input tokens | Hydra credits per 1,000 output tokens
OpenAI – GPT 3.5 Turbo | 0.25 | 0.75
OpenAI – GPT 4 | 15 | 30
OpenAI – GPT 4 Turbo | 5 | 15
OpenAI – GPT 4o | 2.5 | 7.5
OpenAI – GPT 4o Mini | 0.075 | 0.3
Google – Gemini 1.5 Pro | 1.75 | 5.25
Google – Gemini 1.5 Flash | 0.175 | 0.525
Mistral – Large | 2 | 6
Mistral – Open 7b | 0.125 | 0.125
Mistral – Open 8x7b | 0.35 | 0.35
Mistral – Open 8x22b | 1 | 3
Mistral – Large 2 | 1.5 | 4.5
Mistral – Nemo | 0.15 | 0.15
Mistral – Codestral | 0.5 | 1.5
Anthropic – Claude 3 Opus | 7.5 | 37.5
Anthropic – Claude 3 Sonnet | 1.5 | 7.5
Anthropic – Claude 3 Haiku | 0.125 | 0.625
Anthropic – Claude 3.5 Sonnet | 1.5 | 7.5
Meta Llama 3 – 70b | 0.45 | 0.45
Meta Llama 3 – 8b | 0.1 | 0.1
Meta Llama 3.1 – 405b | 1.5 | 1.5
Meta Llama 3.1 – 70b | 0.45 | 0.45
Meta Llama 3.1 – 8b | 0.1 | 0.1
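
To make the arithmetic concrete, here is a minimal sketch of the cost calculation, using the per-1,000-token rates from the table above and the 0.4-credit base fee per execution. The dictionary, the function name and the example numbers are illustrative only and are not part of the Hydra app or its API.

```python
# Minimal sketch: estimate the Hydra credit cost of a single chat request.
# Rates are taken from the table above (credits per 1,000 tokens); the
# dictionary keys and helper function are illustrative, not an official API.
CHAT_RATES = {
    # model: (credits per 1,000 input tokens, credits per 1,000 output tokens)
    "OpenAI - GPT 3.5 Turbo": (0.25, 0.75),
    "OpenAI - GPT 4o": (2.5, 7.5),
    "Anthropic - Claude 3.5 Sonnet": (1.5, 7.5),
}

BASE_FEE = 0.4  # Hydra credits charged per function execution


def chat_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Base fee plus per-token charges, pro-rated (not rounded) per 1,000 tokens."""
    in_rate, out_rate = CHAT_RATES[model]
    return BASE_FEE + input_tokens / 1000 * in_rate + output_tokens / 1000 * out_rate


# Example: 1,200 input tokens and 350 output tokens with GPT 4o
# 0.4 + 1.2 * 2.5 + 0.35 * 7.5 = 6.025 credits
print(f"{chat_cost('OpenAI - GPT 4o', 1200, 350):.3f} credits")
```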
Understanding Tokens in LLM Chat Models:

Tokens are the basic units that Large Language Models (LLMs) like ChatGPT use to understand and generate text. They determine how the model processes text and how much you are charged for its use.

What are Tokens?

Tokens are pieces of text. They can be as small as a single character or as large as a whole word or short phrase. For example, the word “unbelievable” might be split into multiple tokens: [“un”, “believ”, “able”], while a phrase like “New York” could be a single token: [“New York”].
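
The exact split depends entirely on the tokenizer of the model you select, so the splits above are only illustrative. As a rough demonstration, the sketch below uses OpenAI’s tiktoken library (an assumption; it only applies to OpenAI models, and other providers use their own tokenizers) to show how some sample text is actually split.

```python
# Minimal sketch: inspect how an OpenAI tokenizer splits text into tokens.
# Other providers (Anthropic, Google, Mistral, Meta) use different tokenizers,
# so their splits and counts will differ.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

for text in ["unbelievable", "New York", "How's the weather in New York?"]:
    token_ids = encoding.encode(text)
    pieces = [encoding.decode([tid]) for tid in token_ids]
    print(f"{text!r} -> {len(token_ids)} tokens: {pieces}")
```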

Input Tokens

Input Tokens are the tokens you provide to the model. If you type in the chat, “How’s the weather in New York?”, it might break down into 7 tokens: [“How”, “‘s”, “the”, “weather”, “in”, “New York”, “?”].
The model uses these tokens to understand your question.

Output Tokens

Output Tokens are the tokens the model generates in response. If it replies, “It’s sunny in San Francisco,” it could be tokenized as: [“It”, “‘s”, “sunny”, “in”, “San Francisco”].
These tokens form the model’s response to your input.
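
Putting the two together, the snippet below counts the input and output tokens of one exchange, again assuming an OpenAI model so that the tiktoken tokenizer from the earlier sketch applies. These two counts are the numbers that the per-1,000-token rates in the chat table are applied to.

```python
# Minimal sketch: count the input and output tokens of one chat exchange.
# Assumes an OpenAI model; real chat requests also add a few tokens of
# message-formatting overhead on top of the raw text.
import tiktoken

encoding = tiktoken.encoding_for_model("gpt-3.5-turbo")

prompt = "How's the weather in New York?"
reply = "It's sunny in San Francisco."

input_tokens = len(encoding.encode(prompt))
output_tokens = len(encoding.encode(reply))

print(f"input tokens: {input_tokens}, output tokens: {output_tokens}")
```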

Hydra credits usage on Image generation functions:

When using the Image generation or Image variant functions, the usage cost is charged per image generated. The table below presents the amount of Hydra credits charged for each image generated, depending on the model selected; a worked example follows the table.

AI Image model selected | Hydra credits per image generated
dall-e-2@256×256 | 5100
dall-e-2@512×512 | 5725
dall-e-2@1024×1024 | 6350
dall-e-3@1024×1024 | 12600
dall-e-3@1792×1024 | 25100
dall-e-3@1024×1792 | 25100
dall-e-3@1024x1024hd | 25100
dall-e-3@1024x1792hd | 37600
dall-e-3@1792x1024hd | 37600
The Image variant function uses the dall-e-2@1024×1024 model by default.
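
As with chat, the sketch below is only an illustrative calculator: it takes the per-image rates from the table above and adds the 0.4-credit base fee, assuming the base fee is charged once per generation call. The dictionary and function are not part of the Hydra app.

```python
# Minimal sketch: estimate the Hydra credit cost of an image generation call.
# Rates come from the table above; the 0.4-credit base fee is assumed to
# apply once per function execution.
IMAGE_RATES = {
    "dall-e-2@1024x1024": 6350,    # also used by the Image variant function by default
    "dall-e-3@1024x1024": 12600,
    "dall-e-3@1024x1024hd": 25100,
}

BASE_FEE = 0.4  # Hydra credits per function execution


def image_cost(model: str, images: int = 1) -> float:
    """Base fee plus the per-image rate for each image generated."""
    return BASE_FEE + images * IMAGE_RATES[model]


# Example: one dall-e-3@1024x1024 image -> 0.4 + 12600 = 12600.4 credits
print(image_cost("dall-e-3@1024x1024"))
```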
