Eden's credit system is designed to bill you accurately based on your actual AI usage. Different models and the amount of context you use affect the cost, so understanding how to work efficiently can help you get more value from every credit.
"Credits" is how Eden translates "Tokens" into something we can measure internally. Tokens are how AI models read and write — think of them as pieces of words. Our credit system almost directly correlates to token usage.
Why Eden Works Differently
Eden is a drive, not just an AI chat app. You can easily work with large files and videos because we automatically transcribe and analyze content — including every frame of video. This means you can reference entire documents, YouTube videos, podcasts, and more without worrying about context limits.
However, this powerful capability also means that how you structure your work impacts your credit usage. Because Eden makes it easy to reference large files, it can also be easy to burn through your plan's AI credits — especially if you're using an expensive model like Claude Opus 4.5 while referencing large new files in every message. See our AI Credits Guide for detailed tables showing exactly how far your credits go across different models and usage patterns.
Important: File processing and storage are included in your subscription tier — only AI usage consumes credits. Any file or link you add to your workspace is transcribed, analyzed, and indexed as part of your plan's cost. Eden is a cloud drive, not only an AI tool.
Understanding Context in Longer Chats
"Context" is the information the AI considers when generating a response.
This is how every AI tool works — not just Eden. The more you add to a chat, the more context is processed with each message. Long chats use more credits because the AI re-reads your entire conversation history every time it responds.
In Eden, context can include your conversation history, any @mentioned workspace items, and project content. As conversations grow longer, the context window expands, which increases credit usage with each new message.
The good news: Eden caches your previous context at a 90% discount. When you stay in the same conversation, you pay full price only for the new content you add; everything the AI has already seen costs 90% less to re-read. This is why staying in one conversation is much cheaper when you're asking multiple questions about the same file. However, if every message brings a brand-new large file, caching can't help with that file, and the growing history still adds to the cost.
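To make the caching math concrete, here is a small sketch. The per-1,000-token rates are hypothetical, illustrative numbers, not Eden's actual pricing; the only fact carried over from above is that cached context costs 90% less than fresh context.

```python
# Hypothetical, illustrative rates -- not Eden's actual pricing.
FRESH_RATE = 1.0                  # credits per 1,000 fresh (uncached) tokens
CACHED_RATE = FRESH_RATE * 0.10   # cached context is 90% cheaper

def message_cost(new_tokens, cached_tokens):
    """Credit cost of one message: new content at full price,
    previously seen context at the 90% cache discount."""
    return (new_tokens / 1000) * FRESH_RATE + (cached_tokens / 1000) * CACHED_RATE

# Follow-up question in the same chat: 200 new tokens, plus
# 50,000 tokens of already-cached file and history.
same_chat = message_cost(200, 50_000)

# The same question in a new chat, re-sending that 50,000-token file fresh.
new_chat = message_cost(50_200, 0)

print(f"same chat:  {same_chat:.2f} credits")   # 0.20 + 5.00 = 5.20
print(f"fresh chat: {new_chat:.2f} credits")    # 50.20
```

Even though the long chat carries far more context, the cache discount makes the follow-up question roughly a tenth of the cost of starting over.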
The strategy for making the most of your credits is simple: keep conversations focused, summarize valuable insights into reusable documents, and start fresh chats when you shift to a new topic. This approach gives you the benefits of deep context without the cost of carrying unnecessary history forward.
If you still run out of credits fast, the way you work may be better suited for the Creator or Pro plan, or you can purchase Credit Add-On Packs ($25 for 1,600 credits) without upgrading your tier.
Tips to Maximize Your Credits
1. Start Fresh Chats When Referencing New Large Files
If you're working through a stack of different documents, video transcripts, or large files — referencing a new one with each message — starting a fresh chat for each file is actually cheaper than keeping one long conversation going. That's because in a continuous chat, your prior conversation history accumulates and gets re-processed on top of each new file, adding cost without any caching benefit on the new content.
On the other hand, if you're asking multiple follow-up questions about the same file, staying in one conversation is much cheaper — caching gives you a 90% discount on re-reading content the AI has already seen.
The rule of thumb: stay in the same chat when you're working with the same context. Start a new chat when you're moving to a different file or topic.
Keep in mind: Sending a simple message like "hello" to a long chat uses the same amount of credits as if you were resending that entire chat — because you are.
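The trade-off described above can be sketched numerically. The rates and file sizes below are hypothetical, illustrative numbers, not Eden's actual pricing; the sketch only assumes that cached context costs 90% less than fresh context.

```python
# Hypothetical, illustrative rates -- not Eden's actual pricing.
FRESH = 1.0    # credits per 1,000 uncached tokens
CACHED = 0.1   # credits per 1,000 cached tokens (the 90% discount)

FILE_TOKENS = 20_000   # size of each new document you reference
FILES = 5              # five different documents, one per message

# One long chat: each new file is charged fresh, and every previously
# seen file rides along as cheaper, but not free, cached context.
long_chat = 0.0
history = 0
for _ in range(FILES):
    long_chat += (FILE_TOKENS / 1000) * FRESH + (history / 1000) * CACHED
    history += FILE_TOKENS

# A fresh chat per file: each message pays only for its own file.
fresh_chats = FILES * (FILE_TOKENS / 1000) * FRESH

print(f"one long chat: {long_chat:.0f} credits")   # 120
print(f"fresh chats:   {fresh_chats:.0f} credits") # 100
```

With a new file in every message, caching never helps the new content, so the accumulated history is pure overhead; reverse the scenario (one file, many follow-ups) and the long chat wins instead.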
2. Summarize and Create "Knowledge Documents"
When a chat becomes long or produces valuable insights, ask the AI to summarize the key points so nothing important is lost, then save that summary as a note or document in your workspace. In future chats, simply @mention that document instead of continuing the old conversation. This gives you the same context at a fraction of the token cost.
3. Choose the Right Model for the Task
Not all AI models cost the same — and the differences are dramatic. A cheap model like Gemini 3 Flash can be 5–10x less expensive per message than a premium model like Claude Opus 4.5. Eden defaults to our "Best" model for high-quality results, but not every task needs the most expensive option.
Use premium models like Claude Opus 4.5 for complex analysis, creative work, or when accuracy is critical. Switch to less expensive models like Gemini Flash for simple questions, quick lookups, or routine tasks. Model choice alone is one of the biggest factors in how far your credits go.
4. Be Specific and Concise in Your Prompts
Every word in your prompt costs tokens. Be direct and clear — remove unnecessary explanations, examples, or verbosity. Specific prompts also reduce back-and-forth exchanges and help the AI understand what you need on the first try, which saves on output tokens as well.
5. Use Projects to Organize Your Work
Projects give you a dedicated space for specific work, allowing you to centralize relevant files and conversations. Instead of repeatedly uploading or mentioning the same files across different chats, add them to a project once. This reduces redundant context and keeps your work organized.
When chatting inside a Project, Eden AI searches for the most relevant information before pulling it into the chat's context.
6. Reference Files Selectively
When working with large documents or videos, @mention only the files that are directly relevant to your current question. If Eden AI doesn't have what it needs, it can query across your entire workspace with synthesized, sourced answers — so you don't need to include everything in every conversation. Each large file you reference adds significantly to the input cost of that message.
7. Edit Files Directly
Rather than asking the AI to regenerate entire documents, use Eden's direct editing feature to make targeted changes yourself. This preserves the AI's output while avoiding unnecessary regeneration costs.
8. Be Mindful of Output Length
Output tokens cost 3–5x more than input tokens across all models. If you don't need a 2,000-word essay, ask the AI to be concise — it directly saves credits.
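A quick sketch of why response length matters. The rates are hypothetical, illustrative numbers; the only fact taken from above is that output tokens cost several times more than input tokens (a 4x multiplier is assumed here, within the quoted 3-5x range).

```python
# Hypothetical, illustrative rates -- not Eden's actual pricing.
INPUT_RATE = 1.0               # credits per 1,000 input tokens
OUTPUT_RATE = INPUT_RATE * 4   # output assumed 4x, within the 3-5x range

def reply_cost(input_tokens, output_tokens):
    """Total credit cost of one exchange: prompt in, response out."""
    return (input_tokens / 1000) * INPUT_RATE + (output_tokens / 1000) * OUTPUT_RATE

# Same 1,000-token prompt; a 2,000-word essay (~2,700 tokens)
# versus a concise 300-word answer (~400 tokens).
essay = reply_cost(1_000, 2_700)     # 1.0 + 10.8 = 11.8 credits
concise = reply_cost(1_000, 400)     # 1.0 + 1.6  = 2.6 credits

print(f"essay reply:   {essay:.1f} credits")
print(f"concise reply: {concise:.1f} credits")
```

Because of the output multiplier, asking for a concise answer cuts the cost of this exchange by more than three-quarters even though the prompt is identical.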
Thank you.