What You're Paying For
Eden is a drive, not an AI chat app. Your subscription tier primarily pays for:
- Storage space — especially for video content like YouTube videos
- File transcription — converting documents, audio, and video into searchable text
- Video frame analysis — extracting and analyzing visual content from videos
- Link transcription — downloading, transcribing, and analyzing frames from sources like YouTube videos
- Indexing — making all of your content searchable and referenceable by AI
AI credits are included in each tier, but they are not the majority of what you're paying for. The processing and storage features above do not consume your AI credits — they are baked into the cost of each tier.
How AI Credits Work
Each subscription tier includes a set number of AI credits:
| Tier | Price | AI Credits Included |
|---|---|---|
| Starter | $17/mo | 700 |
| Creator | $35/mo | 1,500 |
| Pro | $99/mo | 4,500 |
| Credit Add-On Pack | $25 | 1,600 |
Credits Can Go Fast
It's possible to run through your credits very quickly depending on how you use Eden. If you're referencing large new files — like different PDFs, video transcripts, or documents — in every message, and you're using an expensive model like Claude Opus 4.5, a Starter tier's worth of credits can be consumed in as few as 16 messages (see Table 4). This is an extreme case, but it illustrates how much usage pattern and model choice matter. Understanding the tables below will help you make informed decisions about when to use which model and how to structure your conversations.
A Note on Model Pricing
Not all AI models cost the same — and the differences are dramatic. A cheap model like Gemini 3 Flash can be 5–10x less expensive per message than a premium model like Claude Opus 4.5. Choosing the right model for the job is one of the biggest factors in how far your credits go. Use lightweight models for everyday tasks and save premium models for work that genuinely requires deeper intelligence.
Getting the Most AI for Your Money
If you don't need the extra storage, transcription, and processing that come with higher tiers — and you primarily want more AI usage — consider joining the Starter tier ($17/mo) and purchasing Credit Add-On Packs separately. Each add-on pack costs $25 for 1,600 credits, and you can buy as many as you need. This approach lets you scale your AI usage independently without paying for storage and processing you don't use. See Table 5 for a full breakdown of how far each add-on pack goes.
You can purchase credits by going to Settings > Billing > Manage AI Credits.
Understanding AI Usage in Eden
Since Eden is a drive, you're typically not using AI the way you would in a casual chat app. Eden is built for serious work — research, analysis, content creation, and working with large files. This means your AI usage will generally cost more per interaction than simple back-and-forth chatting.
There are three main patterns of AI usage in Eden, and each consumes credits differently:
Conversational Use (Small Input, Small Output)
Standard back-and-forth messages where you ask questions and get responses. Around 1,000 words in and 1,000 words out per exchange. This is the lightest usage pattern and stretches your credits the furthest.
Large Input Work (Big Input, Small Output)
You attach a large document, PDF, or video transcript and ask the AI to analyze, summarize, or answer questions about it. The large file dominates the cost because the AI has to process all of that context. Follow-up questions in the same conversation are much cheaper because the file gets cached.
Agentic Work (Large Input and Output)
The AI searches through multiple sources, reads several long articles, and produces multiple documents or reports. This is the most credit-intensive pattern because both input (reading sources) and output (writing deliverables) are high. This is typical of research workflows, content creation pipelines, and deep analysis tasks.
How Caching Saves You Credits
When you stay in the same conversation, Eden caches your previous context at a 90% discount. This means:
- First message with a large PDF: you pay full price for the AI to read it
- Every follow-up question about that PDF: the cached version costs 90% less to reference
This is why the tables below show two columns: "New chats" (starting fresh every time) and "One continuous chat" or "Continuous session" (staying in the same thread). For large files especially, staying in one conversation saves significant credits.
Important caveat: Caching only helps when you're re-referencing the same content. If every message brings a new file, the caching benefit disappears — and your conversation history actually adds to the cost. See Table 4 for what this looks like in practice.
Second caveat: Caching only works if you stay on the same model. If you switch models partway through a chat — or several times over its course — the earlier messages in that chat are no longer cached.
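To make the caching math concrete, here is a rough back-of-envelope sketch. The credit rates below are illustrative assumptions, not Eden's actual pricing; only the 90% cache discount comes from this guide.

```python
# Back-of-envelope model of cached vs. uncached credit cost.
# INPUT_RATE and OUTPUT_RATE are illustrative assumptions, NOT
# Eden's real pricing; the 90% cache discount is from this guide.

INPUT_RATE = 1.0       # assumed credits per 1,000 input tokens
OUTPUT_RATE = 4.0      # assumed credits per 1,000 output tokens
CACHE_DISCOUNT = 0.90  # cached context costs 90% less

def chat_cost(messages, doc_tokens, reply_tokens, cached):
    """Total credits for `messages` questions about one large document."""
    total = 0.0
    for i in range(messages):
        if i == 0 or not cached:
            doc_cost = doc_tokens / 1000 * INPUT_RATE  # full price
        else:
            # follow-ups reference the cached copy at a 90% discount
            doc_cost = doc_tokens / 1000 * INPUT_RATE * (1 - CACHE_DISCOUNT)
        total += doc_cost + reply_tokens / 1000 * OUTPUT_RATE
    return total

# 10 questions about a ~75,000-token PDF, ~1,300-token answers each:
fresh = chat_cost(10, 75_000, 1_300, cached=False)   # new chat every time
cached = chat_cost(10, 75_000, 1_300, cached=True)   # one continuous chat
```

Under these assumed rates, the continuous chat comes out roughly 4x cheaper than re-sending the PDF every time — consistent with the 3–5x savings described for large-file work.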
A Note on These Tables
The tables below are based on predictable, standardized scenarios — like adding the same PDF to every message or producing a fixed amount of output each time. Your individual usage will differ, and these numbers can vary widely. Please use these tables as a starting point to understand how credits work and how different models and usage patterns affect cost. Your actual usage will determine how far your credits go.
Table 1: Conversational Use
Scenario: ~1,000 words in, ~1,000 words out per message. Standard Q&A, brainstorming, writing assistance, and general conversation.
This is the lightest usage pattern. "New chats" assumes each message starts a fresh conversation. "One continuous chat" assumes an ongoing thread where conversation history is cached.
| Model | Tier | New chats (messages) | One continuous chat (messages) |
|---|---|---|---|
| Gemini 3 Flash | Starter | ~1,540 | ~1,200–1,350 |
| Gemini 3 Flash | Creator | ~3,300 | ~2,500–2,800 |
| Gemini 3 Flash | Pro | ~9,890 | ~7,000–8,000 |
| GPT-4o | Starter | ~560 | ~330–370 |
| GPT-4o | Creator | ~1,200 | ~700–800 |
| GPT-4o | Pro | ~3,600 | ~2,000–2,300 |
| Claude Haiku 4.5 | Starter | ~900 | ~700–750 |
| Claude Haiku 4.5 | Creator | ~1,920 | ~1,400–1,500 |
| Claude Haiku 4.5 | Pro | ~5,770 | ~3,800–4,200 |
| Gemini 3 Pro | Starter | ~385 | ~200–230 |
| Gemini 3 Pro | Creator | ~825 | ~440–500 |
| Gemini 3 Pro | Pro | ~2,470 | ~1,250–1,400 |
| GPT-5.2 | Starter | ~445 | ~240–270 |
| GPT-5.2 | Creator | ~955 | ~510–580 |
| GPT-5.2 | Pro | ~2,860 | ~1,500–1,700 |
| Claude Sonnet 4.5 | Starter | ~300 | ~150–170 |
| Claude Sonnet 4.5 | Creator | ~640 | ~350–400 |
| Claude Sonnet 4.5 | Pro | ~1,920 | ~1,000–1,100 |
| Claude Opus 4.5 | Starter | ~180 | ~100–110 |
| Claude Opus 4.5 | Creator | ~385 | ~210–240 |
| Claude Opus 4.5 | Pro | ~1,150 | ~600–680 |
Table 2: Large Input Work (PDF / Document Analysis / Video Referencing)
Scenario: You attach a large PDF (~75–100 pages, roughly 75,000 tokens — about the length of three podcast transcripts, or several smaller documents combined) and ask questions about it, receiving ~1,000 words of output per message (actual output varies with the task).
This is where caching makes the biggest difference. Uploading the same large file in a new chat every time is expensive because the AI re-reads the entire document at full price. Staying in one conversation and asking follow-up questions is 3–5x cheaper because the document is cached.
| Model | Tier | New chats (messages, PDF sent each time) | One continuous chat (messages, PDF cached) |
|---|---|---|---|
| Gemini 3 Flash | Starter | ~165 | ~540–600 |
| Gemini 3 Flash | Creator | ~355 | ~1,150–1,300 |
| Gemini 3 Flash | Pro | ~1,060 | ~3,400–3,800 |
| GPT-4o | Starter | ~34 | ~175–200 |
| GPT-4o | Creator | ~73 | ~370–420 |
| GPT-4o | Pro | ~220 | ~1,100–1,250 |
| Claude Haiku 4.5 | Starter | ~81 | ~400–450 |
| Claude Haiku 4.5 | Creator | ~174 | ~850–960 |
| Claude Haiku 4.5 | Pro | ~521 | ~2,500–2,800 |
| Gemini 3 Pro | Starter | ~35 | ~115–130 |
| Gemini 3 Pro | Creator | ~75 | ~250–280 |
| Gemini 3 Pro | Pro | ~224 | ~730–830 |
| GPT-5.2 | Starter | ~38 | ~130–150 |
| GPT-5.2 | Creator | ~82 | ~280–320 |
| GPT-5.2 | Pro | ~245 | ~830–940 |
| Claude Sonnet 4.5 | Starter | ~27 | ~90–100 |
| Claude Sonnet 4.5 | Creator | ~58 | ~190–215 |
| Claude Sonnet 4.5 | Pro | ~174 | ~570–640 |
| Claude Opus 4.5 | Starter | ~16 | ~55–65 |
| Claude Opus 4.5 | Creator | ~35 | ~120–135 |
| Claude Opus 4.5 | Pro | ~104 | ~350–400 |
Table 3: Agentic Work (Research & Content Creation)
Scenario: The AI searches through and reads 4–5 articles (~2,000 words each), then produces 2–3 documents (~1,000 words each). This represents a single research or content creation task — roughly 65,000 input tokens and 5,250 output tokens per task.
Caching helps less here because the source articles change with each task. The "continuous session" column reflects a small benefit from caching system instructions, but the fresh article content must be processed at full price each time.
| Model | Tier | Standalone tasks | Continuous session (tasks) |
|---|---|---|---|
| Gemini 3 Flash | Starter | ~310 | ~330–350 |
| Gemini 3 Flash | Creator | ~670 | ~710–750 |
| Gemini 3 Flash | Pro | ~2,000 | ~2,120–2,250 |
| GPT-4o | Starter | ~90 | ~95–100 |
| GPT-4o | Creator | ~195 | ~205–215 |
| GPT-4o | Pro | ~580 | ~615–650 |
| Claude Haiku 4.5 | Starter | ~175 | ~185–195 |
| Claude Haiku 4.5 | Creator | ~375 | ~400–420 |
| Claude Haiku 4.5 | Pro | ~1,125 | ~1,190–1,260 |
| Gemini 3 Pro | Starter | ~65 | ~69–73 |
| Gemini 3 Pro | Creator | ~140 | ~148–156 |
| Gemini 3 Pro | Pro | ~420 | ~445–470 |
| GPT-5.2 | Starter | ~69 | ~73–77 |
| GPT-5.2 | Creator | ~148 | ~157–165 |
| GPT-5.2 | Pro | ~445 | ~470–500 |
| Claude Sonnet 4.5 | Starter | ~55 | ~58–61 |
| Claude Sonnet 4.5 | Creator | ~118 | ~125–132 |
| Claude Sonnet 4.5 | Pro | ~355 | ~375–395 |
| Claude Opus 4.5 | Starter | ~30 | ~32–34 |
| Claude Opus 4.5 | Creator | ~65 | ~69–73 |
| Claude Opus 4.5 | Pro | ~195 | ~206–218 |
Table 4: Multi-File Reference (New Large File Each Message)
Scenario: You reference a different large file (~75–100 pages, ~75,000 tokens) with each message and receive ~1,000 words of output per message. This represents working through a stack of different documents, reviewing multiple video transcripts, or comparing separate reports back-to-back.
This is the most expensive way to use large files. Unlike Table 2 — where you upload one file and ask many follow-up questions about it — here every message introduces a brand-new file that must be processed at full price. Caching cannot help with the files themselves since each one is different.
"Separate chats" means each file is handled in its own conversation with no history. "One conversation" means you're working through all of the files in a single thread, where your prior Q&A history accumulates and adds to the cost on top of each new file. In this scenario, staying in one conversation is actually more expensive because the growing history stacks on top of the already-costly new file each time.
| Model | Tier | Separate chats (messages) | One conversation (messages) |
|---|---|---|---|
| Gemini 3 Flash | Starter | ~166 | ~137 |
| Gemini 3 Flash | Creator | ~356 | ~255 |
| Gemini 3 Flash | Pro | ~1,070 | ~569 |
| GPT-4o | Starter | ~34 | ~32 |
| GPT-4o | Creator | ~73 | ~66 |
| GPT-4o | Pro | ~220 | ~173 |
| Claude Haiku 4.5 | Starter | ~84 | ~75 |
| Claude Haiku 4.5 | Creator | ~181 | ~147 |
| Claude Haiku 4.5 | Pro | ~543 | ~350 |
| Gemini 3 Pro | Starter | ~41 | ~39 |
| Gemini 3 Pro | Creator | ~89 | ~79 |
| Gemini 3 Pro | Pro | ~267 | ~203 |
| GPT-5.2 | Starter | ~46 | ~43 |
| GPT-5.2 | Creator | ~98 | ~87 |
| GPT-5.2 | Pro | ~296 | ~222 |
| Claude Sonnet 4.5 | Starter | ~28 | ~27 |
| Claude Sonnet 4.5 | Creator | ~60 | ~55 |
| Claude Sonnet 4.5 | Pro | ~181 | ~147 |
| Claude Opus 4.5 | Starter | ~16 | ~16 |
| Claude Opus 4.5 | Creator | ~36 | ~34 |
| Claude Opus 4.5 | Pro | ~108 | ~94 |
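The dynamic behind this scenario can be sketched with the same kind of back-of-envelope math. The credit rates below are illustrative assumptions, not Eden's actual pricing; only the 90% discount on cached history comes from this guide.

```python
# Sketch of why one long conversation costs MORE when every message
# brings a NEW large file. Rates are illustrative assumptions, NOT
# Eden's real pricing; the 90% history discount is from this guide.

INPUT_RATE = 1.0       # assumed credits per 1,000 input tokens
OUTPUT_RATE = 4.0      # assumed credits per 1,000 output tokens
CACHE_DISCOUNT = 0.90  # cached history costs 90% less

def separate_chats(files, file_tokens, reply_tokens):
    """Each file reviewed in its own fresh conversation: no history."""
    per_message = (file_tokens / 1000 * INPUT_RATE
                   + reply_tokens / 1000 * OUTPUT_RATE)
    return files * per_message

def one_conversation(files, file_tokens, reply_tokens):
    """All files in one thread: the growing history is cheap (cached),
    but it still stacks on top of each full-price new file."""
    total, history = 0.0, 0
    for _ in range(files):
        new_file = file_tokens / 1000 * INPUT_RATE  # full price every time
        old_history = history / 1000 * INPUT_RATE * (1 - CACHE_DISCOUNT)
        total += new_file + old_history + reply_tokens / 1000 * OUTPUT_RATE
        history += file_tokens + reply_tokens  # history keeps growing
    return total

# Reviewing 10 different ~75,000-token files, ~1,300-token answers each:
split = separate_chats(10, 75_000, 1_300)
single = one_conversation(10, 75_000, 1_300)
```

Under these assumptions, the single thread ends up noticeably more expensive than separate chats, which is why the "One conversation" column shows fewer messages per tier.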
Need More AI Credits? Get a Credit Add-On Pack
If you don't need more storage or processing power and just want more AI usage, you don't have to upgrade your tier. Instead, consider purchasing a Credit Add-On Pack.
Credit Add-On Pack: $25 for 1,600 credits
This is a straightforward way to extend your AI usage without changing your subscription. Add-on packs are ideal if you've hit your monthly credit limit but don't need the additional storage, transcription, or indexing that comes with a higher tier.
You can purchase credits by going to Settings > Billing > Manage AI Credits.
Table 5: How Far 1,600 Credits Go
Credit Add-On Pack — $25 for 1,600 credits (more credits per dollar than upgrading your tier).
Conversational Use (~1,000 words in, ~1,000 words out)
| Model | New chats (messages) | One continuous chat (messages) |
|---|---|---|
| Gemini 3 Flash | ~3,520 | ~2,660–3,000 |
| GPT-4o | ~1,280 | ~750–850 |
| Claude Haiku 4.5 | ~2,050 | ~1,500–1,600 |
| Gemini 3 Pro | ~880 | ~470–540 |
| GPT-5.2 | ~1,020 | ~550–620 |
| Claude Sonnet 4.5 | ~685 | ~375–430 |
| Claude Opus 4.5 | ~410 | ~225–260 |
Large Input Work (~75–100 page PDF, ~1,000 words out)
| Model | New chats (messages, PDF sent each time) | One continuous chat (messages, PDF cached) |
|---|---|---|
| Gemini 3 Flash | ~375 | ~1,230–1,390 |
| GPT-4o | ~78 | ~400–450 |
| Claude Haiku 4.5 | ~185 | ~910–1,025 |
| Gemini 3 Pro | ~80 | ~265–300 |
| GPT-5.2 | ~87 | ~300–340 |
| Claude Sonnet 4.5 | ~62 | ~205–230 |
| Claude Opus 4.5 | ~37 | ~125–145 |
Agentic Work (4–5 articles → 2–3 documents)
| Model | Standalone tasks | Continuous session (tasks) |
|---|---|---|
| Gemini 3 Flash | ~710 | ~750–800 |
| GPT-4o | ~205 | ~215–230 |
| Claude Haiku 4.5 | ~400 | ~425–450 |
| Gemini 3 Pro | ~150 | ~158–168 |
| GPT-5.2 | ~158 | ~167–176 |
| Claude Sonnet 4.5 | ~126 | ~134–141 |
| Claude Opus 4.5 | ~70 | ~74–78 |
Multi-File Reference (new large file each message, ~1,000 words out)
| Model | Separate chats (messages) | One conversation (messages) |
|---|---|---|
| Gemini 3 Flash | ~380 | ~269 |
| GPT-4o | ~78 | ~70 |
| Claude Haiku 4.5 | ~193 | ~155 |
| Gemini 3 Pro | ~95 | ~84 |
| GPT-5.2 | ~105 | ~92 |
| Claude Sonnet 4.5 | ~64 | ~59 |
| Claude Opus 4.5 | ~38 | ~36 |
Tips for Getting the Most Out of Your Credits
Stay in the same conversation when working with the same file. Caching gives you a 90% discount on re-reading content you've already sent. Starting a new chat forces the AI to re-process everything at full price. However, if you're referencing a different large file each message, separate chats can actually be cheaper — see Table 4.
Pick the right model for the job. Use faster, cheaper models (Gemini 3 Flash, Claude Haiku 4.5) for straightforward tasks and save premium models (Claude Opus 4.5, GPT-5.2) for work that genuinely requires deeper reasoning. Model choice alone can make a 5–10x difference in how far your credits go.
Buy credit add-on packs instead of upgrading tiers if you only need more AI usage. Upgrading a tier makes sense when you also need more storage and processing — but if your storage needs are met, add-on packs are more efficient.
Be mindful of output length. Output tokens cost 3–5x more than input tokens across all models. If you don't need a 2,000-word essay, ask the AI to be concise — it directly saves credits.
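The output-length effect is easy to quantify with a quick sketch. The rates below are illustrative assumptions, not Eden's actual pricing; output is assumed to cost 4x input, within the 3–5x range described above.

```python
# Sketch of why shorter outputs save credits. Rates are illustrative
# assumptions, NOT Eden's real pricing; output is assumed to cost
# 4x input, within the 3-5x range this guide describes.

INPUT_RATE = 1.0   # assumed credits per 1,000 input tokens
OUTPUT_RATE = 4.0  # assumed credits per 1,000 output tokens

def message_cost(input_tokens, output_tokens):
    return (input_tokens / 1000 * INPUT_RATE
            + output_tokens / 1000 * OUTPUT_RATE)

# Same ~1,300-token question, different answer lengths
# (assuming ~1.3 tokens per word):
essay = message_cost(1_300, 2_600)    # ~2,000-word answer
concise = message_cost(1_300, 650)    # ~500-word answer
```

Under these assumptions, trimming the answer from ~2,000 words to ~500 cuts the per-message cost to roughly a third.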
Thank you.