Understanding AI usage & credits


How credits work, what affects usage, and examples for different workflows.


What You're Paying For

Eden is a drive, not an AI chat app. Your subscription tier primarily pays for:

Storage space — especially for video content like YouTube videos

File transcription — converting documents, audio, and video into searchable text

Video frame analysis — extracting and analyzing visual content from videos

Link transcription — the ability to download, transcribe, and analyze frames from sources like YouTube videos

Indexing — making all of your content searchable and referenceable by AI

AI credits are included in each tier, but they are not the majority of what you're paying for. The processing and storage features above do not consume your AI credits — they are baked into the cost of each tier.

How AI Credits Work

Each subscription tier includes a set number of AI credits:

| Tier | Price | AI Credits Included |
|---|---|---|
| Starter | $17/mo | 1,000 |
| Creator | $35/mo | 2,500 |
| Pro | $99/mo | 7,500 |
| Credit Add-Ons | $25 | 1,600 |

Credits Can Go Fast

It's possible to run through your credits very quickly depending on how you use Eden. If you're referencing large new files — like different PDFs, video transcripts, or documents — in every message, and you're using an expensive model like Claude Opus, a Starter tier's worth of credits can be consumed in as few as 23 messages (see Table 4). This is an extreme case, but it illustrates how much usage pattern and model choice matter. Understanding the tables below will help you make informed decisions about when to use which model and how to structure your conversations.
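To see where that figure comes from, here is the back-of-envelope arithmetic, using only the numbers quoted above:

```python
# Back-of-envelope check of the "23 messages" figure: Starter includes
# 1,000 credits, and Table 4 estimates ~23 new-large-file messages on
# Claude Opus 4.5, which implies roughly 43 credits per message.
starter_credits = 1_000
opus_messages = 23  # Table 4, Claude Opus 4.5 on Starter

credits_per_message = starter_credits / opus_messages
print(round(credits_per_message, 1))  # ~43.5 credits per message
```

Compare that to light conversational use on a cheap model, where a Starter tier lasts thousands of messages (see Table 1).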

What Happens When You Run Out of Credits

If you use all your credits for the month, you don't lose access to AI. Eden automatically switches you to a Free model — an unlimited, always-available model that lets you keep working. It's slower and less capable than the premium models included with your tier, but it means you're never locked out of AI entirely.

You can continue using the Free model for as long as you need, or you can purchase a Credit Add-On Pack ($25 for 1,600 credits) to get back to your preferred models immediately. Your credits reset at the start of each billing cycle.

A Note on Model Pricing

Not all AI models cost the same — and the differences are dramatic. A cheap model like Gemini 3 Flash can be 5–10x less expensive per message than a premium model like Claude Opus 4.5. Choosing the right model for the job is one of the biggest factors in how far your credits go. Use lightweight models for everyday tasks and save premium models for work that genuinely requires deeper intelligence.

Getting the Most AI for Your Money

If you don't need the extra storage, transcription, and processing that come with higher tiers — and you primarily want more AI usage — consider joining the Starter tier and purchasing Credit Add-On Packs separately. Each add-on pack costs $25 for 1,600 credits, and you can buy as many as you need. This approach lets you scale your AI usage independently without paying for storage and processing you don't use. See Table 5 for a full breakdown of how far each add-on pack goes.

You can purchase credits by going to Settings > Billing > Manage AI Credits.

Understanding AI Usage in Eden

Since Eden is a drive, you're typically not using AI the way you would in a casual chat app. Eden is built for serious work — research, analysis, content creation, and working with large files. This means your AI usage will generally cost more per interaction than simple back-and-forth chatting.

There are three main patterns of AI usage in Eden, and each consumes credits differently:

Conversational Use (Small Input, Small Output)

Standard back-and-forth messages where you ask questions and get responses. Around 1,000 words in and 1,000 words out per exchange. This is the lightest usage pattern and stretches your credits the furthest.

Large Input Work (Big Input, Small Output)

You attach a large document, PDF, or video transcript and ask the AI to analyze, summarize, or answer questions about it. The large file dominates the cost because the AI has to process all of that context. Follow-up questions in the same conversation are much cheaper because the file gets cached.

Agentic Work (Large Input and Output)

The AI searches through multiple sources, reads several long articles, and produces multiple documents or reports. This is the most credit-intensive pattern because both input (reading sources) and output (writing deliverables) are high. This is typical of research workflows, content creation pipelines, and deep analysis tasks.

How Caching Saves You Credits

When you stay in the same conversation, Eden caches your previous context at a 90% discount. This means:

First message with a large PDF: You pay full price for the AI to read it

Every follow-up question about that PDF: The cached version costs 90% less to reference

This is why the tables below show two columns: "New chats" (starting fresh every time) and "One continuous chat" or "Continuous session" (staying in the same thread). For large files especially, staying in one conversation saves significant credits.

Important caveat: Caching only helps when you're re-referencing the same content. If every message brings a new file, the caching benefit disappears — and your conversation history actually adds to the cost. See Table 4 for what this looks like in practice.

Second caveat: Caching only works if you stay on the same model. If you switch models mid-chat, or repeatedly throughout a conversation, the earlier messages in that chat are no longer cached.
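The effect of the 90% cache discount can be sketched with a little arithmetic. The per-token credit rate below is a made-up placeholder (this article doesn't publish one); only the 90% discount and the ~75,000-token PDF size come from the text.

```python
# Illustrative only: CREDITS_PER_1K_INPUT is a placeholder rate, not
# Eden's actual pricing. The 90% cache discount is from the article.
CREDITS_PER_1K_INPUT = 0.5
CACHE_DISCOUNT = 0.90  # cached context costs 90% less to reference

def input_cost(tokens: int, cached: bool = False) -> float:
    """Credits to process `tokens` of input, optionally at the cached rate."""
    rate = CREDITS_PER_1K_INPUT * ((1 - CACHE_DISCOUNT) if cached else 1)
    return tokens / 1000 * rate

pdf_tokens = 75_000  # the ~75-100 page PDF from Table 2

first_message = input_cost(pdf_tokens)           # full price
follow_up = input_cost(pdf_tokens, cached=True)  # 90% cheaper

print(round(first_message, 2))  # 37.5
print(round(follow_up, 2))      # 3.75
```

Under these placeholder numbers, ten follow-up questions about the same PDF cost about as much as one fresh upload, which is why the continuous-chat columns in Table 2 run several times higher than the new-chat columns.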

A Note on These Tables

The tables below are based on predictable, standardized scenarios — like adding the same PDF to every message or producing a fixed amount of output each time. Your individual usage will differ, and these numbers can vary widely. Please use these tables as a starting point to understand how credits work and how different models and usage patterns affect cost. Your actual usage will determine how far your credits go.

Understanding Auto Ranges

The tables below include estimated ranges for Auto — Eden's default model routing. Because Auto selects different models depending on each message, usage varies widely. The low end of the range reflects messages that get routed to more expensive models (like Claude Sonnet 4.5), and the high end reflects messages routed to inexpensive models (like Gemini 3 Flash). In practice, most Auto usage falls somewhere in the middle, since the router optimizes for the right model per task. Auto is generally the most credit-efficient option for mixed workflows. For more on how Auto works, see [How Auto Model Routing Works].

Table 1: Conversational Use

Scenario: ~1,000 words in, ~1,000 words out per message. Standard Q&A, brainstorming, writing assistance, and general conversation.

This is the lightest usage pattern. "New chats" assumes each message starts a fresh conversation. "One continuous chat" assumes an ongoing thread where conversation history is cached.

| Model | Tier | New chats (messages) | Continuous chat (messages) |
|---|---|---|---|
| Auto | Starter | ~470–2,000 | ~240–1,750 |
| Auto | Creator | ~1,150–4,950 | ~640–4,200 |
| Auto | Pro | ~3,500–14,850 | ~1,850–12,000 |
| Gemini 3 Flash | Starter | ~2,200 | ~1,715–1,930 |
| Gemini 3 Flash | Creator | ~5,500 | ~4,170–4,670 |
| Gemini 3 Flash | Pro | ~16,480 | ~11,670–13,330 |
| Claude Haiku 4.5 | Starter | ~1,285 | ~1,000–1,070 |
| Claude Haiku 4.5 | Creator | ~3,200 | ~2,335–2,500 |
| Claude Haiku 4.5 | Pro | ~9,615 | ~6,335–7,000 |
| Gemini 3 Pro | Starter | ~550 | ~285–330 |
| Gemini 3 Pro | Creator | ~1,375 | ~735–835 |
| Gemini 3 Pro | Pro | ~4,115 | ~2,085–2,335 |
| GPT-5.2 | Starter | ~635 | ~345–385 |
| GPT-5.2 | Creator | ~1,590 | ~850–965 |
| GPT-5.2 | Pro | ~4,770 | ~2,500–2,835 |
| Claude Sonnet 4.5 | Starter | ~430 | ~215–245 |
| Claude Sonnet 4.5 | Creator | ~1,065 | ~585–665 |
| Claude Sonnet 4.5 | Pro | ~3,200 | ~1,665–1,835 |
| Claude Opus 4.5 | Starter | ~255 | ~145–155 |
| Claude Opus 4.5 | Creator | ~640 | ~350–400 |
| Claude Opus 4.5 | Pro | ~1,915 | ~1,000–1,135 |

Table 2: Large Input Work (PDF / Document Analysis / Video Referencing)

Scenario: You attach a large PDF (~75–100 pages, roughly 75,000 tokens, about the length of three podcast transcripts or several long documents) and ask questions about it, receiving ~1,000 words of output per message. Actual output varies heavily depending on the task.

This is where caching makes the biggest difference. Uploading the same large file in a new chat every time is expensive because the AI re-reads the entire document at full price. Staying in one conversation and asking follow-up questions is 3–5x cheaper because the document is cached.

| Model | Tier | New chats (messages - PDF each time) | One continuous chat (messages - PDF cached) |
|---|---|---|---|
| Auto | Starter | ~44–210 | ~145–770 |
| Auto | Creator | ~105–530 | ~350–1,950 |
| Auto | Pro | ~320–1,600 | ~1,050–5,700 |
| Gemini 3 Flash | Starter | ~235 | ~770–855 |
| Gemini 3 Flash | Creator | ~590 | ~1,915–2,170 |
| Gemini 3 Flash | Pro | ~1,770 | ~5,670–6,335 |
| Claude Haiku 4.5 | Starter | ~115 | ~570–645 |
| Claude Haiku 4.5 | Creator | ~290 | ~1,415–1,600 |
| Claude Haiku 4.5 | Pro | ~870 | ~4,170–4,670 |
| Gemini 3 Pro | Starter | ~50 | ~165–185 |
| Gemini 3 Pro | Creator | ~125 | ~415–465 |
| Gemini 3 Pro | Pro | ~375 | ~1,215–1,385 |
| GPT-5.2 | Starter | ~55 | ~185–215 |
| GPT-5.2 | Creator | ~135 | ~465–535 |
| GPT-5.2 | Pro | ~410 | ~1,385–1,565 |
| Claude Sonnet 4.5 | Starter | ~40 | ~130–145 |
| Claude Sonnet 4.5 | Creator | ~95 | ~315–360 |
| Claude Sonnet 4.5 | Pro | ~290 | ~950–1,065 |
| Claude Opus 4.5 | Starter | ~23 | ~80–95 |
| Claude Opus 4.5 | Creator | ~60 | ~200–225 |
| Claude Opus 4.5 | Pro | ~175 | ~585–665 |

Table 3: Agentic Work (Research & Content Creation)

Scenario: The AI searches through and reads 4–5 articles (~2,000 words each), then produces 2–3 documents (~1,000 words each). This represents a single research or content creation task — roughly 65,000 input tokens and 5,250 output tokens per task.

Caching helps less here because the source articles change with each task. The "continuous session" column reflects a small benefit from caching system instructions, but the fresh article content must be processed at full price each time.

| Model | Tier | Standalone tasks (messages) | Continuous session (messages) |
|---|---|---|---|
| Auto | Starter | ~90–400 | ~95–450 |
| Auto | Creator | ~210–1,000 | ~230–1,100 |
| Auto | Pro | ~650–3,000 | ~690–3,400 |
| Gemini 3 Flash | Starter | ~445 | ~470–500 |
| Gemini 3 Flash | Creator | ~1,115 | ~1,185–1,250 |
| Gemini 3 Flash | Pro | ~3,335 | ~3,535–3,750 |
| Claude Haiku 4.5 | Starter | ~250 | ~265–280 |
| Claude Haiku 4.5 | Creator | ~625 | ~665–700 |
| Claude Haiku 4.5 | Pro | ~1,875 | ~1,985–2,100 |
| Gemini 3 Pro | Starter | ~95 | ~100–105 |
| Gemini 3 Pro | Creator | ~235 | ~245–260 |
| Gemini 3 Pro | Pro | ~700 | ~740–785 |
| GPT-5.2 | Starter | ~100 | ~105–110 |
| GPT-5.2 | Creator | ~245 | ~260–275 |
| GPT-5.2 | Pro | ~740 | ~785–835 |
| Claude Sonnet 4.5 | Starter | ~80 | ~85–90 |
| Claude Sonnet 4.5 | Creator | ~195 | ~210–220 |
| Claude Sonnet 4.5 | Pro | ~590 | ~625–660 |
| Claude Opus 4.5 | Starter | ~43 | ~46–49 |
| Claude Opus 4.5 | Creator | ~110 | ~115–120 |
| Claude Opus 4.5 | Pro | ~325 | ~345–365 |

Table 4: Multi-File Reference (New Large File Each Message)

Scenario: You reference a different large file (~75–100 pages, ~75,000 tokens) with each message and receive ~1,000 words of output per message. This represents working through a stack of different documents, reviewing multiple video transcripts, or comparing separate reports back-to-back.

This is the most expensive way to use large files. Unlike Table 2 — where you upload one file and ask many follow-up questions about it — here every message introduces a brand-new file that must be processed at full price. Caching cannot help with the files themselves since each one is different.

"Separate chats" means each file is handled in its own conversation with no history. "One conversation" means you're working through all of the files in a single thread, where your prior Q&A history accumulates and adds to the cost on top of each new file. In this scenario, staying in one conversation is actually more expensive because the growing history stacks on top of the already-costly new file each time.

| Model | Tier | Separate chats (messages) | One conversation (messages) |
|---|---|---|---|
| Auto | Starter | ~44–210 | ~43–175 |
| Auto | Creator | ~110–530 | ~100–380 |
| Auto | Pro | ~330–1,600 | ~270–860 |
| Gemini 3 Flash | Starter | ~237 | ~196 |
| Gemini 3 Flash | Creator | ~593 | ~425 |
| Gemini 3 Flash | Pro | ~1,785 | ~950 |
| Claude Haiku 4.5 | Starter | ~120 | ~107 |
| Claude Haiku 4.5 | Creator | ~302 | ~245 |
| Claude Haiku 4.5 | Pro | ~905 | ~585 |
| Gemini 3 Pro | Starter | ~59 | ~56 |
| Gemini 3 Pro | Creator | ~148 | ~132 |
| Gemini 3 Pro | Pro | ~445 | ~340 |
| GPT-5.2 | Starter | ~66 | ~61 |
| GPT-5.2 | Creator | ~163 | ~145 |
| GPT-5.2 | Pro | ~493 | ~370 |
| Claude Sonnet 4.5 | Starter | ~40 | ~39 |
| Claude Sonnet 4.5 | Creator | ~100 | ~92 |
| Claude Sonnet 4.5 | Pro | ~302 | ~245 |
| Claude Opus 4.5 | Starter | ~23 | ~23 |
| Claude Opus 4.5 | Creator | ~60 | ~57 |
| Claude Opus 4.5 | Pro | ~180 | ~157 |

Need More AI Credits? Get a Credit Add-On Pack

If you don't need more storage or processing power and just want more AI usage, you don't have to upgrade your tier. Instead, consider purchasing a Credit Add-On Pack.

Credit Add-On Pack: $25 for 1,600 credits

This is a straightforward way to extend your AI usage without changing your subscription. Add-on packs are ideal if you've hit your monthly credit limit but don't need the additional storage, transcription, or indexing that comes with a higher tier.

You can purchase credits by going to Settings > Billing > Manage AI Credits.

Table 5: How Far 1,600 Credits Go

Credit Add-On Pack — $25 for 1,600 credits (more credits per dollar than upgrading your tier).

Conversational Use (~1,000 words in, ~1,000 words out)

| Model | New chats (messages) | One continuous chat (messages) |
|---|---|---|
| Auto | ~750–3,150 | ~410–2,700 |
| Gemini 3 Flash | ~3,520 | ~2,660–3,000 |
| Claude Haiku 4.5 | ~2,050 | ~1,500–1,600 |
| Gemini 3 Pro | ~880 | ~470–540 |
| GPT-5.2 | ~1,020 | ~550–620 |
| Claude Sonnet 4.5 | ~685 | ~375–430 |
| Claude Opus 4.5 | ~410 | ~225–260 |

Large Input Work (~75–100 page PDF, ~1,000 words out)

| Model | New chats (messages - PDF each time) | One continuous chat (messages - PDF cached) |
|---|---|---|
| Auto | ~70–340 | ~230–1,250 |
| Gemini 3 Flash | ~375 | ~1,230–1,390 |
| Claude Haiku 4.5 | ~185 | ~910–1,025 |
| Gemini 3 Pro | ~80 | ~265–300 |
| GPT-5.2 | ~87 | ~300–340 |
| Claude Sonnet 4.5 | ~62 | ~205–230 |
| Claude Opus 4.5 | ~37 | ~125–145 |

Agentic Work (4–5 articles → 2–3 documents)

| Model | Standalone tasks (messages) | Continuous session (messages) |
|---|---|---|
| Auto | ~140–640 | ~145–720 |
| Gemini 3 Flash | ~710 | ~750–800 |
| Claude Haiku 4.5 | ~400 | ~425–450 |
| Gemini 3 Pro | ~150 | ~158–168 |
| GPT-5.2 | ~158 | ~167–176 |
| Claude Sonnet 4.5 | ~126 | ~134–141 |
| Claude Opus 4.5 | ~70 | ~74–78 |

Multi-File Reference (new large file each message, ~1,000 words out)

| Model | Separate chats (messages) | One conversation (messages) |
|---|---|---|
| Auto | ~70–340 | ~65–240 |
| Gemini 3 Flash | ~380 | ~269 |
| Claude Haiku 4.5 | ~193 | ~155 |
| Gemini 3 Pro | ~95 | ~84 |
| GPT-5.2 | ~105 | ~92 |
| Claude Sonnet 4.5 | ~64 | ~59 |
| Claude Opus 4.5 | ~38 | ~36 |

Tips for Getting the Most Out of Your Credits

Use Auto for mixed workflows. If your work involves a variety of tasks — quick questions, writing, research, organization — Auto is generally the most credit-efficient choice. It routes each message to the best model for the job, so you're not overpaying for simple tasks or underpowered on complex ones.

Stay in the same conversation when working with the same file. Caching gives you a 90% discount on re-reading content you've already sent. Starting a new chat forces the AI to re-process everything at full price. However, if you're referencing a different large file each message, separate chats can actually be cheaper — see Table 4.

Pick the right model for the job. Use faster, cheaper models (Gemini 3 Flash, Claude Haiku 4.5) for straightforward tasks and save premium models (Claude Opus 4.5, GPT-5.2) for work that genuinely requires deeper reasoning. Model choice alone can make a 5–10x difference in how far your credits go. Or just use Auto and let Eden handle this for you.

Buy credit add-on packs instead of upgrading tiers if you only need more AI usage. Upgrading a tier makes sense when you also need more storage and processing — but if your storage needs are met, add-on packs are more efficient.

Be mindful of output length. Output tokens cost 3–5x more than input tokens across all models. If you don't need a 2,000-word essay, ask the AI to be concise — it directly saves credits.
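A minimal sketch of that trade-off, assuming a placeholder input rate and the midpoint of the 3–5x output multiplier (the rates and token counts are illustrative, not Eden's actual pricing):

```python
# Illustrative only: INPUT_RATE is a placeholder; the 3-5x output
# multiplier is from the tip above (midpoint of 4 used here).
INPUT_RATE = 0.5       # placeholder credits per 1K input tokens
OUTPUT_MULTIPLIER = 4  # output tokens cost ~3-5x input tokens

def message_cost(input_tokens: int, output_tokens: int) -> float:
    """Credits for one message under the placeholder rates."""
    in_cost = input_tokens / 1000 * INPUT_RATE
    out_cost = output_tokens / 1000 * INPUT_RATE * OUTPUT_MULTIPLIER
    return in_cost + out_cost

# Same ~1,000-word prompt (~1,300 tokens), concise vs essay-length reply:
concise = message_cost(1_300, 260)    # short, to-the-point answer
verbose = message_cost(1_300, 2_600)  # ~2,000-word essay

print(round(concise, 2), round(verbose, 2))  # 1.17 5.85
```

Under these assumptions, asking for a concise answer cuts the cost of this message by roughly 5x, which makes output length one of the easiest levers to pull.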

Let Auto Save You Credits

If you're not sure which model to use — or you just don't want to think about it — Auto is the easiest way to get the most out of your credits.

Auto is Eden's default model and it's selected whenever you start a new chat. Instead of sending every message to the same model, Auto analyzes each message and routes it to the best model for that specific task. Quick questions go to fast, inexpensive models. Complex writing or analysis goes to more capable models. Agentic tasks like organizing your workspace or researching topics go to models optimized for multi-step tool use.

The result: you get better results per message and your credits last longer, because you're never overpaying for simple tasks or underpowered on hard ones.

For most users, Auto is the recommended default. You can always switch to a specific model when you have a reason to, but Auto handles the optimization automatically.

For a full explanation of how Auto selects models, see [How Auto Model Routing Works].

Thank you.