The way credits are consumed depends on whether you are using Discourse Analyzer Simple or Advanced. Understanding these mechanics helps you manage your usage and plan your analyses efficiently.
Discourse Analyzer Simple: How Credits Are Consumed #
- Basic Processing: Each credit covers up to 500 tokens per request, counting both the text you submit and the AI’s response.
- Larger Prompts: If your combined input and output exceed 500 tokens, additional credits are used. For example, a prompt and response totaling 600 tokens require two credits (see the sketch after this list).
- Extra Features: Using advanced features like web search or selecting advanced AI models will use more credits for each request.
- Swift Scribe Model: The one exception is Swift Scribe, a lightweight model that is free to use and does not consume any credits.
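To make the arithmetic concrete, here is a minimal sketch of how a Simple request’s credit cost can be estimated, assuming one credit covers each started block of 500 combined input and output tokens and that Swift Scribe is free. The function name and parameters are illustrative, not part of the platform, and the extra cost of web search or advanced models is not modeled.

```python
import math


def simple_credits(input_tokens: int, output_tokens: int,
                   model_is_swift_scribe: bool = False) -> int:
    """Estimate credits for a Discourse Analyzer Simple request (illustrative only).

    Assumes one credit covers up to 500 tokens of combined input and output,
    and that the Swift Scribe model is free.
    """
    if model_is_swift_scribe:
        return 0  # Swift Scribe does not consume credits
    total_tokens = input_tokens + output_tokens
    return math.ceil(total_tokens / 500)  # each started 500-token block costs one credit


# Example: a 400-token prompt plus a 200-token response totals 600 tokens,
# which crosses the 500-token boundary and therefore costs 2 credits.
print(simple_credits(400, 200))  # -> 2
```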
Discourse Analyzer Advanced: How Credits Are Consumed #
- Prompt Processing: Credits are deducted for both the size of your input (including all sources and your prompt) and the AI’s output.
- Caching: Credits are deducted up front to create and store a cache of your analysis data, which remains active for 15 minutes. If you rerun the same analysis while the cache is active, the input credits for cached sources are refunded.
- Grounding Feature: Using the Grounding feature deducts a set number of credits. The exact amount depends on your plan.
- Failed Requests: If a request fails, input credits are automatically refunded to your balance.
- Insufficient Credits: If you do not have enough credits for an action, the action is blocked and you will see a message explaining how many credits you need (see the sketch after this list).
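The sketch below shows how these Advanced rules fit together: credits for input and output are deducted up front, input credits are refunded for cached sources and for failed requests, and the request is blocked when the balance is too low. All names, the function signature, and the message text are hypothetical, and the plan-dependent Grounding cost is not modeled.

```python
import time

CACHE_TTL_SECONDS = 15 * 60  # cached analysis data stays active for 15 minutes


class InsufficientCredits(Exception):
    """Raised when the balance cannot cover the estimated cost of a request."""


def run_advanced_analysis(balance: int, input_cost: int, output_cost: int,
                          cache: dict, cache_key: str, call_model) -> int:
    """Illustrative Advanced credit flow; returns the balance after the request."""
    estimated_cost = input_cost + output_cost
    if balance < estimated_cost:
        # Insufficient credits: the action is blocked with an explanatory message.
        raise InsufficientCredits(
            f"This action needs {estimated_cost} credits; you have {balance}.")

    balance -= estimated_cost  # input and output credits are deducted up front

    cached_at = cache.get(cache_key)
    if cached_at is not None and time.time() - cached_at < CACHE_TTL_SECONDS:
        balance += input_cost  # cache still active: refund input credits for cached sources
    cache[cache_key] = time.time()  # (re)create the cache entry

    try:
        call_model()  # placeholder for the actual analysis request
    except Exception:
        balance += input_cost  # failed request: input credits are refunded
        raise
    return balance


# Example call with made-up numbers: a 20-credit request against a 100-credit balance.
new_balance = run_advanced_analysis(
    balance=100, input_cost=12, output_cost=8,
    cache={}, cache_key="analysis-1", call_model=lambda: None)
print(new_balance)  # -> 80
```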
Why is this important for discourse analysts? #
Understanding how credits are consumed lets you estimate the cost of different analyses, avoid surprises, and make sure your credits last for all your research needs. Whether you’re working with short texts or analyzing large corpora, managing your credits ensures you get consistent, uninterrupted access to the platform’s capabilities.