For big research projects, Discourse Analyzer Advanced offers an optional caching system that helps you save time and credits when re-analyzing the same material.

How It Works #
- Eligibility: The caching option appears automatically once your project's total source tokens reach 32,000 or more. Smaller projects do not have access to caching.
- What Caching Does: When you activate caching, the system stores your analysis data for 15 minutes. If you run another prompt or repeat an analysis on the same content within this window, the cached data is used instead of recalculating everything from scratch, which makes repeated prompts much faster.
- Credit Efficiency: Enabling the cache costs a fixed number of credits up front, but it can save you credits overall if you need to run several analyses on the same set of sources. Input credits are refunded for sources that are already cached, so you are not charged again for the same data during the active cache period (see the worked example after this list).
- When to Use: Caching is ideal when you are working with a large, complex project and plan to ask several questions or run multiple analyses on the same materials. It spares you from waiting for the AI to re-process huge data sets each time.
- Limitations: The cache lasts only 15 minutes after activation. After that, new prompts require a fresh analysis, and you will need to reactivate the cache to keep using the feature.
- Activation: The caching button is in your Sources panel. Just switch it on once your project meets the eligibility requirement.
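
To get a feel for when enabling the cache pays off, here is a minimal break-even sketch. The function name, the billing model (a fixed cache fee plus one full input charge, with later runs in the window refunded), and all credit figures are illustrative assumptions, not actual Discourse Analyzer Advanced pricing.

```python
# Illustrative break-even estimate for the caching feature.
# The billing model and every number below are hypothetical
# placeholders, not actual Discourse Analyzer Advanced pricing.

def cache_break_even(cache_fee, input_credits_per_run, runs_in_window):
    """Compare total credits with and without caching.

    cache_fee:              fixed credits charged when the cache is enabled
    input_credits_per_run:  input credits to analyze the sources once
    runs_in_window:         analyses expected within the 15-minute window
    """
    without_cache = input_credits_per_run * runs_in_window
    # Assumed model: with caching, the sources are billed once and
    # later runs in the window have their input credits refunded.
    with_cache = cache_fee + input_credits_per_run
    return without_cache, with_cache

# Hypothetical example: four analyses on the same cached sources.
no_cache, cached = cache_break_even(cache_fee=50,
                                    input_credits_per_run=40,
                                    runs_in_window=4)
print(f"Without caching: {no_cache} credits")  # 160
print(f"With caching:    {cached} credits")    # 90
```

In this hypothetical scenario, caching comes out ahead as soon as the refunded input credits exceed the fixed cache fee; the fewer repeat runs you plan within the 15-minute window, the less likely it is to pay off.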
Learn more #
For more details, see the dedicated caching guide.
Summary #
Caching is a smart way to work efficiently with large projects. Activate it to speed up your workflow and get the most out of your credits when handling lots of sources or complex analyses.