A token is the basic unit of text that the AI uses to process and analyze language. In general AI, a token can be a word, part of a word, a punctuation mark, or even a space. In Discourse Analyzer, one token is defined as one word for your input. For the AI’s output, each token represents about 0.75 words.
Input Tokens #
When you submit text for analysis, each word in your input counts as one token.
For example:
“Discourse analysis is powerful!”
counts as 4 input tokens (Discourse, analysis, is, powerful!).
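The one-word-equals-one-token rule can be sketched as a simple word count. This is an illustrative approximation only; the hypothetical `count_input_tokens` helper below is not part of Discourse Analyzer, and the service's actual counter may treat punctuation or spacing differently.

```python
def count_input_tokens(text: str) -> int:
    """Approximate input tokens: one whitespace-separated word = one token."""
    return len(text.split())

# The example from above: 4 words, so 4 input tokens.
print(count_input_tokens("Discourse analysis is powerful!"))  # 4
```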
Output Tokens #
The AI’s response is measured in tokens, where each token is roughly 0.75 words.
For example, an output of 30 words would count as about 40 output tokens (30 ÷ 0.75 = 40).
Total Tokens #
The total token count is the sum of your input tokens and the output tokens for each prompt.
This total determines how many credits are consumed for that analysis.
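Putting both rules together, a rough estimate of the total looks like the sketch below. The helper names are hypothetical, and rounding up the output-token estimate is an assumption; the service's own accounting may round differently.

```python
import math

WORDS_PER_OUTPUT_TOKEN = 0.75  # each output token is about 0.75 words

def estimate_output_tokens(word_count: int) -> int:
    """Convert an output word count into tokens (words divided by 0.75)."""
    return math.ceil(word_count / WORDS_PER_OUTPUT_TOKEN)

def estimate_total_tokens(input_text: str, output_word_count: int) -> int:
    """Input tokens (one per word) plus estimated output tokens."""
    return len(input_text.split()) + estimate_output_tokens(output_word_count)

# The example from above: a 30-word response is about 40 output tokens.
print(estimate_output_tokens(30))  # 40
# 4 input tokens + 40 output tokens = 44 total tokens.
print(estimate_total_tokens("Discourse analysis is powerful!", 30))  # 44
```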
Why does this matter? #
Input tokens limit how much text you can analyze at once.
Output tokens limit how long the AI’s response can be.
The combined total affects your credit usage.
Summary #
In Discourse Analyzer, tokens track your usage and ensure precise billing. For input, one word equals one token; for output, one token is about 0.75 words. Managing tokens helps you get the most out of your analyses and control your resource use.