Context in Factory represents all the information relevant to your current development task. Unlike traditional development environments, where context is scattered across multiple tools and tabs, Factory brings everything together in one place, allowing AI to understand and assist with your work more effectively.
Tokens are the fundamental units that Factory uses to process text. Each session has a context limit based on the underlying LLM being used.
What are Tokens?
In a nutshell, tokens are discrete units of text that language models use to process and understand content. They are the atomic elements of text processing: words, subwords, or individual characters are converted into numerical values that the AI can analyze. Understanding token usage is crucial for managing context effectively and keeping AI interactions within Factory efficient. Below is a slightly more detailed explanation.

Basic Concepts:
Tokens can be words, parts of words, or even single characters
Common words are usually single tokens (e.g., “the”, “is”, “Factory”)
Longer or uncommon words might be split into multiple tokens
Punctuation marks and spaces count as tokens
Example Tokenization:
The sentence “I heard a dog bark loudly” becomes six tokens, one per word:
I
heard
a
dog
bark
loudly
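The example above can be sketched with a simple word-level tokenizer. This is an illustrative approximation only: real LLM tokenizers (such as byte-pair encoding) also split rare words into subwords and treat leading spaces and punctuation as part of tokens, so actual token counts will differ.

```python
import re

def naive_tokenize(text):
    """Illustrative word-level tokenizer: splits text into words and
    punctuation marks. Not Factory's actual tokenizer -- real BPE
    tokenizers also break uncommon words into subword pieces."""
    return re.findall(r"\w+|[^\w\s]", text)

tokens = naive_tokenize("I heard a dog bark loudly")
print(tokens)       # ['I', 'heard', 'a', 'dog', 'bark', 'loudly']
print(len(tokens))  # 6
```

Note how a contraction like “don't” would already split into three tokens (`don`, `'`, `t`) under this scheme, which is why punctuation-heavy text consumes more tokens than its word count suggests.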
Token Limits:
Maximum context window: ~120,000 tokens
Optimal working range: 10,000-60,000 tokens
Includes all context sources: code, documentation, conversation history
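To stay inside these limits, it helps to estimate token usage before adding a context source. The sketch below uses a common rule of thumb (roughly 4 characters per token for English text); the heuristic and the helper names are assumptions for illustration, and real counts depend on the model's tokenizer.

```python
def estimate_tokens(text):
    # Rough rule of thumb: ~4 characters per token for English text.
    # Use only as a budgeting guide; the real tokenizer decides the count.
    return max(1, len(text) // 4)

def fits_in_window(sources, limit=120_000):
    """Check whether the combined context sources (code, docs,
    conversation history) fit under an assumed ~120k-token window."""
    return sum(estimate_tokens(s) for s in sources) <= limit

sources = ["code " * 1000, "docs " * 500, "chat history " * 200]
print(fits_in_window(sources))  # True: a few thousand tokens, well under the window
```

Keeping the total in the 10,000-60,000 range rather than filling the whole window tends to leave room for the conversation to grow.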