Context Window
The maximum amount of text (code, chat history, files) that an AI model can read and use at one time.
The context window is the total number of tokens (roughly words or word-pieces) that an LLM can process in a single interaction. Everything the model can "see" — your message, system instructions, conversation history, and any files you share — must fit within this limit.
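Token counts are what matter, not character counts, but a rough character-based heuristic is often enough for planning what fits. The sketch below uses the common rule of thumb of ~4 characters per token for English text; real BPE tokenizers vary by model, so treat this as an estimate only (the function name and ratio here are illustrative, not part of any particular API).

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text.
    Real tokenizers (byte-pair encoding) vary by model; use this only
    as a planning heuristic, not an exact count."""
    return max(1, len(text) // 4)


def fits_in_window(text: str, window_tokens: int) -> bool:
    """Check whether text plausibly fits within a model's context window."""
    return estimate_tokens(text) <= window_tokens
```

For exact counts you would use the tokenizer published for the specific model, since the same text can tokenize to different lengths under different models.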
As of 2025, context windows range from roughly 4,000 tokens (older models) to 200,000 tokens or more (e.g. Claude 3). Larger windows let models read entire codebases, long documents, or extended conversations without losing track of earlier content.
For vibe coders, context management is a practical skill: knowing which files to include, when to start a fresh conversation, and how to summarise prior context prevents the model from "forgetting" important constraints.