Large Language Model (LLM)
A neural network trained on massive text datasets that can understand, generate, and reason about language and code.
A Large Language Model (LLM) is a type of artificial intelligence model trained on billions of words of text — including books, websites, and source code. The training process teaches the model statistical patterns in language, allowing it to predict plausible next tokens given a context.
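The core idea of next-token prediction can be illustrated at toy scale. A real LLM uses a neural network trained on billions of tokens, but the principle of estimating which token plausibly follows a given context can be sketched with simple bigram counts (the corpus and function names here are hypothetical, purely for illustration):

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus; a real LLM trains on billions of tokens.
corpus = "the model predicts the next token given the context".split()

# Count how often each token follows each preceding token.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(token):
    """Return the most frequent continuation seen after `token`, or None."""
    counts = following[token]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # picks one of the tokens seen after "the"
```

Where this toy model memorises exact word pairs, a neural LLM learns continuous representations, which is what lets it generalise to contexts it has never seen verbatim.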
Modern LLMs like Claude, GPT-4, and Gemini are capable of writing production-quality code, explaining complex concepts, summarising documents, translating between languages, and reasoning through multi-step problems.
For software builders, LLMs are the engine behind tools like Cursor, Claude Code, GitHub Copilot, and ChatGPT. Understanding their limitations — especially hallucinations and context-window constraints — is essential for using them effectively.
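The context-window constraint mentioned above means a model can only attend to a fixed number of tokens at once, so longer input must be truncated or summarised before it is sent. A minimal sketch of the simplest strategy, keeping only the most recent tokens (the token limit and variable names here are illustrative, not any real model's values):

```python
# Hypothetical window size; real models allow far more (tens of thousands
# to millions of tokens, depending on the model).
MAX_CONTEXT_TOKENS = 8

def fit_context(tokens, limit=MAX_CONTEXT_TOKENS):
    """Keep only the most recent tokens that fit in the window."""
    return tokens[-limit:]

history = "you are a helpful assistant please summarise this long document now".split()
print(fit_context(history))  # the oldest tokens are silently dropped
```

Production tools typically use more sophisticated strategies, such as summarising older turns or retrieving only relevant passages, but all of them exist to work around this same fixed-window limit.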