Fine-Tuning

Training an existing AI model on your own dataset to specialize its behaviour for a specific task or domain.

Fine-tuning is the process of taking a pre-trained language model and continuing its training on a smaller, domain-specific dataset. The result is a model that retains its general capabilities but becomes significantly better at the specific task you trained it on.

For example: a general LLM might write decent SQL, but a model fine-tuned on your company's database schema and query patterns will write queries that consistently match your conventions. Fine-tuning is useful when prompt engineering alone cannot achieve the consistency or quality you need.

Most vibe coders will not need to fine-tune models. RAG (retrieval-augmented generation) and good prompting solve most use cases. Fine-tuning makes sense when you have thousands of examples of desired input/output pairs and need very consistent, specialized behaviour at scale.
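To make "input/output pairs" concrete, here is a minimal sketch of preparing a fine-tuning dataset. It assumes an OpenAI-style chat JSONL format (one JSON object per line, each holding a list of messages); other providers and open-source toolchains use different but similar layouts, and the SQL examples below are purely illustrative.

```python
import json

# Hypothetical input/output pairs: a natural-language request mapped to SQL
# in your house style. Real fine-tuning datasets typically need thousands.
pairs = [
    ("List all active users",
     "SELECT * FROM users WHERE is_active = TRUE;"),
    ("Count orders placed today",
     "SELECT COUNT(*) FROM orders WHERE created_at::date = CURRENT_DATE;"),
]

# Write one JSON object per line (JSONL), in the chat-message shape used by
# OpenAI-style fine-tuning endpoints. This is an assumed format, not the
# only one; check your provider's docs for the exact schema.
with open("train.jsonl", "w") as f:
    for prompt, completion in pairs:
        record = {
            "messages": [
                {"role": "system",
                 "content": "You write SQL in our house style."},
                {"role": "user", "content": prompt},
                {"role": "assistant", "content": completion},
            ]
        }
        f.write(json.dumps(record) + "\n")
```

The resulting file is what you would upload to start a fine-tuning job; the actual training run happens on the provider's side or via an open-source trainer.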
