LLMs: Fine-tuning
https://developers.google.com/machine-learning/crash-course/llm/tuning
Foundation Models
A foundation LLM (also called a base or pre-trained model) is a general-purpose model trained on vast amounts of data. It understands grammar and can perform creative tasks like writing poetry. However, for specific problems (like classification or regression), it usually serves as a starting point rather than a finished solution.

Fine Tuning
Fine-tuning transforms a general foundation model into a specialized tool.
- The Process: The model undergoes additional training using a specific dataset relevant to the desired task.
- Efficiency: Because foundation models are already powerful pattern recognizers, they often need relatively few examples (hundreds or thousands) to learn new tasks.
- Technique: To save computational costs, parameter-efficient tuning is often used, where only a small subset of the model’s parameters are updated during training.
- Result: The model becomes more accurate for the specific task, but its file size (number of parameters) remains the same as the original.
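The loop described above can be sketched in plain NumPy. This is an illustrative toy, not a real foundation model: a tiny "pretrained" two-layer network stands in for the base LLM, and fine-tuning simply continues gradient descent on all of its weights using a small task-specific dataset. All names, sizes, and the synthetic data are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "foundation model": a tiny 2-layer network. In practice these
# weights would come from large-scale pretraining; here they are random.
W1 = rng.normal(0, 0.5, size=(4, 8))   # input -> hidden
W2 = rng.normal(0, 0.5, size=(8, 1))   # hidden -> output

def forward(X, W1, W2):
    H = np.tanh(X @ W1)
    p = 1 / (1 + np.exp(-(H @ W2)))    # sigmoid output for binary classification
    return H, p

def bce_loss(p, y):
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

# A small task-specific dataset (a few hundred examples often suffice).
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

_, p = forward(X, W1, W2)
loss_before = bce_loss(p, y)

# Fine-tuning: continue gradient descent, updating ALL parameters.
lr = 0.5
for _ in range(200):
    H, p = forward(X, W1, W2)
    dlogits = (p - y) / len(X)          # gradient of BCE w.r.t. (H @ W2)
    gW2 = H.T @ dlogits
    dH = (dlogits @ W2.T) * (1 - H**2)  # backprop through tanh
    gW1 = X.T @ dH
    W1 -= lr * gW1
    W2 -= lr * gW2

_, p = forward(X, W1, W2)
loss_after = bce_loss(p, y)

# The model improves on the task, but its size is unchanged:
n_params = W1.size + W2.size            # 4*8 + 8*1 = 40, before and after tuning
print(loss_before, loss_after, n_params)
```

The parameter count is identical before and after tuning, matching the point above: fine-tuning changes parameter values, not model size.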

However, fine-tuning an entire foundation model, i.e. updating every parameter and weight on each backpropagation pass, is computationally expensive. The standard alternative is parameter-efficient tuning, which adjusts only a small subset of parameters on each backpropagation iteration.
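One common realization of this idea is a LoRA-style low-rank update. The sketch below, with illustrative names and a toy layer, assumes that setup: the large frozen weight matrix W receives no gradient updates, and only two small factors A and B are trained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen "foundation" weight matrix (stand-in for one layer of a large model).
W = rng.normal(0, 0.5, size=(64, 64))

# Low-rank adapters: only A and B are trained.
r = 4
A = rng.normal(0, 0.1, size=(64, r))
B = np.zeros((r, 64))                   # zero init: adapted layer starts identical to W

def adapted(x):
    # Effective weight is W + A @ B; W itself is never updated.
    return x @ (W + A @ B)

# Fit the adapters to a simple regression target with gradient descent.
X = rng.normal(size=(128, 64))
Y = X @ (W + rng.normal(0, 0.05, size=(64, 64)))   # target: a slightly shifted layer

loss_before = np.mean((adapted(X) - Y) ** 2)

lr = 0.01
for _ in range(300):
    err = adapted(X) - Y                # (128, 64) residual
    gA = X.T @ err @ B.T / len(X)       # gradient flows only into A and B
    gB = A.T @ X.T @ err / len(X)
    A -= lr * gA
    B -= lr * gB

loss_after = np.mean((adapted(X) - Y) ** 2)

trainable = A.size + B.size             # 64*4 + 4*64 = 512 trainable parameters
frozen = W.size                         # 64*64 = 4096 frozen parameters
print(loss_before, loss_after, trainable / frozen)
```

Here the adapters hold only 12.5% of the layer's parameters, so each backpropagation step computes and applies far fewer updates; in real LLMs the trainable fraction is typically well under 1%.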

A fine-tuned model can make better predictions on the target task than the foundation LLM. Still, the number of parameters in both models is the same.

Other Resources:
“Introduction to tuning,” Google Cloud Documentation, 2026. https://docs.cloud.google.com/vertex-ai/generative-ai/docs/models/tune-models (accessed Jan. 25, 2026).