Foundation Model

A large-scale AI model trained on broad data that can be adapted to a wide range of downstream tasks.

Foundation models are large, general-purpose models trained on diverse data at scale. The term, coined by Stanford's Center for Research on Foundation Models, captures the idea that these models serve as a "foundation" that can be fine-tuned, prompted, or otherwise adapted for countless specific applications.

Examples include GPT-4, Claude, LLaMA, and Gemini for language; CLIP and DALL-E for vision-language; and Whisper for speech. These models demonstrate emergent capabilities — abilities that appear at scale but weren't explicitly trained for.

The foundation model paradigm has shifted AI from task-specific model development to adaptation and application building. Companies increasingly build on top of existing foundation models rather than training from scratch, creating demand for engineers who understand how to effectively leverage, customize, and deploy these models.
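The adaptation pattern described above can be sketched in a few lines. The example below is a hypothetical toy, not a real model: a frozen "foundation" feature extractor stands in for the large pretrained model, and each downstream task adds only a small task-specific head on top, so the base is reused rather than retrained.

```python
# Hypothetical sketch of the foundation-model paradigm:
# one frozen base, many lightweight downstream adaptations.

def foundation_features(text: str) -> list[float]:
    """Stand-in for a large pretrained model mapping text to features.
    Frozen: adaptation never modifies this function."""
    return [len(text), text.count(" ") + 1, sum(c.isupper() for c in text)]

def adapt(head):
    """Build a downstream task by composing a small head on the frozen base."""
    return lambda text: head(foundation_features(text))

# Two downstream "tasks" share the same base, differing only in their heads.
is_long = adapt(lambda feats: feats[0] > 20)   # crude length classifier
word_count = adapt(lambda feats: feats[1])     # word counter

print(word_count("foundation models are adaptable"))  # 4
print(is_long("hi"))                                  # False
```

In practice the frozen base is a large neural network and the "head" might be a fine-tuned layer, an adapter, or simply a prompt, but the economics are the same: the expensive component is trained once and shared across applications.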