Microsoft has announced the newest addition to its Phi family of generative AI models.
Called Phi-4, the model improves on its predecessors in several areas, Microsoft claims, particularly in math problem solving. That gain is partly the result of improved training data quality.
Phi-4 launched Thursday night in very limited access: it's available only on Microsoft's recently launched Azure AI Foundry development platform, and only for research purposes under a Microsoft research license agreement.
Phi-4, at 14 billion parameters, is Microsoft's latest small language model, and it competes with other small models such as OpenAI's GPT-4o mini, Google's Gemini 2.0 Flash, and Anthropic's Claude 3.5 Haiku. Small models like these are typically faster and cheaper to run than their larger counterparts, and their performance has steadily improved over the last several years.
In this case, Microsoft attributes Phi-4's jump in performance to the use of "high-quality synthetic datasets," alongside high-quality datasets of human-generated content and some unspecified post-training improvements.
Many AI labs are looking more closely at innovations around synthetic data and post-training these days. Scale AI CEO Alexandr Wang tweeted on Thursday that "we have reached a pre-training data wall," echoing several reports on the topic in recent weeks.
Notably, Phi-4 is the first Phi-series model to launch following the departure of Sébastien Bubeck, previously an AI VP at Microsoft and a key figure in the company's Phi model development, who left the company in October to join OpenAI.