Microsoft has launched Phi-3 Mini, a lightweight AI model. It is the first of three smaller Phi models the company plans to release this year and has 3.8 billion parameters. It was trained on a smaller dataset than large language models such as GPT-4 and is now available on Azure, Hugging Face, and Ollama. Here are the details…
Features of Microsoft's Phi-3 AI model
Despite its smaller size, Phi-3 Mini delivers capabilities comparable to those of large language models such as GPT-3.5 while consuming fewer resources. Microsoft also plans to launch Phi-3 Small, with 7 billion parameters, and Phi-3 Medium, with 14 billion.
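Since the article notes that Phi-3 Mini is available on Ollama, here is a minimal sketch of querying the model through Ollama's local REST API. It assumes an Ollama server is running on its default port and that the model has been pulled under the tag `phi3`; both the endpoint and the tag are assumptions based on Ollama's published conventions, not details from the article.

```python
import json
import urllib.request

# Ollama's default local endpoint (assumption: server running on localhost:11434)
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_payload(prompt: str, model: str = "phi3") -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint.

    "phi3" is the assumed Ollama model tag for Phi-3 Mini.
    stream=False asks for a single JSON response instead of a stream.
    """
    return {"model": model, "prompt": prompt, "stream": False}


def ask_phi3(prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    data = json.dumps(build_payload(prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(ask_phi3("In one sentence, what is a model parameter?"))
```

Running a model locally like this, rather than calling a hosted API, is exactly the deployment style that small models such as Phi-3 Mini are meant to enable on personal devices.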
A model's parameter count is a rough indicator of how complex the instructions it can understand. Microsoft released Phi-2 in December, claiming that the model offered performance similar to that of larger models such as Llama 2.
Phi-3 outperforms Phi-2 and can produce answers close to those of much larger models. Eric Boyd, corporate vice president of Microsoft's Azure AI Platform, said Phi-3 Mini is as capable as GPT-3.5 in a smaller form factor.
Small AI models generally cost less to run and perform better on personal devices. Microsoft sees Phi-3 as an ideal solution for companies working with small datasets. Boyd said Phi-3 was trained on the idea that "children learn from books and simple sentence structures."
Phi-3 builds on the foundation laid by its predecessors: Phi-1 focused on coding, while Phi-2 developed reasoning abilities. Although Phi-3 lacks the breadth of knowledge of larger general-purpose models, it may be a good option for companies' specific applications.
Small datasets and low computational requirements make Phi-3 cost-effective for many companies. This fits Microsoft's strategy of developing smaller, lighter, and more accessible AI models.
What do you think about this news? You can share your opinions in the comments section below.