Meta unveils a new, more efficient Llama model


Meta has announced the newest addition to its Llama family of generative AI models: Llama 3.3 70B.

In a post on X, Ahmad Al-Dahle, VP of generative AI at Meta, said that the text-only Llama 3.3 70B delivers the performance of Meta’s largest Llama model, Llama 3.1 405B, at lower cost.

“By leveraging the latest advancements in post-training techniques … this model improves core performance at a significantly lower cost,” Al-Dahle wrote.

Al-Dahle published a chart showing Llama 3.3 70B outperforming Google’s Gemini 1.5 Pro, OpenAI’s GPT-4o, and Amazon’s newly released Nova Pro on a number of industry benchmarks, including MMLU, which evaluates a model’s ability to understand language. Via email, a Meta spokesperson said that the model should deliver improvements in areas like math, general knowledge, instruction following, and app use.

Llama 3.3 70B, which is available for download from the AI dev platform Hugging Face and other sources, including the official Llama website, is Meta’s latest play to dominate the AI field with “open” models that can be used and commercialized for a range of applications.

Meta’s terms constrain how certain developers can use Llama models; platforms with more than 700 million monthly users must request a special license. But for many, it’s immaterial that Llama models aren’t “open” in the strictest sense. Case in point: Llama models have racked up more than 650 million downloads, according to Meta.

Meta has leveraged Llama internally as well. Meta AI, the company’s AI assistant, which is powered entirely by Llama models, now has nearly 600 million monthly active users, per Meta CEO Mark Zuckerberg. Zuckerberg claims that Meta AI is on track to be the most-used AI assistant in the world.

For Meta, the open nature of Llama has been a blessing and a curse. In November, a report alleged that Chinese military researchers had used a Llama model to develop a defense chatbot. Meta responded by making its Llama models available to U.S. defense contractors.

Meta has also voiced concerns about its ability to comply with the AI Act, the EU law that establishes a regulatory framework for AI, calling the law’s implementation “too unpredictable” for its open release strategy. A related issue for the company is the set of provisions in the GDPR, the EU’s privacy law, pertaining to AI training. Meta trains AI models on the public data of Instagram and Facebook users who haven’t opted out — data that in Europe is subject to GDPR guarantees.

EU regulators earlier this year requested that Meta halt training on European user data while they assessed the company’s GDPR compliance. Meta relented, while at the same time endorsing an open letter calling for “a modern interpretation” of GDPR that doesn’t “reject progress.”

Meta, not immune to the technical challenges other AI labs are encountering, is ramping up its computing infrastructure to train and serve future generations of Llama. The company announced Wednesday that it would build a $10 billion AI data center in Louisiana — the largest AI data center Meta has ever built.

Zuckerberg said on Meta’s Q2 earnings call in August that to train the next major set of Llama models, Llama 4, the company will need 10x more compute than what was needed to train Llama 3. Meta has procured a cluster of more than 100,000 Nvidia GPUs for model development, rivaling the resources of competitors like xAI.

Training generative AI models is a costly business. Meta’s capital expenditures rose nearly 33% to $8.5 billion in Q2 2024, up from $6.4 billion a year earlier, driven by investments in servers, data centers, and network infrastructure.




