Meta AI Releases Llama 3 for Developers and a New Meta AI Assistant for Individuals


Meta AI announced the release of Llama 3, a new generation of its open-source large language models (LLMs) with significant performance improvements. The firm claims these highly capable models are competitive with comparable models from Google, OpenAI, and Anthropic, while being safer and more responsible. And it has already integrated these LLMs into its Meta AI assistant.

“Today, we’re excited to share the first two models of the next generation of Llama, Meta Llama 3, available for broad use,” the announcement post says. “This next generation of Llama demonstrates state-of-the-art performance on a wide range of industry benchmarks and offers new capabilities, including improved reasoning. We believe these are the best open source models of their class, period.”

Meta’s goal for Llama 3 was to build open AI models that are on par with the best proprietary models from its competitors. And as hinted at in that quote, there’s more to come: the Llama 3 family will expand to include models with multilingual and multimodal capabilities and longer context windows, and Meta says it will continue to improve overall performance across core LLM capabilities such as reasoning and coding.

For now, however, the first two models, with 8B and 70B parameters respectively, offer major leaps over Llama 2 and are, according to Meta, as good as or better than the leading competitors on most benchmarks. The models were trained on over 15T tokens, all collected from publicly available sources, a dataset more than seven times larger than the one used to train Llama 2. And to ensure that this data was high quality, Meta developed data-filtering pipelines that use heuristic filters, NSFW filters, semantic deduplication approaches, and text classifiers.
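
Meta hasn’t published that pipeline, but as a rough illustration of the idea, a single filtering pass over a stream of documents might look something like the sketch below. The length and character-ratio heuristics, the keyword list, and the hash-based duplicate check are all placeholder stand-ins, not Meta’s actual tooling.

```python
# Illustrative sketch of a corpus-filtering pass: cheap quality heuristics,
# a crude keyword-based NSFW filter, and hash-based duplicate removal.
# All thresholds and lists here are placeholders, not Meta's pipeline.
import hashlib
import re

BLOCKED_TERMS = {"example_blocked_term"}  # placeholder NSFW keyword list


def looks_low_quality(text: str) -> bool:
    """Cheap heuristics: too short, or mostly non-alphabetic characters."""
    if len(text.split()) < 50:
        return True
    alpha_ratio = sum(c.isalpha() for c in text) / max(len(text), 1)
    return alpha_ratio < 0.6


def contains_blocked_terms(text: str) -> bool:
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return bool(tokens & BLOCKED_TERMS)


def filter_corpus(documents):
    """Yield documents that pass all filters, dropping exact duplicates.

    Real semantic deduplication would compare embeddings or MinHash
    signatures rather than exact content hashes.
    """
    seen = set()
    for doc in documents:
        if looks_low_quality(doc) or contains_blocked_terms(doc):
            continue
        digest = hashlib.sha256(doc.strip().lower().encode("utf-8")).hexdigest()
        if digest in seen:
            continue
        seen.add(digest)
        yield doc
```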

That work appears to have paid off. Meta found that its earlier models were good at identifying high-quality data, so it used Llama 2 to generate the training data for the text-quality classifiers that power Llama 3. The result, it says, is that Llama 3 performs particularly well across use cases such as trivia questions, STEM, coding, and historical knowledge.
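
Meta hasn’t detailed how those classifiers are built, but the general pattern is to score a small sample of documents with a strong model and then train a lightweight classifier on those labels so that scoring scales to the full corpus. The toy examples and the scikit-learn pipeline below are illustrative assumptions, not Meta’s implementation.

```python
# Illustrative sketch: a small set of documents gets quality labels from a
# strong "teacher" LLM (per Meta's description, Llama 2 produced those labels),
# then a cheap classifier trained on them scores the rest of the corpus.
# The labeled examples below are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled = [
    ("Binary search keeps halving the sorted range until the target is found.", 1),
    ("BEST DEALS click here free free free buy now!!!", 0),
    ("Photosynthesis converts light energy into chemical energy in plants.", 1),
    ("asdf asdf asdf lorem lorem lorem", 0),
]
docs, labels = zip(*labeled)

quality_clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
quality_clf.fit(docs, labels)

# The lightweight model can now score unseen documents in bulk.
print(quality_clf.predict(["A walkthrough of how TCP congestion control works."]))
```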

Meta says that the Llama 3 models will soon be available on AWS, Databricks, Google Cloud, Hugging Face, Kaggle, IBM WatsonX, Microsoft Azure, NVIDIA NIM, and Snowflake, with support for hardware platforms from AMD, AWS, Dell, Intel, NVIDIA, and Qualcomm. “Llama 3 will be everywhere,” the company says.

To get started, developers can head over to the Llama 3 website, where they can download the models and check out the documentation.
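
What getting started looks like depends on where you pull the weights from. As one illustration, assuming you have been granted access to the gated meta-llama/Meta-Llama-3-8B-Instruct checkpoint on Hugging Face and have a recent transformers release installed (along with accelerate and a GPU with enough memory), a minimal chat call might look like this:

```python
# Minimal sketch: chatting with the 8B instruct model via Hugging Face
# transformers. Assumes approved access to the gated checkpoint, a recent
# transformers version, the accelerate package, and a suitable GPU.
import torch
from transformers import pipeline

chat = pipeline(
    "text-generation",
    model="meta-llama/Meta-Llama-3-8B-Instruct",
    torch_dtype=torch.bfloat16,   # half-precision weights to fit in GPU memory
    device_map="auto",            # let accelerate place the model on available devices
)

messages = [
    {"role": "system", "content": "You are a concise assistant."},
    {"role": "user", "content": "Explain what a context window is in one sentence."},
]

result = chat(messages, max_new_tokens=128)
# The pipeline returns the full conversation; the last message is the reply.
print(result[0]["generated_text"][-1]["content"])
```

The 70B variant follows the same pattern with its own checkpoint ID, but it requires substantially more GPU memory or a multi-GPU setup.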

But individuals can also experience Llama 3, as it’s been integrated into Meta AI, which the company describes as the world’s leading AI assistant. You can try Meta AI on Facebook, Instagram, WhatsApp, Messenger, and the web, and you can learn more about the Llama 3-powered Meta AI on the Meta news site.
