Meta’s latest open-source AI model is its largest to date.
Meta today announced the release of Llama 3.1 405B, a model with 405 billion parameters. Parameters roughly correspond to the model’s problem-solving ability, and models with more parameters generally perform better than models with fewer parameters.
At 405 billion parameters, Llama 3.1 405B is not the largest model in absolute terms (there are many open-source models out there), but it is the largest released to date. Trained on 16,000 Nvidia H100 GPUs and built with newer training and development techniques, it is, Meta claims, competitive with leading proprietary models such as OpenAI’s GPT-4o and Anthropic’s Claude 3.5 Sonnet (with a few caveats).
Like Meta’s previous models, Llama 3.1 405B can be downloaded or used on cloud platforms such as AWS, Azure, and Google Cloud. It also powers the chatbot experiences on WhatsApp and Meta.ai for users in the United States.
New and improved
Like other open and closed source generative AI models, Llama 3.1 405B can perform a variety of tasks, from answering coding and basic math problems to summarizing documents in eight languages (English, German, French, Italian, Portuguese, Hindi, Spanish, and Thai). Being text-only, it can’t answer questions about images, for example, but most text-based workloads, such as analyzing files like PDFs and spreadsheets, are within its scope.
Meta is keen to be public about its experiments with multimodality: In a paper published today, the company’s researchers say they are actively developing Llama models that can recognize images and videos, and understand (and generate) speech, though these models are not yet ready for public release.
To train Llama 3.1 405B, Meta used a dataset of 15 trillion tokens with data extending through 2024. (Tokens are parts of words that a model can internalize more easily than entire words; 15 trillion tokens equates to a whopping 750 billion words.) This isn’t a new training set per se, since Meta used the base set to train previous Llama models, but the company claims it revamped its data curation pipeline and adopted a “more rigorous” quality assurance and data filtering approach in developing this model.
The company also used synthetic data (data generated by other AI models) in training. Most major AI vendors, including OpenAI and Anthropic, are exploring synthetic data as a way to scale up AI training, but some experts believe it should be a last resort, because it can exacerbate bias in a model.
Meta, for its part, claims it is “striving for a careful balance” in Llama 3.1 405B’s training data, but it won’t reveal exactly where that data came from (beyond web pages and public web files). Many generative AI vendors treat training data as a competitive advantage and keep it and related information secret. Training data details are also a potential source of IP-related litigation, another disincentive to disclosure.
In the aforementioned paper, Meta researchers wrote that compared to previous Llama models, Llama 3.1 405B was trained on a combination of non-English data (to improve performance in non-English languages), more “math data” and code (to improve the model’s mathematical reasoning skills), and more recent web data (to enhance its knowledge of current events).
A recent Reuters report found that Meta at one point used copyrighted e-books to train its AI despite warnings from its own lawyers. The company also trains its AI on Instagram and Facebook posts, photos, and captions, and makes it difficult for users to opt out. Additionally, Meta, along with OpenAI, is currently being sued by authors, including comedian Sarah Silverman, for allegedly using copyrighted data without permission to train its models.
“The training data is in many ways like the secret recipe or sauce for building these models,” Raghavan Srinivasan, Meta’s vice president of AI program management, told TechCrunch in an interview. “So from our perspective, we’ve invested heavily in this, and it’s going to be one of the things that we continue to improve.”
Bigger context and tools
Llama 3.1 405B has a larger context window than previous Llama models: 128,000 tokens, or the length of about a 50-page book. A model’s context, or context window, refers to the input data (e.g., text) that the model considers before generating an output (e.g., additional text).
One advantage of a model with a larger context is that it can summarize longer text snippets or files. When powering chatbots, such models are also less likely to forget recently discussed topics.
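As a rough illustration of how an application might manage a fixed context window, here is a minimal Python sketch that drops the oldest chat messages until what remains fits a token budget. The function name and the word-based token estimate are illustrative assumptions; real systems use the model’s actual tokenizer.

```python
# Hypothetical sketch: keep only the most recent chat turns that fit in a
# model's context window. Token counts here are approximated as whitespace-
# separated words; a real tokenizer would count differently.

def trim_to_context(messages, max_tokens):
    """Drop the oldest messages until the total fits in max_tokens."""
    kept = []
    total = 0
    for msg in reversed(messages):        # walk newest-first
        n = len(msg.split())              # crude token estimate
        if total + n > max_tokens:
            break
        kept.append(msg)
        total += n
    return list(reversed(kept))           # restore chronological order

history = ["first topic we discussed", "a follow up question", "the latest message"]
print(trim_to_context(history, max_tokens=7))
```

A larger window simply means `max_tokens` grows, so fewer old turns get dropped, which is why larger-context models are less likely to “forget” earlier conversation.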
The other two new, smaller models Meta announced today, Llama 3.1 8B and Llama 3.1 70B (updated versions of the Llama 3 8B and Llama 3 70B models released in April), also feature context windows of 128,000 tokens. That’s a sizable upgrade, since the previous models’ context maxed out at 8,000 tokens, assuming the new Llama models can effectively reason over all that context.
All Llama 3.1 models, like rival models from Anthropic and OpenAI, can use third-party tools, apps, and APIs to complete tasks. Out of the box, they are trained to use Brave Search to answer questions about recent events, Wolfram Alpha APIs for math and science-related queries, and the Python interpreter to validate code. Additionally, Meta claims that Llama 3.1 models can use certain tools to some degree that they have not seen before.
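The tool-use pattern described above can be sketched roughly as follows: the model emits a structured tool call, and the host application routes it to a registered function. The tool names, call format, and dispatch helper here are hypothetical stand-ins, not Meta’s actual interface.

```python
# Hypothetical sketch of tool use: the model emits a structured "tool call"
# and the host application routes it to a registered function. All names and
# the call format are illustrative assumptions.

def brave_search(query):                     # stand-in for a real search API call
    return f"search results for: {query}"

def python_interpreter(code):                # stand-in for a sandboxed runner
    # Toy only: never eval untrusted model output in a real system.
    return eval(code, {"__builtins__": {}})

TOOLS = {"brave_search": brave_search, "python_interpreter": python_interpreter}

def dispatch(tool_call):
    """Route a model-emitted tool call to the matching registered tool."""
    fn = TOOLS.get(tool_call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {tool_call['name']}")
    return fn(tool_call["argument"])

print(dispatch({"name": "python_interpreter", "argument": "2 + 2"}))
```

In a real deployment, the tool’s return value would be fed back into the model’s context so it can compose a final answer.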
Building an ecosystem
If the benchmarks are to be believed (and benchmarks aren’t everything in generative AI), Llama 3.1 405B is a very capable model, one that addresses some painfully obvious limitations of previous-generation Llama models.
According to the paper, Llama 3.1 405B performs on par with OpenAI’s GPT-4, and achieves “mixed results” against GPT-4o and Claude 3.5 Sonnet in evaluations by human raters Meta hired. It outperforms GPT-4o at executing code and generating plots, but has weaker multilingual capabilities overall and lags behind Claude 3.5 Sonnet in programming and general reasoning.
Also, due to its sheer size, Llama 3.1 405B requires powerful hardware to run; Meta recommends at least a server node.
Perhaps that’s why Meta is promoting its new smaller models, the Llama 3.1 8B and Llama 3.1 70B, for general-purpose applications like chatbot powering and code generation. The company says the Llama 3.1 405B is best suited for model distillation (the process of transferring knowledge from a large model to a smaller, more efficient one) and generating synthetic data to train (or fine-tune) alternative models.
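The distillation-style workflow Meta describes, a large model generating synthetic data to train a smaller one, can be illustrated with a toy sketch. Here the “teacher” is a stand-in function rather than a real LLM, and the “student” is an ordinary least-squares line fit; every name and value is an assumption for illustration only.

```python
# Hypothetical sketch of the synthetic-data / distillation workflow: a large
# "teacher" labels synthetic inputs, and a small "student" is fit to mimic it.
# The teacher here is a stand-in function, not a real model.

def teacher(x):
    # Pretend this is an expensive large-model call.
    return 3.0 * x + 1.0

# 1. Generate synthetic (input, teacher output) training pairs.
inputs = [float(i) for i in range(10)]
dataset = [(x, teacher(x)) for x in inputs]

# 2. "Train" a tiny student (a line y = a*x + b) on the teacher's outputs
#    via ordinary least squares.
n = len(dataset)
sx = sum(x for x, _ in dataset)
sy = sum(y for _, y in dataset)
sxx = sum(x * x for x, _ in dataset)
sxy = sum(x * y for x, y in dataset)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

print(round(a, 3), round(b, 3))  # the cheap student mimics the teacher
```

The point of the exercise is the shape of the pipeline: the expensive model is queried once to build a dataset, and the cheap model is trained on that dataset and deployed.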
To further synthetic data use cases, Meta says it has updated Llama’s license to let developers use outputs from the Llama 3.1 model family to develop third-party generative AI models (whether that’s a wise idea is debatable). Importantly, the license still constrains how developers can deploy Llama models: app developers with more than 700 million monthly users must request a special license from Meta, which the company grants at its sole discretion.
That licensing change around outputs, which addresses a major criticism of Meta’s models within the AI community, is part of the company’s aggressive push for mindshare in generative AI.
Alongside the Llama 3.1 family, Meta is releasing what it calls a “reference system” and new safety tools. Some of these tools block prompts that could cause Llama models to behave in unpredictable or undesirable ways, allowing developers to use Llama in more places. The company is also previewing and inviting comments on the Llama Stack, a soon-to-be-released API for fine-tuning Llama models, generating synthetic data with Llama, and tools that can be used to build “agent” applications (Llama-powered apps that can take actions on behalf of users).
“[What] we hear over and over from developers is that they want to figure out how to actually deploy [Llama models] in production,” Srinivasan said. “So we’re starting to give them a bunch of different tools and options.”
Aiming for market share
In an open letter published this morning, Meta CEO Mark Zuckerberg laid out his vision for a future in which AI tools and models are put into the hands of more developers around the world, helping people access the “benefits and opportunities” of AI.
Though worded very charitably, the letter implicitly conveys Zuckerberg’s desire for Meta to create these tools and models.
Meta is racing to catch up with companies like OpenAI and Anthropic, and it’s employing a tried-and-true strategy: give tools away for free to foster an ecosystem, then slowly add products and services, some paid, on top. Spending billions of dollars to commoditize models also drives down competitors’ prices, spreads Meta’s version of AI broadly, and lets the company incorporate improvements from the open-source community into future models.
Llama has certainly caught the attention of developers: According to Meta, Llama models have been downloaded over 300 million times, and over 20,000 Llama-inspired models have been created to date.
Make no mistake: Meta is playing for keeps, spending millions lobbying regulators to come around to its preferred flavor of “open” generative AI. None of the Llama 3.1 models solve the intractable problems of today’s generative AI technology, such as its tendency to regurgitate problematic training data. But they do advance one of Meta’s main goals: becoming synonymous with generative AI.
And this comes at a cost. In the research paper, the authors, echoing Zuckerberg’s recent comments, discuss energy-related reliability issues with training Meta’s ever-growing generative AI models.
“During training, the power consumption of tens of thousands of GPUs can increase or decrease simultaneously as, for example, all GPUs wait for checkpoints or collective communication to finish, or as they start up or shut down the entire training job,” the researchers wrote. “When this happens, the power consumption of an entire data center can instantly fluctuate by tens of megawatts, potentially exceeding the limits of the power grid. This is an ongoing challenge for us as we scale up training for even larger Llama models in the future.”
And training those ever-larger models will likely mean more utilities keeping old coal-fired power plants around.