Intel supports Meta’s Llama 4

  • April 7, 2025

Investing.com -- Meta (NASDAQ:META) has recently launched the first models of the Llama 4 herd, designed to facilitate the creation of more personalized multimodal experiences.

Intel (NASDAQ:INTC), a close partner of Meta, has announced functional support for the Llama 4 models across Intel® Gaudi® 3 AI accelerators and Intel® Xeon® processors. The Intel Gaudi 3 AI accelerators are designed specifically for AI workloads, pairing Tensor Processor Cores with eight large Matrix Multiplication Engines, as opposed to the many smaller matrix multiplication units found in a GPU. This design leads to fewer data transfers and improved energy efficiency. Notably, the new Llama 4 Maverick model can be operated on a single Gaudi 3 node with 8 accelerators.
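The single-node claim can be sanity-checked with rough arithmetic. The figures below are assumptions, not from the article: Llama 4 Maverick is reported to have about 400B total parameters (17B active per token in its mixture-of-experts design), and each Gaudi 3 accelerator is understood to carry 128 GB of HBM. Under those assumptions, FP8 weights fit comfortably in an 8-accelerator node:

```python
# Back-of-envelope memory check (assumed figures, not official specs):
# can Llama 4 Maverick's weights fit on one 8-accelerator Gaudi 3 node?

TOTAL_PARAMS = 400e9     # assumed total parameter count for Maverick (MoE)
BYTES_PER_PARAM = 1      # FP8 weights: 1 byte per parameter
HBM_PER_ACCEL_GB = 128   # assumed HBM capacity per Gaudi 3 accelerator
ACCELERATORS = 8         # one Gaudi 3 node

weights_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9
node_memory_gb = HBM_PER_ACCEL_GB * ACCELERATORS

print(f"Weights at FP8:  {weights_gb:.0f} GB")
print(f"Node HBM total:  {node_memory_gb} GB")
print("Fits on one node:", weights_gb < node_memory_gb)
```

The remaining headroom would be needed in practice for the KV cache and activations, so this is only a plausibility sketch, not a deployment sizing guide.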

Intel Xeon processors are designed to handle demanding end-to-end AI workloads. Available through major cloud service providers, Intel Xeon processors include an AI engine, Intel® Advanced Matrix Extensions (AMX), in every core, unlocking new performance levels for inference and training. The combination of AMX instructions, large memory capacity, and increased memory bandwidth in Intel® Xeon® 6 processors makes Xeon a cost-effective solution for deploying mixture-of-experts (MoE) models like Llama 4.

Open ecosystem software, including PyTorch, Hugging Face, vLLM, and OPEA, is optimized for Intel Gaudi and Intel Xeon processors, simplifying AI system deployment.

This article was generated with the support of AI and reviewed by an editor. For more information see our T&C.
