Meta plans to deploy its Artemis AI processor in its own data centers
February 4, 2024

Meta's new custom chip, named Artemis, is designed to support the rollout of artificial intelligence (AI) products across Meta's platforms and devices. By deploying the Artemis AI processor in its own data centers, the company takes a step away from relying on Nvidia GPUs, a strategic move that also reduces costs.

According to a Meta spokesperson speaking to Reuters, the new in-house accelerators complement the commercially available GPUs by providing an optimal combination of performance and efficiency for Meta-specific workloads. The statement effectively confirmed the company's plans to deploy its own chips.

Artemis AI processors are designed specifically to run the inference workloads of AI models, which are increasingly present on Meta platforms such as Facebook, Instagram and WhatsApp, and on devices such as Ray-Ban smart glasses. In addition to freeing up Nvidia's popular H100 GPUs for AI training, the deployment of Artemis chips should help optimize energy consumption in Meta's data centers and reduce the high costs of running AI workloads. Artemis is, incidentally, Meta's second AI processor, but the first to be put into production deployment.

Dylan Patel, founder of SemiAnalysis, says Meta's switch to custom chips could yield hundreds of millions of dollars in annual energy savings, as well as billions of dollars in savings on chip purchases from other companies. Other major cloud service providers, such as Amazon Web Services, Google, and Microsoft, are also developing and deploying their own AI and general-purpose processors in an effort to reduce hardware costs and power consumption.

Meta's ambitions extend beyond Artemis and inference acceleration. The company is reportedly working on a more sophisticated processor capable of running AI training workloads, similar to Nvidia's H100 GPU.

To meet the demands of generative AI, Meta is rapidly expanding its computing resources, investing billions of dollars in specialized chips and adapting its data centers for these workloads. Although it is developing an in-house line of silicon to reduce its dependence on Nvidia processors, the company has no intention of eliminating Nvidia GPUs from its data centers; instead, those GPUs are expected to work in tandem with the Artemis chips.

Mark Zuckerberg, CEO of Meta, said that his company will have 350,000 H100 GPUs in its data centers by the end of 2024.