Taking on the Compute and Sustainability Challenges of Generative AI

Intel’s democratization of AI and support for an open ecosystem will meet the compute needs for generative AI.

News

  • March 28, 2023

Top Things to Know:

  • What’s The News: Today, the top open source and open science library for machine learning – Hugging Face – shared performance results demonstrating that Intel’s AI hardware accelerators run inference faster than any GPU currently available on the market, with Habana® Gaudi®2 running inference on a 176 billion parameter model 20 percent faster than Nvidia’s A100. Gaudi2 has also demonstrated power efficiency: on a popular computer vision workload, a Gaudi2 server showed a 1.8x advantage in throughput-per-watt over a comparable A100 server.1
  • Why It Matters: Today’s generative AI tools like ChatGPT have created excitement throughout the industry over new possibilities, but the compute these models require has put a spotlight on performance, cost and energy efficiency as top concerns for enterprises today.
  • The Big Picture: As generative AI models get bigger, power efficiency becomes a critical factor in driving productivity across a wide range of complex AI workloads, from data pre-processing to training and inference. Developers need a build-once-and-deploy-everywhere approach with flexible, open, energy-efficient and more sustainable solutions that allow all forms of AI, including generative AI, to reach their full potential.
  • What’s Next: AI has come a long way, but there is still more to be discovered. Intel’s commitment to true democratization of AI and sustainability will enable broader access to the benefits of the technology, including generative AI, through an open ecosystem.
  • The Bottom Line: An open ecosystem allows developers to build and deploy AI everywhere with Intel’s optimizations of popular open source frameworks, libraries and tools. Intel’s AI hardware accelerators, along with the built-in accelerators in 4th Gen Intel® Xeon® Scalable processors, provide the performance and performance-per-watt gains needed to address the performance, price and sustainability demands of generative AI.

Generative artificial intelligence (AI), with its ability to mimic human-generated content, presents an exciting opportunity to transform many aspects of how we work and live. However, this quickly evolving technology exposes the complexity of the compute required to successfully leverage AI in the data center.
Intel is heavily invested in a future where everyone has access to this technology and can deploy it at scale with ease. Company leaders are collaborating with partners across the industry to support an open AI ecosystem that is built on trust, transparency and choice.

Embracing Open Generative AI with Superior Performance

Generative AI has been around for some time with models like GPT-3 and DALL-E, but the excitement over ChatGPT – a generative AI chatbot that can have human-like conversations – shines a spotlight on the bottlenecks of traditional data center architectures. It also accelerates the need for hardware and software solutions that allow artificial intelligence to reach its full potential. Basing generative AI on an open approach and heterogeneous compute makes the best possible solutions more broadly accessible and cost-effective to deploy. An open ecosystem unlocks the power of generative AI by allowing developers to build and deploy AI everywhere while prioritizing power, price and performance.

Webinar: Intel to Host Data Center and AI Investor Webinar

Intel is taking steps to ensure it is the obvious choice for enabling generative AI, optimizing popular open source frameworks, libraries and tools to extract the best hardware performance while removing complexity. Today, Hugging Face, the top open source and open science library for machine learning, published results showing that inference runs faster on Intel’s AI hardware accelerators than on any GPU currently available on the market. Inference on the 176 billion parameter BLOOMZ model – an open science transformer-based multilingual large language model (LLM) – runs 20 percent faster on Intel’s Habana Gaudi2 than on Nvidia’s A100-80G. BLOOM is designed to handle 46 languages and 13 programming languages and was created in complete transparency: all resources behind the model training are available and documented by researchers and engineers worldwide.

For the smaller 7 billion parameter BLOOMZ model, Gaudi2 is 3 times faster than A100-80G, while first-generation Habana® Gaudi® delivers a clear price-performance advantage over A100-80G. The Hugging Face Optimum Habana library makes it simple to deploy these large language models on Gaudi accelerators with minimal code changes, as the sketch below illustrates.
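
Hugging Face’s full benchmark harness is not reproduced in this article, but a minimal sketch of BLOOMZ inference on Gaudi with Optimum Habana, assuming the optimum-habana and Habana PyTorch packages are installed, might look like the following. The bigscience/bloomz-7b1 checkpoint and prompt are illustrative; the 176 billion parameter model requires a multi-card setup.

    import torch
    import habana_frameworks.torch.core as htcore  # registers the "hpu" device with PyTorch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from optimum.habana.transformers.modeling_utils import adapt_transformers_to_gaudi

    # Swap in Gaudi-optimized implementations of the transformers model classes.
    adapt_transformers_to_gaudi()

    # Illustrative smaller checkpoint; the 176B benchmark run spans multiple Gaudi2 cards.
    model_name = "bigscience/bloomz-7b1"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)
    model = model.to("hpu")  # move the model onto the Gaudi accelerator

    prompt = "Translate to French: Generative AI is transforming how we work."
    inputs = tokenizer(prompt, return_tensors="pt").to("hpu")
    outputs = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))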

Intel Labs researchers also used Gaudi2 to evaluate BLOOMZ in a zero-shot setting with LMentry, a recently proposed benchmark for language models. The accuracy of BLOOMZ scales with model size similarly to GPT-3, and the largest 176B BLOOMZ model outperforms its similarly sized GPT-3 counterpart as demonstrated by the graphic below. 

Automatic evaluation of generated language output by BLOOMZ models (up to 176B parameters) on 100K LMentry prompts, using Habana Gaudi accelerators.2

In addition, Hugging Face shared today that Stability AI’s Stable Diffusion, another generative AI model for state-of-the-art text-to-image generation and an open-access alternative to the popular DALL-E image generator, now runs an average of 3.8 times faster on 4th Gen Intel Xeon Scalable processors with built-in Intel® Advanced Matrix Extensions (Intel® AMX). This acceleration was achieved without any code changes. Further, using Intel Extension for PyTorch with bfloat16 – a 16-bit floating-point format designed for machine learning – and auto-mixed precision roughly doubles the speed again, reducing latency to just 5 seconds – nearly 6.5x faster than the initial baseline of 32 seconds. You can try your own prompts on an experimental Stable Diffusion demonstration that runs on an Intel CPU (4th Gen Xeon processors) on the Hugging Face website.
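
The exact optimization recipe is not reproduced here, but a minimal sketch of the approach described above – Intel Extension for PyTorch operator fusion plus bfloat16 auto-mixed precision on a Xeon CPU – might look like this. The checkpoint name and step count are illustrative assumptions.

    import torch
    import intel_extension_for_pytorch as ipex
    from diffusers import StableDiffusionPipeline

    # Load the pipeline on CPU; the checkpoint name here is illustrative.
    pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

    # Channels-last layout plus IPEX operator fusion, with bfloat16 weights so
    # 4th Gen Xeon AMX tile instructions can be used for the matrix math.
    pipe.unet = pipe.unet.to(memory_format=torch.channels_last)
    pipe.unet = ipex.optimize(pipe.unet.eval(), dtype=torch.bfloat16, inplace=True)

    prompt = "a photo of an astronaut riding a horse on mars"
    # Auto-mixed precision: run supported ops in bfloat16, keep the rest in float32.
    with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
        image = pipe(prompt, num_inference_steps=50).images[0]
    image.save("astronaut.png")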

"At Stability, we want to enable everyone to build AI technology for themselves,” said Emad Mostaque, founder and CEO, Stability AI. “Intel has enabled stable diffusion models to run efficiently on their heterogenous offerings from 4th Gen Sapphire Rapids CPUs to accelerators like Gaudi and hence is a great partner to democratize AI. We look forward to collaborating with them on our next-generation language, video and code models and beyond.”

OpenVINO further accelerates Stable Diffusion inference: on a 4th Gen Xeon CPU, it delivers an almost 2.7x speedup compared to a 3rd Gen Intel® Xeon® Scalable CPU. Optimum Intel, an interface between Hugging Face libraries and Intel tools such as OpenVINO for accelerating end-to-end pipelines on Intel architectures, reduces average latency by an additional 3.5x – nearly 10x in all.
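
As a rough sketch of that pipeline, Optimum Intel exposes Stable Diffusion through an OpenVINO-backed pipeline class; the checkpoint name and image size below are illustrative assumptions, not the benchmark configuration.

    from optimum.intel import OVStableDiffusionPipeline

    # export=True converts the PyTorch checkpoint to OpenVINO IR on the fly.
    pipe = OVStableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", export=True
    )

    # Fixing input shapes ahead of time lets OpenVINO apply static-shape optimizations.
    pipe.reshape(batch_size=1, height=512, width=512, num_images_per_prompt=1)
    pipe.compile()

    image = pipe("a photo of an astronaut riding a horse on mars").images[0]
    image.save("astronaut_openvino.png")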

Taking on the Generative AI Compute Challenge with Intel and Hugging Face

Intel’s Kavitha Prasad, vice president and general manager of the Datacenter AI, Cloud Execution and Strategy Group, and Lama Nachman, Intel Fellow and director of the Intelligent Systems Research Lab, join Jeff Boudier, product director at Hugging Face, and industry analyst Daniel Newman to discuss generative AI’s impact on the world’s compute needs, why an open ecosystem matters and how we should be thinking about the role of ethics in the latest wave of AI developments. (Credit: Intel Corporation)

Chapters:

  1. “What is ChatGPT” – 1:14
  2. “Addressing the Compute Challenge for Generative AI” – 3:58
  3. “The Importance of an Open Ecosystem” – 6:42
  4. “Large Models are Driving Increased Compute Demand” – 8:51
  5. “Ethical Implications of AI” – 15:38
  6. “Democratizing AI with Hugging Face” – 20:18
  7. “AI Transparency with an Open Ecosystem” – 27:36
  8. “Develop Once, Deploy Everywhere” – 30:40

Addressing Price, Performance and Efficiency

More sustainable solutions must also be readily available to address the critical need to reduce electricity use while still meeting growing performance demands. An open ecosystem can remove roadblocks that limit progress, enabling developers to innovate with the best hardware and software tools for each job.

Built on the same high-efficiency architecture as first-generation Gaudi, which delivers up to 40% better price performance than comparable Nvidia-based instances on the AWS cloud, Gaudi2 brings a new level of performance and efficiency to large-scale workloads. It has also demonstrated power efficiency on AI workloads: in Supermicro’s power-consumption evaluation of the Supermicro Gaudi2 Server against the Supermicro Nvidia A100 Server, Gaudi2 showed a 1.8x advantage in throughput-per-watt over the A100 server on a popular computer vision workload.1

Large-scale AI workloads also need a build-once-and-deploy-everywhere approach with flexible, open solutions that enable greater power efficiency. 4th Gen Xeon processors are Intel’s most sustainable data center processors, enabling greater energy efficiency and power savings. With built-in accelerators like Intel AMX, inference and training performance increases of 10x can be achieved3 across a broad suite of AI workloads and use cases, along with performance-per-watt increases of up to 14x over Intel’s previous generation.4
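
As a quick way to confirm that a system exposes these built-in accelerators before enabling bfloat16 code paths, one can inspect the kernel’s CPU feature flags. This is a Linux-only sketch, not an official Intel detection method.

    # Minimal Linux-only check: 4th Gen Xeon CPUs report AMX support via the
    # "amx_tile", "amx_bf16" and "amx_int8" flags in /proc/cpuinfo.
    with open("/proc/cpuinfo") as f:
        cpuinfo = f.read()
    for feature in ("amx_tile", "amx_bf16", "amx_int8"):
        print(feature, "supported:", feature in cpuinfo)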

Supporting an Ethical AI Future

Generative AI is a powerful tool that supports and amplifies human capability, but it is essential that the development and deployment of these systems stem from a human-centered approach. Responsible AI governance is needed to ensure these systems reach their full potential without ethical compromise. The best way to protect the ethics of AI is through an open ecosystem that fosters transparency across training and datasets. A transparent AI supply chain ensures AI is being developed responsibly and reduces the ethical debt down the chain. With such transparency, developers are empowered to assess the suitability of datasets and models, replicate results and uncover any ethical concerns for their context of use.

Generative AI is one piece of a larger AI mosaic. Intel’s dedicated approach to the democratization of AI means it is combining its unique strengths in hardware, support for an open ecosystem and the right investments for the future to meet the compute needs for all aspects of AI, including generative AI.

Intel’s approach to the democratization of compute and tools enables access to building large language models, reducing cost and improving equity. For example, Intel is focusing on personalizing LLMs for ALS patients to enable them to communicate more effectively. Enabling the developer community to tune these models for their own uses makes the models more accessible to those in need.

AI has come a long way, but there is still more to be discovered. Intel continues to foster an open ecosystem to build trust, to deliver choice and to ensure interoperability across the industry. And it is committed to using a multidisciplinary approach, providing energy-efficient solutions and focusing on amplifying human potential with AI through human-AI collaboration. An open approach is the best path forward.

Editor’s Note: This article was edited on April 4, 2023, to more accurately describe the BLOOMZ model and Hugging Face community.

1Supermicro L12 Validation Report of Gaudi2 HL-225H SYS-820GH-THR2, Oct. 20, 2022

2Measured on March 24, 2023, using a Habana Gaudi2 Deep Learning Server hosted on the Intel Developer Cloud featuring 8 Gaudi2 HL-225H mezzanine cards and 3rd Gen Intel Xeon processors, running SynapseAI® software version 1.8.0 with batch_size=1.

3See [A16] and [A17] at intel.com/performanceindex in the 4th Gen Intel Xeon Scalable processors section.

4See intel.com/processorclaims: 4th Gen Intel Xeon Scalable processors. Claim E1. 

Results may vary.