Nvidia packed about three years of news into its GPU Technology Conference today.
Flamboyant CEO Jensen Huang’s one-hour-39-minute keynote covered a lot of ground, but the connecting threads across the roughly two dozen announcements were the GPU and Nvidia’s platform approach to everything it builds.
Most people know Nvidia as the world’s largest maker of graphics processing units (GPUs). The GPU is a chip that was first used to speed up graphics in gaming systems. Since then, the company has steadily found new uses for the GPU, including autonomous vehicles, artificial intelligence (AI), 3D video rendering, genomics, digital twins, and many others.
The company has progressed so far from mere chip design and manufacturing that Huang summed up his company’s Omniverse development platform as “the new engine for the world’s AI infrastructure.”
Unlike other silicon makers, Nvidia ships more than just a chip. It takes a platform approach, designing complete, optimized solutions packaged as reference architectures that its partners then build in volume.
This 2022 GTC keynote had many examples of this approach.
NVIDIA Hopper H100 Systems ‘Transform’ AI
As noted earlier, the core of all Nvidia solutions is the GPU, and at GTC22 the company announced its new Hopper H100 chip, which uses a new architecture designed to be the engine for massively scalable AI infrastructure. The silicon features no less than 80 billion transistors and includes a new Transformer Engine specifically designed to accelerate transformer training and inference. For those with only a cursory knowledge of AI, a transformer is a neural network architecture built around a concept called “attention.”
Attention is where each element in a piece of data works out how much it needs to know about every other part of the data. Traditional neural networks look mainly at neighboring data, while transformers see the entire sequence at once. Transformers are widely used in natural language processing (NLP), because completing a sentence, predicting the next word, or resolving what a pronoun refers to all depend on understanding which other words appear and how the sentence is structured.
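To make the idea concrete, here is a minimal NumPy sketch of scaled dot-product attention, the operation at the heart of transformers. This is an illustrative toy, not Nvidia’s implementation: every position (query) scores every other position (key) and then takes a weighted mix of all the values, so each token can draw on the whole sequence rather than just its neighbors.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each query scores every key, so every position can attend
    to the entire sequence, not just adjacent elements."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to every key
    # Softmax over each row turns scores into attention weights that sum to 1
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted mix of all values for each position

# Toy example: a "sequence" of 4 tokens, each an 8-dimensional vector
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # one context-aware vector per token
```

The point of the H100’s Transformer Engine is that this pattern of dense matrix multiplies dominates transformer workloads, which is exactly what GPU tensor cores are built to accelerate.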
The chip alone offers massive processing power, but multiple GPUs can be linked together using Nvidia’s NVLink interconnect, essentially creating one large GPU with 4.9 Tbps of external bandwidth.
Related to this, Huang also announced an expansion of NVLink from an internal interconnect technology to a full external switch. Previously, NVLink was used to connect GPUs in a computer system. The new NVLink switch allows up to 256 GPUs to act as a single chip. The ability to go outside the system results in computational performance of 192 Teraflops. While this may seem like a crazy amount of performance, recommendation systems, natural language processing, and other AI use cases take in massive amounts of data, and these data sets are only getting bigger.
Continuing with the platform theme, Nvidia also announced new DGX H100-based systems, SuperPODs (multi-node systems), and a 576-node supercomputer. These are turnkey systems with all the software and hardware needed for nearly plug-and-play AI workloads. Like all of its systems, they are built as reference architectures, with production systems available from a wide variety of system providers, including Atos, Cisco, Dell, HPE, Lenovo, and other partners.
AI Enterprise 2.0 is now full stack
There may be no better example of the platform approach than how Nvidia has enabled enterprise AI. The company is approaching this segment with a multi-layered model. The bottom layer is the AI infrastructure, which includes various systems such as DGX, HGX, EGX, and others built on NVIDIA’s wide range of GPUs and DPUs. In addition, Nvidia provides all the necessary software and operating systems to allow developers to work with the hardware. This includes CUDA, TAO, RAPIDS, Triton Inference Server, TensorFlow, and other software.
The top layer is a set of pre-built AI systems that address specific use cases. For example, Maxine is the company’s video AI system, Clara is designed for healthcare, Drive is for the automotive industry, and Isaac is for robotics.
This allows enterprises and software vendors to leverage these components to deliver innovative new capabilities. For example, unified communications provider Avaya uses Maxine in its Spaces product to provide noise removal, virtual backgrounds, and other features in video conferencing. Many automakers, including Jaguar and Mercedes, use Drive as a platform for autonomous vehicles.
Huang also announced the formalization of the AI platform. Other enterprise platforms, such as VMware vSphere and Windows Server, have a roadmap for continuous innovation and an ecosystem of validated software running on them. Nvidia already has a hardware program with vendors such as Lenovo, Dell, and Cisco. The company complements this with a software program called Nvidia Accelerate, which currently has more than 100 members, including Adobe and Keysight. This should give customers confidence that the software has been tested, vetted, and optimized for the Nvidia platform.
Omniverse expands into the clouds
Nvidia’s Omniverse is a collaboration and simulation engine that obeys the laws of physics. Companies can use it to build a virtual version of an object, reducing training time. For example, teaching a robot to walk can be expensive and time consuming, as one would have to build out a number of scenarios such as uphill, downhill, stairs, and more. With Omniverse this can be done virtually; the data is then uploaded and the physical robot can walk immediately. Another use case is building a digital twin of something like a factory, so that planners can design it at scale before construction begins.
At GTC22, Nvidia announced Omniverse Cloud, which, as the name suggests, is making the simulation engine available as a streaming cloud service. Historically, one would need a powerful system to run Omniverse. Now as a cloud service, it can run on any computing device, even a Chromebook or tablet. This democratizes Omniverse and makes it available to anyone with an internet connection.
The second announcement is the OVX Computing System, a data center-scale system for industrial digital twins. The system starts with eight NVIDIA A40 GPUs and scales up from there. Again, like all its systems, this is a reference architecture with systems from Lenovo, Inspur, and Supermicro.
Platform approach ensures sustainable differentiation
Many industry watchers predict that Nvidia’s dominance in GPUs will end as more silicon makers enter the market, creating competition and pricing pressure. Intel, for example, has been aggressively chasing the GPU market for years, but no rival has managed to make a dent in Nvidia’s business. The platform approach Nvidia has taken is common in networking, cloud, and software, but unique in silicon. Its benefits were on display throughout Jensen’s keynote and should deliver long-term differentiation for the company.
Zeus Kerravala is the founder and principal analyst at ZK Research. He spent 10 years at Yankee Group, previously holding a number of corporate IT positions. Kerravala is considered one of the top 10 IT analysts in the world by Apollo Research, which evaluated 3,960 technology analysts and their individual statistics on press coverage.
This post, “GTC 2022: Nvidia tightens its GPU and platform muscles,” was originally published at “https://venturebeat.com/2022/03/22/gtc-2022-nvidia-flexes-its-gpu-and-platform-muscles/”