At the SC24 conference today, Nvidia’s Jensen Huang, Founder and CEO, and Ian Buck, VP of Hyperscale and HPC, gave a special address to attendees on the company’s high-performance computing strategy and its technologies being showcased in Atlanta’s Georgia World Congress Center from November 17-22.

Among the key announcements was a new Omniverse Blueprint that enables industry software developers to help their computer-aided engineering (CAE) customers in aerospace, automotive, manufacturing, energy, and other industries create digital twins with real-time interactivity. The company also launched the Alchemi NIM (Nvidia Inference Microservice), which is intended to accelerate chemical simulation research by optimizing AI inference and could lead to more efficient and sustainable materials to support the renewable energy transition.


Blueprint for lower costs and higher speeds

Nvidia says that software developers such as Altair, Ansys, Cadence, and Siemens can use the Omniverse Blueprint to help their customers drive down development costs and energy usage while getting to market faster. The blueprint is a reference workflow that includes Nvidia acceleration libraries, physics-AI frameworks, and interactive physically based rendering to achieve 1200 times faster simulations and real-time visualization.

“We built Omniverse so that everything can have a digital twin,” said Huang. “Omniverse Blueprints are reference pipelines that connect Nvidia Omniverse with AI technologies, enabling leading CAE software developers to build groundbreaking digital twin workflows that will transform industrial digitalization, from design and manufacturing to operations, for the world’s largest industries.”

According to Nvidia, building a real-time physics digital twin requires two fundamental capabilities: real-time physics solver performance and real-time visualization of large-scale datasets. The Omniverse Blueprint achieves these by bringing together Nvidia CUDA-X libraries to accelerate the solvers, the Nvidia Modulus physics-AI framework to train and deploy models to generate flow fields, and Nvidia Omniverse application programming interfaces for 3D data interoperability and real-time RTX-enabled visualization.
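The middle stage of that pipeline, training a model that stands in for the physics solver, can be illustrated with a deliberately tiny sketch. This is not the Nvidia Modulus API; plain NumPy least squares serves as the stand-in surrogate, and the design-parameter counts, field sizes, and data below are invented for illustration:

```python
# Toy illustration of a surrogate-model workflow: train a model that maps
# design parameters to a flow field, then evaluate it in real time.
# NOT the Nvidia Modulus API; a plain least-squares stand-in.
import numpy as np

rng = np.random.default_rng(0)

# Pretend "solver" output: each geometry is 3 design parameters, each flow
# field is 50 sampled velocity values. In the real workflow the training
# pairs would come from a CUDA-X-accelerated CFD solver.
n_designs, n_params, n_field = 200, 3, 50
X = rng.uniform(-1.0, 1.0, size=(n_designs, n_params))
W_true = rng.normal(size=(n_params, n_field))
Y = X @ W_true + 0.01 * rng.normal(size=(n_designs, n_field))

# "Training": fit the surrogate by linear least squares.
W_fit, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Deployment": one matrix product per new design, cheap enough that an
# interactive viewer (the visualization stage) could call it every frame.
new_design = np.array([0.2, -0.5, 0.9])
flow_field = new_design @ W_fit
print(flow_field.shape)  # (50,)
```

The shape of the workflow is the point: once trained, each new design costs a single matrix product, which is what makes per-frame evaluation inside an interactive viewer plausible.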

Developers can integrate the blueprint as individual elements or in its entirety into their existing tools.


Enabling accelerated CFD and cloud simulations

One of the first applications of the blueprint is for CFD (computational fluid dynamics) simulation to virtually explore, test, and refine the designs of cars, airplanes, and ships. It aims to supercharge traditional engineering workflows that can take weeks or even months to complete.

Ansys is the first to adopt the Omniverse Blueprint, applying it to Fluent fluid simulation software to enable accelerated CFD simulation. The company ran Fluent at the Texas Advanced Computing Center on 320 Nvidia GH200 Grace Hopper superchips. A 2.5-billion-cell automotive simulation was completed in just over 6 h, a job that would have taken nearly a month running on 2048 x86 CPU cores. The result significantly enhances the feasibility of overnight high-fidelity CFD analyses and establishes a new industry benchmark.
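As a back-of-envelope check on those figures, assuming "nearly a month" means roughly 30 days of continuous wall-clock time:

```python
# Back-of-envelope check of the reported speedup. The 30-day figure is an
# assumption standing in for "nearly a month".
gpu_hours = 6.0          # "just over 6 h" on 320 GH200 superchips
cpu_hours = 30 * 24      # ~720 h on 2048 x86 CPU cores (assumed 30 days)

wall_clock_speedup = cpu_hours / gpu_hours
print(round(wall_clock_speedup))  # 120
```

A roughly 120x reduction in wall-clock time is what turns a month-long study into something that fits in an overnight window.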

“By integrating Nvidia Omniverse Blueprint with Ansys software, we’re enabling our customers to tackle increasingly complex and detailed simulations more quickly and accurately,” said Ajei Gopal, President and CEO of Ansys. “Our collaboration is pushing the boundaries of engineering and design across multiple industries.”

In addition to showcasing performance results at SC24 on the latest CPU and GPU architectures, including the much-anticipated Nvidia Grace Hopper superchip, Ansys is presenting other HPC and cloud advancements with partners, as well as results on AMD Turin and Intel Granite Rapids CPUs.

Rongguang Jia, Distinguished Engineer at Ansys, is presenting developments in GPU computing strategies for Fluent software and diving into how these advancements are driving efficiency and performance gains for Fluent applications. Jeff Beisheim, Senior Principal R&D Engineer at Ansys, is showcasing significant improvements in simulation performance and robustness on clusters and servers powered by AMD EPYC CPUs.

In an industry first, Nvidia and Luminary Cloud are demonstrating at SC24 a virtual wind tunnel that allows users to simulate and visualize fluid dynamics at real-time, interactive speeds, even when changing the vehicle model inside the tunnel.

In adopting the blueprint, Luminary Cloud built a new simulation AI model on Nvidia Modulus that learned the relationships between airflow fields and car geometry from training data generated by the company's GPU-accelerated CFD solver. The model runs simulations orders of magnitude faster than the solver itself, enabling real-time aerodynamic flow simulation that is visualized using Omniverse APIs.
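A rough operation count shows why a trained surrogate can beat the solver that generated its data by orders of magnitude: an iterative solve costs on the order of iterations times cells, while inference is a fixed number of operations per query. The iteration count and model size below are invented for illustration:

```python
# Why surrogate inference can be orders of magnitude faster than the solver
# that produced its training data. Illustrative numbers only.
cells = 2_500_000_000        # cell count from the Ansys example above
iterations = 1_000           # assumed iteration count for convergence
solver_ops = cells * iterations

# Assume a surrogate network with ~10 million parameters, ~2 ops/parameter
# per forward pass.
surrogate_ops = 2 * 10_000_000

speedup = solver_ops / surrogate_ops
print(speedup)  # 125000.0, i.e. roughly five orders of magnitude
```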

In March, Luminary Cloud announced its official launch out of stealth with a CAE SaaS platform built to empower smarter and faster design cycles. A member of the Nvidia Inception program for startups, the company is backed by Sutter Hill Ventures and led by Jason Lango, Co-founder and CEO. Its platform was developed from the ground up to take advantage of the latest cloud and Nvidia GPU technologies.

Luminary Cloud’s multi-physics solution currently supports CFD for fluid-flow physics and conjugate heat transfer for thermal management. Lumi AI, the company’s AI-based engineering design copilot, cuts down the time that engineers spend in setup and simulation so that they can spend more time on analysis and optimization.

The company built its CAE platform specifically to take advantage of multi-node Nvidia GPU-accelerated computing, whereas other CAE tools were built for multi-node CPU clusters. It has seen speed improvements of over 100 times compared with traditional approaches.

Rescale, a cloud-based platform that helps organizations accelerate scientific and engineering breakthroughs, is using the Nvidia Omniverse Blueprint to enable organizations to train and deploy custom AI models in just a few clicks.

The Rescale platform automates the full application-to-hardware stack and can be run across any cloud service provider. Organizations can generate training data using any simulation solver; prepare, train, and deploy the AI models; run inference predictions; and visualize and optimize models.

The Omniverse Blueprint can be run on all leading cloud platforms including Amazon Web Services, Google Cloud, Microsoft Azure, Oracle Cloud Infrastructure, and Nvidia’s own DGX Cloud. Also exploring its adoption are Beyond Math, Hexagon, Neural Concept, SimScale, and Trane Technologies.


NIM catalyzes sustainable materials research

With AI and the latest technological advancements, researchers and developers are studying ways to create novel materials that could address the world’s toughest challenges in fields such as energy storage and environmental remediation.

With the new Alchemi NIM, according to Geetika Gupta, Nvidia's Product Lead for AI and HPC, researchers can test chemical compounds and material stability in simulation, in a virtual AI lab, which reduces costs, energy consumption, and time to discovery. Nvidia also plans to release NIM microservices that can be used to simulate the manufacturability of novel materials to determine how they might be brought to the real world in the form of batteries, solar panels, fertilizers, pesticides, and other products that can contribute to a greener planet.
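NIM microservices are packaged containers that serve inference over HTTP. The sketch below shows the general shape of such a request; the endpoint path, property names, and request schema are illustrative assumptions, not the published Alchemi interface, so consult Nvidia's NIM documentation for the real API:

```python
# Sketch of how a researcher might query an inference microservice such as
# the Alchemi NIM over HTTP. The route and schema here are hypothetical,
# invented for illustration; they are not the published Alchemi API.
import json

BASE_URL = "http://localhost:8000"    # NIM containers serve a local HTTP API
ENDPOINT = "/v1/infer"                # hypothetical route

request_body = {
    "molecules": ["CCO", "CC(=O)O"],        # candidate compounds as SMILES
    "properties": ["formation_energy"],     # hypothetical property request
}

payload = json.dumps(request_body)
print(payload)

# Sending it would look something like:
#   requests.post(BASE_URL + ENDPOINT, data=payload,
#                 headers={"Content-Type": "application/json"})
```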

SES AI, a leading developer of lithium-metal batteries, is exploring using the Alchemi NIM with the AIMNet2 model to accelerate the identification of electrolyte materials used for electric vehicles.

“SES AI is dedicated to advancing lithium battery technology through AI-accelerated material discovery, using our Molecular Universe Project to explore and identify promising candidates for lithium metal electrolyte discovery,” said Qichao Hu, CEO of SES AI.

Hu says that using Alchemi with AIMNet2 could drastically improve the company's ability to map molecular properties, reducing time and costs significantly and accelerating innovation. SES AI recently mapped 100,000 molecules in half a day, with the potential to achieve this in under 1 h using Alchemi, showing how the microservice could have a transformative impact on material-screening efficiency.
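The throughput gain implied by those figures is straightforward to work out, taking "half a day" as 12 h and treating 1 h as an upper bound for the accelerated run:

```python
# Screening-rate improvement implied by the SES AI figures: 100,000
# molecules in half a day today versus the same set in under 1 h.
molecules = 100_000
baseline_hours = 12.0         # "half a day" (assumed 12 h)
accelerated_hours = 1.0       # "under 1 h", taken as an upper bound

baseline_rate = molecules / baseline_hours    # ~8,333 molecules per hour
speedup = baseline_hours / accelerated_hours  # same workload, inverse times
print(speedup)  # 12.0
```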

Looking ahead, SES AI aims to map the properties of up to 10 billion molecules within the next couple of years, pushing the boundaries of AI-driven high-throughput discovery.

The technology could accelerate the company’s progress toward commercialization of its next-generation lithium-metal batteries and three AI solutions.

In its Q3 financial report, the company reported that its 100-A·h lithium-metal B-sample cells successfully passed GB 38031-2020, a major milestone toward C-samples and start of production for EVs. The company's urban air mobility lines have completed site acceptance tests, and it has signed cell supply agreements, including with SoftBank. SES AI is building on its revenue pipeline from electrolyte projects for lithium-metal and lithium-ion battery programs.


Adopting Grace architectures

During SC24, Altair announced that several products from its HyperWorks design and simulation platform now support Nvidia Grace CPU and Grace Hopper superchip architectures. According to the company, the support highlights Altair's deep collaboration with Nvidia and gives customers more flexibility to run Altair tools on Nvidia and Arm architectures.

“Altair’s solutions achieving peak performance on Grace and Grace Hopper marks a pivotal moment in our commitment to our customers’ needs,” said Sam Mahalingam, CTO at Altair. “Leveraging Nvidia’s CPU and GPU architectures, we are poised to drive artificial intelligence innovation and deliver substantial performance and sustainability gains.”

Internal benchmarks for select Altair solutions showed the Hopper GPU series ran simulations up to twice as fast as its predecessor and enabled Altair solvers to achieve breakthrough run times.

“Giving customers access to Altair’s powerful software tools on our Grace CPU and Grace Hopper Superchip architectures will empower them to push the boundaries of innovation and design across multiple industries,” said Tim Costa, Senior Director of CAE/EDA, Quantum, and HPC at Nvidia.

The support demonstrates Altair technology’s ability to run on alternative CPU and CPU-GPU architectures that help meet the global market’s urgent needs for energy efficiency and speed. The Grace and Grace Hopper architectures—built specifically for HPC and AI workloads—offer performance, efficiency, and scalability improvements that can transform how organizations simulate at scale in a power-constrained data center.

According to a recent internal study performed by Lenovo using OpenRadioss, Nvidia Grace CPUs deliver up to 2.2 times higher energy efficiency for running CAE simulations compared with Lenovo’s reference server configuration.