NVIDIA Unveils Generative AI Innovations at CES 2024
At CES 2024, NVIDIA introduced a range of hardware and software designed to unlock the full potential of generative AI on Windows 11 PCs, including the GeForce RTX 40 SUPER Series family of GPUs and the AI Workbench toolkit. Running generative AI locally on the PC matters for applications that are sensitive to privacy, latency, or cost.
Revolutionizing the AI Era on PC
With new innovations across the technology stack, NVIDIA aims to drive the generative AI era on PC. At the heart of this push are RTX GPUs, which can run a wide range of AI applications with top-tier performance.
“Tensor Cores in these GPUs dramatically speed up AI performance across the most demanding applications for work and play.”
Introducing the GeForce RTX 40 SUPER Series
NVIDIA introduced the GeForce RTX 40 SUPER Series family of GPUs, which brings transformative AI capabilities to gaming, content creation, and everyday productivity. The GeForce RTX 4080 SUPER, for instance, generates AI video over 1.5x faster and images over 1.7x faster than the GeForce RTX 3080 Ti.
New Laptops with Generative AI Capabilities
New laptops from top original equipment manufacturers (OEMs) will begin shipping later this month, bringing a full set of generative AI capabilities right out of the box.
RTX Desktops and Mobile Workstations
RTX desktops and mobile workstations, powered by the NVIDIA Ada Lovelace architecture, are designed to meet the challenges of enterprise workflows. These workstations can run NVIDIA AI Enterprise software for simplified, secure generative AI and data science development.
AI Workbench: A Toolkit for Developers
AI Workbench, a unified toolkit for AI developers, will be released in beta later this month. It gives developers the flexibility to collaborate on projects and migrate them to any GPU-enabled environment, and it offers streamlined access to popular repositories such as GitHub.
“Once AI models are built for PC use cases, they can then be optimized to take full advantage of Tensor Cores on RTX GPUs.”
TensorRT: High-Performance AI Inference
NVIDIA recently extended TensorRT, its library for high-performance AI inference, to text-based applications. The latest update to TensorRT-LLM is now available, adding Phi-2 to the growing list of pre-optimized models for PC, which run up to 5x faster than they do on other inference backends.
With these new tools and libraries, PC developers are primed to deliver even more generative AI applications. At CES, NVIDIA and its developer partners are releasing several new generative AI-powered applications and services, marking a significant step forward in the AI era.
From the Windows Blog