Nvidia unveils plans for the future of generative AI tech.

Article Summary

TLDR: Nvidia Rolls Out Blueprints For The Next Wave Of Generative AI

Key Points:

  • Nvidia introduces its NIM strategy to make it easier and faster for developers to create AI applications.
  • NIMs are part of the second wave of generative AI, in which enterprises apply their institutional knowledge to business processes.

Hardware is always the star of Nvidia’s GPU Technology Conference, and this year the company previewed its “Blackwell” datacenter GPUs along with other components. Nvidia also introduced NIMs (Nvidia Inference Microservices), which make it easier and faster for developers to create AI applications. NIMs are part of Nvidia’s larger plan for generative AI tools that help enterprises apply their institutional knowledge to running their businesses more efficiently.

Nvidia says its NIMs are optimized for performance and token efficiency beyond other solutions, offering the best total cost of ownership for companies running generative AI on Nvidia systems. At Hot Chips, Nvidia introduced NIM Agent Blueprints, which give developers reference AI workflows and sample applications for common use cases as starting points for custom generative AI applications.
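NIMs are typically packaged as containers that expose a standard chat-completions style API, so a developer can call a deployed NIM much like any hosted LLM endpoint. The sketch below is a minimal illustration of that pattern; the endpoint URL, port, and model name are assumptions for the example, not details from Nvidia’s announcement.

```python
# Minimal sketch: querying a locally deployed NIM through an
# OpenAI-compatible chat-completions endpoint.
# The base_url, api_key handling, and model name are illustrative
# assumptions, not values taken from the article.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local NIM endpoint
    api_key="not-used-locally",           # local deployments typically ignore the key
)

response = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",   # example model; substitute the NIM you deploy
    messages=[{"role": "user", "content": "Summarize our onboarding policy in three bullet points."}],
    max_tokens=200,
)
print(response.choices[0].message.content)
```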

These NIM Agent Blueprints are part of a “data flywheel” concept that goes beyond accelerating the model itself: the models are meant to be continuously enhanced and customized with data generated as the AI applications run and interact with users. Nvidia partners with various organizations to deliver the blueprints, fine-tune models, monitor applications, and provide cybersecurity solutions for enterprises.
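To make the flywheel idea concrete, the sketch below shows one hedged way an application might capture its own interactions as training signal: each prompt, response, and optional user rating is appended to a JSONL log that can later be filtered into a fine-tuning or evaluation set. The schema and file name are illustrative assumptions, not part of Nvidia’s tooling.

```python
# Hedged sketch of the "data flywheel" idea: log production interactions
# as JSONL records that can later feed fine-tuning or evaluation.
# The record schema and file path are illustrative assumptions.
import json
import time

def log_interaction(prompt: str, completion: str, feedback: int | None = None,
                    path: str = "flywheel_log.jsonl") -> None:
    """Append one prompt/response pair (plus optional user feedback) to a JSONL log."""
    record = {
        "timestamp": time.time(),
        "prompt": prompt,
        "completion": completion,
        "feedback": feedback,  # e.g. a thumbs up/down collected in the application
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Later, records with positive feedback can be filtered into a fine-tuning set,
# closing the loop between the running application and model customization.
```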

The NIM Agent Blueprints cover scenarios such as digital humans for customer experience, multimodal PDF data extraction for enterprise RAG, and accelerated drug discovery with generative AI. The blueprints run on systems from OEM partners as well as hyperscale cloud systems, supporting enterprises as they develop their own generative AI applications.
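As a rough illustration of the retrieval side of an enterprise RAG scenario like the PDF-extraction blueprint, the sketch below chunks extracted PDF text and ranks the chunks against a query. It assumes text-only extraction with pypdf and TF-IDF similarity from scikit-learn, which are stand-ins for the example, not components of Nvidia’s blueprint.

```python
# Minimal sketch of the retrieval step in a RAG pipeline over PDF content.
# pypdf, scikit-learn, the file name, and the query are assumptions for
# illustration; a production blueprint would use multimodal extraction
# and a real embedding model plus vector store.
from pypdf import PdfReader
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def load_chunks(pdf_path: str, chunk_size: int = 800) -> list[str]:
    """Read a PDF and split its extracted text into fixed-size character chunks."""
    text = " ".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    return [text[i:i + chunk_size] for i in range(0, len(text), chunk_size)]

def retrieve(query: str, chunks: list[str], top_k: int = 3) -> list[str]:
    """Rank chunks against the query with TF-IDF cosine similarity."""
    vectorizer = TfidfVectorizer().fit(chunks + [query])
    chunk_vecs = vectorizer.transform(chunks)
    query_vec = vectorizer.transform([query])
    scores = cosine_similarity(query_vec, chunk_vecs).ravel()
    return [chunks[i] for i in scores.argsort()[::-1][:top_k]]

if __name__ == "__main__":
    chunks = load_chunks("policy_manual.pdf")  # hypothetical enterprise document
    for passage in retrieve("What is the refund policy?", chunks):
        print(passage[:120], "...")
```

The retrieved passages would then be passed, along with the user’s question, to a generative model such as one served by a NIM.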