NVIDIA has just introduced the Jetson AGX Orin™, which it bills as the world’s “smallest, most powerful and energy-efficient AI supercomputer.” The debut came at the Santa Clara, California, company’s GTC event, described as “a global online AI conference for developers, researchers, thought leaders and technology decision-makers.”
Targeted use cases for the Jetson AGX Orin include robotics, autonomous machines, medical devices and other forms of embedded computing at the edge. It combines an NVIDIA Ampere architecture GPU and Arm Cortex-A78AE CPUs with “next-generation deep learning” and vision accelerators.
“The whole device fits in the palm of your hand,” enthused Deepu Talla, vice president and general manager of embedded and edge computing at NVIDIA. He explained that it is pin- and form-factor compatible with the previous generation, so customers can take an existing design and simply plug the new module in. “It runs the same software” while offering six times the performance and drawing as little as 15 watts. Talla said the product will be available in Q1 2022.
Inside Jetson
Jetson AGX Orin delivers 200 trillion operations per second, performance similar to that of a GPU-enabled server, making it possible to accelerate the full NVIDIA AI software stack. The upshot is that Jetson AGX Orin gives developers the ability to deploy the largest, most complex models needed to solve edge AI and robotics challenges in natural language understanding, 3D perception and multisensor fusion.
The new Jetson AGX Orin runs the same software used in the company’s data center and workstation GPUs, Talla said, “because it’s based on the same architecture.” The first layer of software on Jetson is the JetPack SDK, which includes a Linux-based operating system and a complete board support package. Other elements include several AI tools and components for robotics and smart city applications.
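As a rough illustration of that Linux foundation, the short Python sketch below reports the L4T (Linux for Tegra) release string that a JetPack install leaves on the device. The /etc/nv_tegra_release path is the conventional location on JetPack systems and is an assumption here, not something drawn from the announcement.

    import platform
    from pathlib import Path

    # Minimal sketch: report the L4T (Linux for Tegra) release on a Jetson device.
    # Assumes the conventional /etc/nv_tegra_release file that JetPack installs.
    RELEASE_FILE = Path("/etc/nv_tegra_release")

    def describe_system() -> str:
        base = f"Kernel: {platform.release()} ({platform.machine()})"
        if RELEASE_FILE.exists():
            return f"{base}\nL4T release: {RELEASE_FILE.read_text().strip()}"
        return f"{base}\nNo L4T release file found (not a JetPack system?)"

    if __name__ == "__main__":
        print(describe_system())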
“We continue to add to that,” Talla said of the JetPack stack. Those components include the NVIDIA CUDA-X™ accelerated computing stack and NVIDIA tools for application development and optimization, including cloud-native development workflows. Pretrained models from the NVIDIA NGC™ catalog are optimized and ready for use with the NVIDIA TAO toolkit and customer datasets.
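The TAO toolkit itself is not shown in the announcement, but the workflow it supports, starting from a pretrained model and adapting it to a customer dataset, can be sketched with generic PyTorch transfer learning. The snippet below is that generic sketch, not TAO: the class count, frozen backbone and training step are illustrative placeholders.

    import torch
    from torch import nn
    from torchvision import models

    # Generic transfer-learning sketch (not the TAO toolkit): start from a
    # pretrained backbone and retrain only a new task head on customer data.
    NUM_CLASSES = 4  # placeholder for the customer dataset's label count

    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False                           # freeze pretrained features
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)   # new classification head

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """One optimization step on a batch drawn from the customer dataset."""
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()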
As a result, Talla noted, even on the same hardware it’s not uncommon for NVIDIA to achieve two to four times faster performance over a period of one or two years. “That capability has been very refreshing to many of our customers and developers who invest once on the Jetson platform. The same software runs on all Jetson platforms, so they’re able to do entry-level products with Jetson Nano, or do high-end autonomous machine products like delivery robots, which require multiple sensors.”
Jetson is an open platform, and NVIDIA has built an ecosystem of third-party companies that provide services such as fleet management and over-the-air (OTA) updates. The Jetson partner ecosystem also includes cameras and other multimodal sensors, carrier boards, hardware design services, AI and system software, developer tools and custom software development. “Our customers have the flexibility of developing their own or going to partners or using NVIDIA technologies,” Talla added.
For specific use cases, software frameworks include NVIDIA Isaac Sim™ on Omniverse for robotics, NVIDIA Clara Holoscan™ SDK for healthcare and NVIDIA DRIVE™ for autonomous driving. The latest Isaac release includes support for the Robot Operating System (ROS) developer community. NVIDIA has also released the new Omniverse Replicator for synthetic data generation, as well as Isaac GEMs, hardware-accelerated software packages that make it easier for ROS developers to build high-performance AI-enabled robots on the Jetson platform.
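For readers unfamiliar with ROS, the sketch below is a minimal ROS 2 node in Python. It uses only the standard rclpy API rather than Isaac GEMs, so it is a plain illustration of the kind of node a Jetson-based robot application might run; the node and topic names are invented for the example.

    import rclpy
    from rclpy.node import Node
    from std_msgs.msg import String

    class StatusPublisher(Node):
        """Minimal ROS 2 node: publishes a heartbeat message twice per second."""

        def __init__(self):
            super().__init__("status_publisher")           # node name (example only)
            self.pub = self.create_publisher(String, "robot_status", 10)
            self.timer = self.create_timer(0.5, self.tick)

        def tick(self):
            msg = String()
            msg.data = "ok"
            self.pub.publish(msg)

    def main():
        rclpy.init()
        node = StatusPublisher()
        try:
            rclpy.spin(node)
        finally:
            node.destroy_node()
            rclpy.shutdown()

    if __name__ == "__main__":
        main()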
The new Jetson release, Talla concluded, is part of “the transformation of NVIDIA into a full-stack computing company.”