NVIDIA has announced new initiatives to deliver a suite of perception technologies to the Robot Operating System (ROS) developer community.
NVIDIA and Open Robotics have entered into an agreement to accelerate ROS performance on NVIDIA’s Jetson edge AI platform and GPU-based systems. These initiatives will reduce development time and improve performance for developers seeking to incorporate computer vision and AI/machine learning functionality into their ROS-based applications.
“As more ROS developers leverage hardware platforms that contain additional compute capabilities designed to offload the host CPU, ROS is evolving to make it easier to efficiently take advantage of these advanced hardware resources,” said Brian Gerkey, CEO of Open Robotics. “Working with an accelerated computing leader like NVIDIA and its vast experience in AI and robotics innovation will bring significant benefits to the entire ROS community.”
Open Robotics will enhance ROS 2 to enable efficient management of data flow and shared memory across GPU and other processors present on the NVIDIA Jetson edge AI platform. This will significantly improve the performance of applications that have to process high-bandwidth data from sensors such as cameras and lidars in real time.
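To make the target concrete, here is a minimal sketch, assuming a standard ROS 2 Python setup, of the kind of node this work is meant to speed up: a subscriber consuming a high-bandwidth camera stream. The topic name /camera/image_raw is illustrative, and the sketch uses only today's public rclpy API; the planned shared-memory and GPU data-flow enhancements sit beneath this layer, so application code like this would benefit without changes.

```python
# A minimal sketch of a high-bandwidth sensor consumer, assuming a
# standard ROS 2 (rclpy) installation. The topic name is illustrative;
# the planned data-flow/shared-memory work sits beneath this public API.
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from sensor_msgs.msg import Image


class CameraListener(Node):
    def __init__(self):
        super().__init__('camera_listener')
        # Sensor-data QoS favors fresh samples over guaranteed delivery,
        # the usual trade-off for real-time camera and lidar streams.
        self.sub = self.create_subscription(
            Image, '/camera/image_raw', self.on_frame,
            qos_profile_sensor_data)

    def on_frame(self, msg: Image):
        # A real application would hand the frame to a vision/AI
        # pipeline; logging the resolution stands in for that work here.
        self.get_logger().info(f'Received frame {msg.width}x{msg.height}')


def main():
    rclpy.init()
    rclpy.spin(CameraListener())
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```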
In addition, Open Robotics and NVIDIA are working to enable seamless simulation interoperability between Open Robotics' Ignition Gazebo and NVIDIA Isaac Sim on Omniverse. Isaac Sim already supports ROS 1 and ROS 2 out of the box, and its connections to popular applications such as Blender and Unreal Engine 4 give it access to a rich ecosystem of 3D content.
Ignition Gazebo brings a decades-long track record of widespread use throughout the robotics community, including in high-profile competition events such as the ongoing DARPA Subterranean Challenge.
With the two simulators connected, ROS developers can easily move their robots and environments between Ignition Gazebo and Isaac Sim to run large-scale simulations, and can take advantage of each simulator's advanced features, such as high-fidelity dynamics, accurate sensor models, and photorealistic rendering, to generate synthetic data for training and testing AI models.
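For a concrete picture of how Ignition Gazebo topics already reach ROS 2 today, the sketch below uses the existing ros_ign_bridge package in a ROS 2 launch file. The camera topic and message types are illustrative, and this bridge is separate from the Isaac Sim interoperability described above, which Isaac Sim delivers through its own built-in ROS support.

```python
# A minimal sketch, assuming the ros_ign_bridge package is installed:
# a ROS 2 launch file that relays an Ignition Gazebo camera topic into
# ROS 2. The topic and message types below are illustrative.
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    return LaunchDescription([
        Node(
            package='ros_ign_bridge',
            executable='parameter_bridge',
            # Bridge spec format: /topic@<ROS 2 type>@<Ignition type>
            arguments=['/camera@sensor_msgs/msg/Image@ignition.msgs.Image'],
            output='screen',
        ),
    ])
```

With the bridge running, the simulated camera stream appears as an ordinary ROS 2 topic, so nodes like the subscriber sketched earlier work unchanged against simulated sensors.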
Software resulting from this collaboration is expected to be released in the spring of 2022.