Open-Source Photorealistic Simulation Engine for Self-Driving AI Training Released

Researchers at the Massachusetts Institute of Technology’s (MIT) Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a photorealistic simulator capable of creating highly realistic environments for training autonomous vehicles. The engine, VISTA 2.0, has been released as open source, allowing other researchers to teach their autonomous vehicles to drive in real-world scenarios without the limitations of a fixed real-world data set.

The simulation engine, VISTA 2.0, is not the first hyper-realistic driving simulator for training AI, but until now comparable engines have been proprietary. “Today, only companies have software like the type of simulation environments and capabilities of VISTA 2.0, and this software is proprietary,” said Daniela Rus, MIT Professor and CSAIL Director.

“We’re excited to release VISTA 2.0 to help enable the community to collect their own datasets and convert them into virtual worlds…” said Alexander Amini, a PhD student at CSAIL.

Rus added that with the release of VISTA 2.0, other researchers will finally have access to a powerful new tool for the research and development of autonomous vehicles. Unlike similar engines, VISTA 2.0 has a distinctive advantage: it is built from real-world data while remaining photorealistic.

The team built on the foundations of its previous engine, VISTA, and synthesized a photorealistic simulation from the real-world driving data already available to it. This approach keeps the grounding of real data points while also supporting photorealistic simulation for more complex training.
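At its core, this kind of data-driven simulation replays a recorded drive and re-renders the camera view from wherever the virtual vehicle actually ends up, so a learned policy can deviate from the logged path and still receive consistent observations. The sketch below is a minimal, hypothetical illustration of that closed loop, not the actual VISTA 2.0 API: the trace format, the synthesize_view stand-in, and the kinematic bicycle model are all simplified assumptions.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float    # metres along the recorded route
    y: float    # lateral offset from the recorded path, metres
    yaw: float  # heading, radians

def synthesize_view(recorded_frame: dict, recorded_pose: Pose, virtual_pose: Pose) -> dict:
    """Stand-in for data-driven novel-view synthesis: re-render the recorded
    camera frame from the virtual vehicle's viewpoint. (A VISTA-style engine
    warps real pixels using estimated geometry; here we only record the
    viewpoint change, purely for illustration.)"""
    dx = virtual_pose.x - recorded_pose.x
    dy = virtual_pose.y - recorded_pose.y
    return {"frame": recorded_frame["image"], "viewpoint_offset": (dx, dy)}

def step_bicycle(pose: Pose, speed: float, steer: float,
                 dt: float = 0.1, wheelbase: float = 2.8) -> Pose:
    """Kinematic bicycle model for the virtual vehicle's motion."""
    yaw = pose.yaw + (speed / wheelbase) * math.tan(steer) * dt
    return Pose(pose.x + speed * math.cos(yaw) * dt,
                pose.y + speed * math.sin(yaw) * dt,
                yaw)

# A recorded drive: one camera frame and one ego pose per timestep.
recorded_log = [({"image": f"frame_{i:04d}.png"}, Pose(i * 1.0, 0.0, 0.0))
                for i in range(100)]

# Closed-loop rollout: the virtual car starts half a metre off the logged
# lane position, and the "simulator" keeps producing observations consistent
# with wherever the policy actually drives it.
virtual = Pose(0.0, 0.5, 0.0)
for frame, rec_pose in recorded_log:
    obs = synthesize_view(frame, rec_pose, virtual)
    # Toy stabilizing policy: steer toward the lane centre, damped by heading.
    steer = -0.2 * virtual.y - 1.0 * virtual.yaw
    virtual = step_bicycle(virtual, speed=10.0, steer=steer)

print("final lateral offset (m):", round(virtual.y, 3))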

The engine also allowed the autonomous-vehicle AI to train on a variety of complex situations, such as overtaking, following, negotiating, and multi-agent scenarios, all in a photorealistic environment and in real time. The work showed immediate results: AVs trained with VISTA 2.0 were far more robust than those trained on previous pipelines that relied only on real-world data.
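As a concrete, if toy, picture of what measuring robustness across such scenarios might look like, the hypothetical loop below samples the scenario types named above and reports the fraction of episodes a policy completes without a failure event. The rollout function and its difficulty numbers are invented placeholders; a real evaluation would step the simulator itself with scripted or learned neighbouring vehicles.

```python
import random

# Scenario types mentioned in the article; everything else here is an
# illustrative stand-in, not VISTA 2.0's API.
SCENARIOS = ["overtaking", "following", "negotiating"]

def rollout(policy_noise: float, scenario: str, rng: random.Random) -> bool:
    """Toy episode: succeeds (no crash or off-road event) with probability
    that shrinks as the policy gets noisier and the scenario harder.
    The difficulty values are made-up placeholders."""
    difficulty = {"overtaking": 0.4, "following": 0.1, "negotiating": 0.3}[scenario]
    return rng.random() > difficulty * policy_noise

def success_rate(policy_noise: float, episodes: int = 1000, seed: int = 0) -> float:
    """Robustness metric: fraction of sampled multi-agent episodes the
    policy completes without a failure event."""
    rng = random.Random(seed)
    wins = sum(rollout(policy_noise, rng.choice(SCENARIOS), rng)
               for _ in range(episodes))
    return wins / episodes

print(f"success rate: {success_rate(policy_noise=0.25):.2%}")
```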