Express & Star

Deepfake technology used to aid autonomous car development

Software is being used to help simulate different road environments.

Cutting-edge deepfake technology is being used to speed up the development of self-driving cars.

The system’s ability to generate thousands of realistic images in minutes allows self-driving systems to ‘learn’ different driving environments and begin to adapt to them appropriately. Developers can recreate different times of day and weather conditions, all while keeping the car in a virtual space rather than a physical one.
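As an illustration of the kind of condition sweep described above, the sketch below requests synthetic frames across a grid of times of day and weather types. The SceneGenerator class and its render method are invented for illustration only and are not Oxbotica's actual interface.

```python
from itertools import product

# Hypothetical interface for a deepfake-style scene generator; the class and
# method names are invented for illustration, not a real Oxbotica API.
class SceneGenerator:
    def render(self, time_of_day: str, weather: str, count: int) -> list:
        """Return `count` synthetic frames for the given conditions."""
        return [f"frame({time_of_day}, {weather}, {i})" for i in range(count)]

TIMES = ["dawn", "noon", "dusk", "night"]
WEATHER = ["clear", "rain", "fog", "snow"]

generator = SceneGenerator()
dataset = []
for time_of_day, weather in product(TIMES, WEATHER):
    # Thousands of frames per condition can be produced without the car
    # ever leaving the virtual environment.
    dataset.extend(generator.render(time_of_day, weather, count=1000))

print(f"{len(dataset)} synthetic training frames generated")
```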

It allows developers to learn how autonomous cars react to different situations without having to carry out any real-world testing.

Deepfake technology generates fake photo-realistic images – and has recently shot to fame through several viral internet videos.

The system is being utilised by Oxbotica to speed up the development of autonomous vehicles and to ensure they are capable of dealing with a variety of situations.

Paul Newman, Co-Founder and CTO at Oxbotica, said: “Using deepfakes is an incredible opportunity for us to increase the speed and efficiency of safely bringing autonomy to any vehicle in any environment – a central focus of our Universal Autonomy vision.

“What we’re really doing here is training our AI to produce a syllabus for other AIs to learn from. It’s the equivalent of giving someone a fishing rod rather than a fish. It offers remarkable scaling opportunities.

“There is no substitute for real-world testing but the autonomous vehicle industry has become concerned with the number of miles travelled as a synonym for safety. And yet, you cannot guarantee the vehicle will confront every eventuality, you’re relying on chance encounters.

“The use of deepfakes enables us to test countless scenarios, which will not only enable us to scale our real-world testing exponentially; it’ll also be safer.”

The data is generated using a cycle of two evolving artificial intelligence systems. One creates increasingly convincing fake images while the other tries to work out which are real and which have been generated. Over time, the two become smarter as they attempt to outwit one another. Then, when the detecting system is unable to spot the difference, the deepfake module can be used to teach other systems what it has learned.
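This cycle matches the standard generative adversarial network (GAN) setup. Below is a minimal training-loop sketch in Python, assuming PyTorch and a toy 64x64 image task; the architectures and hyperparameters are illustrative and this is not Oxbotica's system.

```python
import torch
import torch.nn as nn

LATENT_DIM = 100

# Generator: turns random noise into a fake image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, 3 * 64 * 64), nn.Tanh(),
)

# Discriminator: scores how likely an image is to be real.
discriminator = nn.Sequential(
    nn.Linear(3 * 64 * 64, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

def train_step(real_images: torch.Tensor) -> None:
    """One round of the 'outwit each other' cycle."""
    batch = real_images.size(0)
    real_flat = real_images.view(batch, -1)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) The detector learns to tell real images from generated ones.
    noise = torch.randn(batch, LATENT_DIM)
    fake_flat = generator(noise).detach()
    d_loss = loss_fn(discriminator(real_flat), real_labels) + \
             loss_fn(discriminator(fake_flat), fake_labels)
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2) The generator learns to produce images the detector labels as real.
    noise = torch.randn(batch, LATENT_DIM)
    g_loss = loss_fn(discriminator(generator(noise)), real_labels)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

# Example: one step on a dummy batch of "real" road-scene images in [-1, 1].
train_step(torch.rand(16, 3, 64, 64) * 2 - 1)
```

Once the discriminator can no longer reliably separate real from generated frames, the trained generator becomes the source of synthetic training data for other systems, as the article describes.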
