Runway has spent the past seven years building AI tools for generating visuals for the creative industry, and is now exploring new applications for its technology in robotics.
The New York-based company is best known for its video- and photo-generating AI world models, which create a simulated version of the real world. Runway launched Gen-4, its video-generation model, in March, and Runway Aleph, its video-editing model, in July.
As Runway’s world models improved and became more realistic, the company began receiving interest from robotics and self-driving car companies interested in using the technology, Anastasis Germanidis, Runway co-founder and CTO, said in an interview with TechCrunch.
Germanidis said the ability to simulate the world is broadly useful beyond entertainment, which remains a growing market for the company. Whether in robotics or self-driving, he added, simulation makes training robotic policies more scalable and cost-effective.
According to Germanidis, working with robotics and self-driving car companies was not initially part of Runway’s vision when it launched in 2018. It was only after robotics companies and others in different industries reached out that they realized their models had much broader use cases than they initially anticipated.
Germanidis mentioned that robotics companies are currently using Runway’s technology for training simulations. He further explained that training robots and self-driving cars in real-world scenarios is costly, time-consuming, and difficult to scale for companies.
Germanidis clarified that Runway is not aiming to replace real-world training entirely. However, he believes companies can derive significant value from running simulations on Runway’s models because they allow for highly specific testing.
Unlike real-world training, these models make it easy to test specific variables and situations without altering other aspects of the scenario, he added.
He explained that users can take a step back and simulate the impact of different actions. For instance, they can assess the outcome if a car takes one turn over another, or performs a specific action. According to him, recreating those scenarios in the physical world while keeping all other environmental aspects constant and only testing the specific action is very difficult.
Runway is not alone in this endeavor. Nvidia released the latest version of its Cosmos world models, along with other robot training infrastructure, earlier this month.
Germanidis mentioned that the company does not plan to release a completely separate line of models for its robotics and self-driving car clients. Instead, Runway will refine its existing models to better serve these industries and is also forming a dedicated robotics team.
He also stated that while these industries were not included in the company’s initial investor pitches, the investors support this expansion. Runway has secured over $500 million in funding from investors such as Nvidia, Google, and General Atlantic, valuing the company at $3 billion.
Germanidis said the company is built on the principle of simulation and the ability to create an increasingly accurate representation of the world. He emphasized that once you have robust models, they can be applied to a wide range of markets and industries, with even more transformations expected due to the power of generative models.