Meta says that its new AI model can understand the physical world
Meta says a new AI model it released Wednesday could allow machines to understand the physical world, opening up possibilities such as smarter robots.
According to Meta, the new open-source model, called Video Joint Embedding Predictive Architecture 2, or V-JEPA 2, is designed to help AI understand concepts like gravity and object permanence.
Current models that allow AI to interact with the physical world rely on labeled data or video to mimic reality, but this approach emphasizes the underlying logic of the physical world, such as how objects move and interact. The model allows AI to grasp concepts like the fact that a ball rolling off a table will fall.
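At a high level, a joint-embedding predictive architecture trains a model to predict the latent representation of the hidden parts of a video from the parts it can see, rather than reconstructing raw pixels. The toy PyTorch sketch below illustrates only that general idea; the module names, sizes, masking scheme, and training step are simplified assumptions, not Meta's actual V-JEPA 2 code.

```python
# Illustrative sketch only: a toy joint-embedding predictive setup.
# Encoder/predictor sizes, the masking scheme, and the loss are hypothetical
# simplifications, not Meta's V-JEPA 2 implementation.
import torch
import torch.nn as nn

class ToyEncoder(nn.Module):
    """Maps a flattened video patch to a latent embedding."""
    def __init__(self, patch_dim=768, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(patch_dim, embed_dim), nn.GELU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, x):
        return self.net(x)

class ToyPredictor(nn.Module):
    """Predicts the embedding of a masked patch from visible-patch embeddings."""
    def __init__(self, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.GELU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, visible_embeddings):
        # Summarize the visible context, then predict one masked embedding.
        context = visible_embeddings.mean(dim=1)
        return self.net(context)

encoder, predictor = ToyEncoder(), ToyPredictor()
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(predictor.parameters()), lr=1e-4)

# Fake batch: 8 clips, 16 patches each, 768 values per flattened patch.
patches = torch.randn(8, 16, 768)
visible, masked = patches[:, :12], patches[:, 12]  # hide one patch per clip

pred = predictor(encoder(visible))         # predict in latent space...
with torch.no_grad():
    target = encoder(masked)               # ...against a target embedding,
loss = nn.functional.mse_loss(pred, target)  # never reconstructing raw pixels
loss.backward()
optimizer.step()
print(f"toy prediction loss: {loss.item():.4f}")
```

The key design point the sketch tries to convey is that the loss is computed between embeddings rather than pixels, which is what lets such models focus on the logic of a scene instead of its appearance.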
Meta said the model could be useful for devices such as self-driving cars and robots because it doesn't have to be trained on every possible situation. The company called it a step toward AI that can adapt to the world the way humans do.
One of the struggles in the physical AI space is that training models requires significant amounts of data, which takes time, money and resources. At SXSW earlier this year, experts said synthetic data, training data created by AI, could help prepare more traditional learning models for unexpected circumstances. (In Austin, one example used was the sudden appearance of bats from the city's famous Congress Avenue Bridge.)
Meta said the new model streamlines that process and makes it more efficient for real-world applications because it doesn't rely on as much training data.