Meta has announced an open-source AI model called ImageBind that links multiple streams of data, including text, audio, visual data, temperature and movement readings. The core concept of the research is linking these types of data into a single multi-dimensional index, or embedding space, according to a report by The Verge.
This comes after the social media company announced the expansion of its Ads on Reels monetisation program, which pays creators on Facebook based on the performance of their Reels.
Meta said in a blog post that the six types of data included in its new model are: visual (in the form of both image and video); thermal (infrared images); text; audio; depth information; and movement readings generated by an inertial measurement unit, or IMU.
The social media giant added that the model helps advance AI by enabling machines to analyse many forms of information together. For example, while Meta’s Make-A-Scene currently generates images from text prompts, ImageBind could upgrade it to generate images from audio, such as an image based on the sounds of a rainforest or a bustling market.
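To make the idea of a single multi-dimensional index concrete, here is a minimal, hypothetical Python sketch of cross-modal retrieval in a shared embedding space. It is illustrative only and does not use Meta's actual ImageBind code: the encoders are faked with random vectors, and every name in it (noisy, rain_concept, image_index, audio_query) is an assumption invented for the example.

import numpy as np

rng = np.random.default_rng(0)
DIM = 8  # real models use hundreds or thousands of dimensions

def noisy(base: np.ndarray, scale: float = 0.1) -> np.ndarray:
    """Return a unit vector near `base`, standing in for an encoder output."""
    v = base + rng.normal(scale=scale, size=DIM)
    return v / np.linalg.norm(v)

# Hypothetical "concept" directions in the shared space. A trained joint
# model learns to place related inputs from different modalities near the
# same direction; here we fake that by construction.
rain_concept = rng.normal(size=DIM)
market_concept = rng.normal(size=DIM)

# Pretend image embeddings, as if produced by an image encoder.
image_index = {
    "rainforest.jpg": noisy(rain_concept),
    "market.jpg": noisy(market_concept),
}

# Pretend audio embedding of a rain recording, as if produced by the audio
# encoder of the same joint model.
audio_query = noisy(rain_concept)

# Cross-modal retrieval: cosine similarity (dot product of unit vectors)
# between the audio query and every indexed image embedding.
scores = {name: float(vec @ audio_query) for name, vec in image_index.items()}
print(scores)
print("closest image:", max(scores, key=scores.get))

The point of the sketch is simply that once audio, images and the other modalities share one space, a nearest-neighbour lookup works across modalities without any logic specific to a particular audio-and-image pairing.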
ImageBind also opens up possibilities for researchers to develop new holistic systems, such as combining 3D and IMU sensors to design or experience immersive virtual worlds.