Meta Unveils New AI Models and Tools to Drive Innovation

Meta's Latest AI Models Accelerate Development and Creativity with Tools and Open-Source Innovations.

Highlights

  • Self-Taught Evaluator: AI-generated data eliminates human involvement in training reward models.
  • Movie Gen: AI-driven HD video and audio creation with personalised options for creative projects.
  • Collaborations: Meta partners with filmmakers like Blumhouse to refine Movie Gen tools.

Meta, the owner of Facebook, announced on Friday that it was releasing a batch of new AI (Artificial Intelligence) models from its research division, including a "Self-Taught Evaluator," which could reduce the need for human involvement in the AI development process. Meta's Fundamental AI Research (FAIR) team introduced a series of new AI models and tools aimed at achieving advanced machine intelligence (AMI).

Notable releases include Meta Segment Anything Model 2.1 (SAM 2.1), an updated model designed for improved image segmentation, and Meta Spirit LM, a multimodal language model that blends text and speech for natural-sounding interactions. Meta claims that Spirit LM is its first open-source multimodal language model that freely mixes text and speech.
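
For developers who want to try the updated segmentation model, the snippet below is a minimal sketch of prompting SAM 2.1 through Meta's open-source sam2 package. The checkpoint path, config name, and image file are assumptions based on the repository's published examples and may not match the files you download.

# Minimal sketch of point-prompted image segmentation with SAM 2.1.
# Assumes the "sam2" package from the facebookresearch/sam2 repository is
# installed and a SAM 2.1 checkpoint plus its matching config are on disk;
# the file names below are assumptions.
import numpy as np
import torch
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

checkpoint = "checkpoints/sam2.1_hiera_large.pt"    # assumed checkpoint path
model_cfg = "configs/sam2.1/sam2.1_hiera_l.yaml"    # assumed config name

# Build the model and wrap it in the image predictor ("cuda" if a GPU is available).
predictor = SAM2ImagePredictor(build_sam2(model_cfg, checkpoint, device="cpu"))

image = np.array(Image.open("photo.jpg").convert("RGB"))  # any RGB image

with torch.inference_mode():
    predictor.set_image(image)
    # Prompt with a single foreground point (x, y) to segment the object there.
    masks, scores, _ = predictor.predict(
        point_coords=np.array([[500, 375]]),
        point_labels=np.array([1]),   # 1 marks a foreground point
        multimask_output=True,
    )

print(masks.shape, scores)  # candidate masks with their confidence scores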

New AI Models and Tools from Meta FAIR

Other innovations include Layer Skip, a solution that accelerates generation in large language models (LLMs) by running only a subset of a model's layers to draft tokens and using the remaining layers to verify and correct the output, and SALSA, a tool for testing the security of post-quantum cryptography. Meta also released Meta Open Materials 2024, a dataset for AI-driven materials discovery, along with Meta Lingua, a streamlined platform for efficient AI model training.
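
To make the Layer Skip idea concrete, the following is a small, self-contained Python sketch of the early-exit drafting and full-model verification loop it is built around. The TinyLM class is a toy stand-in, not Meta's model or code; in a real system the cheap pass would come from exiting a transformer at an intermediate layer.

import random

class TinyLM:
    """Toy stand-in for an LLM with an early-exit path."""
    vocab = list("abcdefgh")

    def full_next(self, context):
        # Deterministic "expensive" full-model prediction for this toy.
        random.seed(hash(tuple(context)) & 0xFFFF)
        return random.choice(self.vocab)

    def early_next(self, context):
        # Cheap early-exit prediction: agrees with the full model most of the time.
        token = self.full_next(context)
        return token if random.random() < 0.8 else random.choice(self.vocab)


def speculative_generate(model, prompt, draft_len=4, max_new=16):
    out = list(prompt)
    while len(out) - len(prompt) < max_new:
        # 1) Draft: propose draft_len tokens with the cheap early-exit pass.
        draft, ctx = [], list(out)
        for _ in range(draft_len):
            token = model.early_next(ctx)
            draft.append(token)
            ctx.append(token)
        # 2) Verify: the full model checks each drafted token in order and keeps
        #    the longest agreeing prefix; the first mismatch is replaced by the
        #    full model's own token, so output quality matches the full model.
        accepted, ctx = [], list(out)
        for token in draft:
            target = model.full_next(ctx)
            if target == token:
                accepted.append(token)
                ctx.append(token)
            else:
                accepted.append(target)
                break
        out.extend(accepted)
    return "".join(out)


print(speculative_generate(TinyLM(), list("ab")))

When the early-exit pass agrees with the full model often, several tokens are accepted per expensive verification pass, which is where the speed-up comes from.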

Meta Open Materials 2024 provides open models and data built on 100 million training examples, offering the materials discovery and AI research community an open-source option.

The Self-Taught Evaluator is a new method for generating synthetic preference data to train reward models without relying on human annotations. Reportedly, Meta's researchers used entirely AI-generated data to train the evaluator model, eliminating the need for human input at that stage.
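
As a rough illustration of that loop, the sketch below assembles synthetic preference pairs and keeps only the judge traces that pick the intended winner, which is the kind of data the Self-Taught Evaluator is trained on. The generate, corrupt, and judge callables are hypothetical stand-ins for real LLM calls, not Meta's code.

def build_synthetic_preferences(prompts, generate, corrupt, judge):
    """Build (prompt, chosen, rejected, judgement) records with no human labels,
    keeping only the judge traces that prefer the intended winner."""
    dataset = []
    for prompt in prompts:
        chosen = generate(prompt)            # an ordinary model response
        rejected = corrupt(prompt, chosen)   # a deliberately degraded variant
        verdict, reasoning = judge(prompt, chosen, rejected)
        if verdict == "A":                   # the judge preferred the intended winner,
            dataset.append({                 # so its reasoning becomes training data
                "prompt": prompt,
                "chosen": chosen,
                "rejected": rejected,
                "judgement": reasoning,
            })
    return dataset


# Toy stand-ins so the sketch runs end to end; real use would call an LLM here.
generate = lambda p: f"A detailed answer to: {p}"
corrupt = lambda p, r: r[: len(r) // 2]      # truncation yields a worse answer
judge = lambda p, a, b: ("A" if len(a) > len(b) else "B", "Response A is more complete.")

print(build_synthetic_preferences(["What does Layer Skip do?"], generate, corrupt, judge))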

"As Mark Zuckerberg noted in a recent open letter, open source AI "has more potential than any other modern technology to increase human productivity, creativity, and quality of life," all while accelerating economic growth and advancing groundbreaking medical and scientific research," Meta said on October 18.


Launch of Meta Movie Gen

Earlier, on October 4, Meta introduced Movie Gen, a suite of AI models capable of generating 1080p videos and audio from simple text prompts. These models generate HD videos, personalised content, and precise edits, outperforming similar industry tools, according to Meta. Movie Gen also supports syncing audio to visuals. The tool is still in development, and Meta is collaborating with filmmakers to refine it; it could have future applications in social media and creative content.

"Our first wave of generative AI work started with the Make-A-Scene series of models that enabled the creation of image, audio, video, and 3D animation. With the advent of diffusion models, we had a second wave of work with Llama Image foundation models, which enabled higher quality generation of images and video, as well as image editing. Movie Gen is our third wave, combining all of these modalities and enabling further fine-grained control for the people who use the models in a way that's never before been possible," Meta said.

Movie Gen has four key capabilities: video generation, personalised video generation, precise video editing, and audio generation. Meta says that these models are trained on a combination of licensed and publicly available datasets.

Meta says it continues to improve these models, which are designed to enhance creativity in ways people might never have imagined. For instance, users could animate a "day in the life" video for Reels or create a personalised animated birthday greeting for a friend to send via WhatsApp, all using simple text prompts.


Collaboration with Filmmakers for Movie Gen

On October 17, Meta announced that, as part of a pilot program, it is collaborating with Blumhouse and other filmmakers to test the tool before its public release. According to the company, early feedback suggests that Movie Gen could help creatives quickly explore visual and audio ideas, though it is not intended to replace hands-on filmmaking. Meta plans to use feedback from this program to refine the tool ahead of its full launch.

"While we're not planning to incorporate Movie Gen models into any public products until next year, Meta feels it's important to have an open and early dialogue with the creative community about how it can be the most useful tool for creativity and ensure its responsible use," says Connor Hayes, VP of GenAI at Meta.

"These are going to be powerful tools for directors, and it’s important to engage the creative industry in their development to make sure they’re best suited for the job," added Jason Blum, founder and CEO of Blumhouse.

Meta is extending the Movie Gen pilot into 2025 to continue developing the models and user interfaces. In addition to collaborating with partners in the entertainment industry, Meta plans to work with digital-first content creators, the company said.

Reported By

Kirpa B is passionate about the latest advancements in Artificial Intelligence technologies and has a keen interest in telecom. In her free time, she enjoys gardening or diving into insightful articles on AI.
