Meta Unveils New AI Models and Tools to Drive Innovation

Meta, the owner of Facebook, announced on Friday that it was releasing a batch of new artificial intelligence (AI) models from its research division, including a "Self-Taught Evaluator" that could reduce the need for human involvement in the AI development process. Meta's Fundamental AI Research (FAIR) team introduced a series of new AI models and tools aimed at advanced machine intelligence (AMI). Notable releases include Meta Segment Anything (SAM) 2.1, an updated model designed for improved image segmentation, and Meta Spirit LM, a multimodal language model that blends text and speech for natural-sounding interactions. Meta claims that Meta Spirit LM is its first open-source multimodal language model that freely mixes text and speech.

New AI Models and Tools from Meta FAIR

Other innovations include Layer Skip, a solution that accelerates generation times for large language models (LLMs), and SALSA, a tool for testing the security of post-quantum cryptography. Meta also released Meta Open Materials 2024, a dataset for AI-driven materials discovery, along with Meta Lingua, a streamlined platform for efficient AI model training.

Meta Open Materials 2024 provides open-source models and data built from 100 million training examples, giving the materials discovery and AI research community an open option.

The Self-Taught Evaluator is a new method for generating synthetic preference data to train reward models without relying on human annotations. Reportedly, Meta’s researchers used entirely AI-generated data to train the evaluator model, eliminating the need for human input at that stage.
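To make the idea concrete, the loop described above can be sketched in a few lines of toy Python. This is an illustrative sketch only, not Meta's implementation: the judge is a simple heuristic standing in for an LLM evaluator, the features stand in for real embeddings, and all function names are hypothetical. It shows the core pattern: an AI judge labels which of two candidate responses is better, and those synthetic preference pairs train a reward model with no human annotation.

```python
# Toy sketch of the Self-Taught Evaluator pattern: synthetic preference
# pairs (labeled by an AI judge, not humans) train a reward model.
import math
import random

random.seed(0)

def featurize(response: str) -> list[float]:
    # Toy features: length and keyword coverage stand in for embeddings.
    return [len(response) / 100.0, float("because" in response)]

def synthetic_judge(a: str, b: str) -> int:
    # Stand-in for an LLM judge: prefers longer, explanatory answers.
    score = lambda r: len(r) + 50 * ("because" in r)
    return 0 if score(a) >= score(b) else 1

# Step 1: generate synthetic preference pairs -- no human labels.
pairs = []
for _ in range(200):
    a = "short answer" + " pad" * random.randint(0, 5)
    b = "longer answer because it explains" + " pad" * random.randint(0, 5)
    pairs.append((a, b, synthetic_judge(a, b)))

# Step 2: train a tiny Bradley-Terry-style reward model by gradient
# ascent, where P(chosen beats rejected) = sigmoid(r(chosen) - r(rejected)).
w = [0.0, 0.0]

def reward(resp: str) -> float:
    return sum(wi * xi for wi, xi in zip(w, featurize(resp)))

lr = 0.5
for _ in range(100):
    for a, b, winner in pairs:
        chosen, rejected = (a, b) if winner == 0 else (b, a)
        margin = reward(chosen) - reward(rejected)
        p = 1.0 / (1.0 + math.exp(-margin))
        grad = 1.0 - p  # push chosen reward up, rejected reward down
        fc, fr = featurize(chosen), featurize(rejected)
        for i in range(len(w)):
            w[i] += lr * grad * (fc[i] - fr[i])

# The trained reward model now prefers judge-approved responses.
print(reward("a detailed reply because it gives reasons") > reward("ok"))
```

In a real system, the heuristic judge would be a strong language model scoring candidate outputs, and the reward model would be a fine-tuned neural network rather than a two-weight linear model; the training signal, however, flows the same way, entirely from AI-generated labels.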