
Meta’s Ray-Ban Smart Glasses Introduce Multimodal AI Features in Early Access Preview

In this post:

  • Meta’s Ray-Ban Smart Glasses now offer a preview of their new AI features, allowing users to take photos and get AI assistance for fashion and more.
  • Users in the United States can pull up real-time information through Microsoft’s Bing without joining the early access program, extending what the glasses can answer.
  • This update brings practical AI to smart eyewear, making it easier for users to interact with the world and receive personalized information.

Meta, formerly known as Facebook, is rolling out a significant update to its Ray-Ban Smart Glasses, introducing multimodal AI capabilities to enhance user experiences. 

This update lets users put the onboard camera to work with AI, sharing what they see directly with Meta AI. While these smart glasses lack a built-in display, their AI-driven features promise practical help with everyday tasks.

Meta’s Ray-Ban Smart Glasses have taken a unique approach to wearable technology by incorporating a 12-megapixel camera that facilitates first-person captures. 

While other smart glasses often chase mixed and augmented reality, these glasses prioritize practicality and user engagement.

The Early Access Preview: Users based in the United States can now opt in to the early access program via the Meta View app, available for both Android and iOS devices. This preview represents a notable development in the world of smart eyewear.

AI-enhanced contextual understanding

The key feature of this update is the integration of multimodal AI capabilities into the Ray-Ban Smart Glasses. Unlike traditional AI tools that rely on text prompts for interpretation, multimodal AI can process information in various forms, enabling more accurate and contextualized results.
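
To make the distinction concrete, the sketch below is a hypothetical Python illustration, not Meta’s actual interface; the MultimodalPrompt type and ask_assistant function are invented for this example. It shows the general shape of a multimodal request: the camera image travels together with the spoken question, so the model can ground its answer in what the user actually sees rather than in text alone.

from dataclasses import dataclass

@dataclass
class MultimodalPrompt:
    text: str           # the spoken question, e.g. "What would go well with this shirt?"
    image_jpeg: bytes   # the photo captured by the glasses' onboard camera

def ask_assistant(prompt: MultimodalPrompt) -> str:
    # Placeholder for a call to a vision-language model endpoint; a real
    # implementation would send prompt.image_jpeg alongside prompt.text so
    # the model can use the image as context for its answer.
    return f"Answering {prompt.text!r} using a {len(prompt.image_jpeg)}-byte image for context"

if __name__ == "__main__":
    photo = b"\xff\xd8\xff\xe0" + b"\x00" * 100  # stand-in for real JPEG bytes from the camera
    print(ask_assistant(MultimodalPrompt(
        text="What would go well with this shirt?",
        image_jpeg=photo,
    )))

In a text-only prompt the model has to guess at the missing context; bundling the image removes that guesswork, which is what enables the fashion and object-recognition scenarios described below.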


Unlocking Context: With the onboard camera, users can capture and share images of their surroundings with Meta AI, allowing it to gain a deeper understanding of the context. 

For instance, if a user is pondering what to wear with a specific clothing item, they can simply snap a photo of it. Meta AI can then identify the clothing item and provide tailored style suggestions, enhancing the user’s fashion choices.

Beyond Fashion: The potential applications are not limited to fashion advice. Users can also ask Meta AI to identify objects, provide information about locations, and even recognize landmarks. This multimodal approach enables a more intuitive and informative interaction with the world.

Access to real-time information with Microsoft’s Bing

In addition to the multimodal AI capabilities, Meta has collaborated with Microsoft’s Bing to provide users with access to real-time information. This partnership enhances the scope of Meta’s AI by offering up-to-date information on global events, web content, and more.
