Meta's Smart Glasses Debut Plagued by Technical Issues, Raising Questions About Real-World Usability
SAN MATEO, CA – Meta's highly anticipated showcase of its new smart glasses at the Connect conference this week was marred by a series of technical difficulties, highlighting the challenges of bringing AI-powered wearable technology to consumers. During a live demonstration, an attempt to use the glasses' voice assistant for a cooking recipe triggered a cascade of unintended activations as hundreds of attendee devices responded simultaneously to the "Hey Meta" wake word.
In a post-event Instagram Reel, Meta CTO Andrew Bosworth attributed the issue to a self-inflicted distributed denial-of-service (DDoS) caused by the density of Meta's AI instances operating in close proximity. Further demonstrations, including a video call, also experienced lags and interruptions.
The rocky launch underscores the gap between the promise of seamless AI integration and the current reality of the technology. "The main problem for me is the raw amount of times where you do engage with an AI assistant and ask it to do something and it doesn't actually understand," said Leo Gebbie, a director and analyst at CCS Insight. "The failure risk just is high, and the gap is still pretty big between what's being shown and what we're actually going to get."
While Meta envisions smart glasses as a future computing platform capable of enhancing cognitive function, the Connect keynote suggested the technology may currently introduce social awkwardness. The demonstrations were characterized by “timid exchanges, repeated commands, and wooden conversations,” indicating the limitations of current voice assistant capabilities in real-world scenarios.
The glasses feature live captions displayed on the lenses, as Meta showcased. However, the experience as publicly demonstrated raises concerns about whether the benefits of hands-free information access outweigh the potential for frustrating interactions and a perceived social disadvantage.