In September 2025, Meta, the owner of social media platforms Facebook, Instagram and WhatsApp, unveiled three new types of smart glasses, each controlled by a wristband and boasting a “voice-based artificial intelligence assistant that can talk through a speaker and see through a camera,” according to the New York Times.
Given the tech industry’s record of rolling out products before assessing potential harms and misuse, it’s worth contemplating how these new smart glasses, which Meta founder and CEO Mark Zuckerberg claims will eventually “deliver personal superintelligence,” may be used for nefarious purposes.
The ability to livestream from Meta glasses is already a product feature, and as more of these devices proliferate in the market, we should anticipate their use in cyberattacks.1
As businesses embrace digital transformation, smart glasses and other wearable gadgets are increasingly making their way into boardrooms. These devices, often equipped with cameras, microphones, and connectivity features, promise productivity gains and seamless collaboration. However, when paired with AI agents capable of listening for code words and triggering automated actions, they introduce significant risks that every business leader needs to understand.
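To make that risk concrete, the sketch below (purely hypothetical, not tied to any Meta or third-party API) shows how little logic is needed to connect an always-listening transcript feed to a code-word trigger; the transcript_stream and automated_action functions are stand-ins for a wearable's speech-to-text pipeline and whatever side effect an agent has been granted.

```python
# Minimal sketch (hypothetical): wiring a wearable's always-on microphone feed
# to an "agent" that listens for a code word and fires an automated action.
# Transcription is stubbed out; a real device would use an on-board or
# cloud speech-to-text service.
from typing import Iterator

CODE_WORD = "greenlight"  # assumed trigger phrase, purely illustrative

def transcript_stream() -> Iterator[str]:
    """Stand-in for a live speech-to-text feed from the glasses' microphone."""
    yield from [
        "let's review the quarterly numbers",
        "greenlight the acquisition draft",  # contains the trigger phrase
    ]

def automated_action(utterance: str) -> None:
    """Stand-in for any agent-triggered side effect, e.g. starting a
    recording, capturing a snapshot, or messaging an external service."""
    print(f"[agent] action triggered by: {utterance!r}")

def run_agent() -> None:
    # The risk: every utterance in the room passes through this loop,
    # whether or not participants know a device is listening.
    for utterance in transcript_stream():
        if CODE_WORD in utterance.lower():
            automated_action(utterance)

if __name__ == "__main__":
    run_agent()
```

The point of the sketch is not the code itself but the architecture: once a device in the room streams audio to an agent with permission to act, every conversation becomes potential input to actions no one in the meeting authorised.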
The unauthorised use of personal smart and internet-connected devices for work purposes is already an established risk, commonly known as shadow IT. Today’s smart glasses present a unique challenge: they are designed to be unobtrusive, making them inherently covert and difficult to detect. Ray-Ban Meta smart glasses are almost indistinguishable from regular glasses. This makes it difficult for employers to meet their legal obligation to be transparent about how their employees’ data is collected and processed.