According to a recent investigation by Swedish media, Meta's Ray-Ban AI smart glasses pose serious privacy risks. The report states that a large amount of sensitive video footage captured by the device is being sent to human reviewers in Nairobi, Kenya, for annotation and training of artificial intelligence models.

The investigation found that the videos sent to overseas contractors included extremely private moments, such as users using the bathroom, being naked, and intimate activities. Although Meta claims the glasses are designed to protect privacy and automatically blur faces in the footage, interviewees from Kenya revealed that due to technical flaws, the blurring often fails, leaving users' faces clearly visible.

The incident has triggered a strong legal backlash. Meta is currently facing at least one class-action lawsuit accusing it of violating false advertising and privacy laws. The suit alleges that Meta concealed the fact that using the AI features could expose consumers' private lives to strangers on the other side of the world.

Meta has not yet provided further explanations regarding the mechanism through which sensitive data flows into the manual review process. This scandal has once again sparked public concerns about the boundaries of data collection by wearable AI devices in both public and private spaces.

Key Points

  • 🚨 Sensitive Privacy Leaks: Human annotators in Kenya confirmed that they routinely see extremely private scenes of home life and intimate moments in the footage they review.

  • 🛠️ De-identification Failures: Although the system is supposed to blur faces, in practice the blurring often malfunctions, leaving users' identities exposed.

  • ⚖️ Facing Legal Action: Meta is accused of false advertising and of failing to fully inform consumers about the extent of human review involved.

  • 🌍 Risks of Global Crowdsourcing: AI training relies on manual annotation by workers in low-cost regions, resulting in sensitive data being transferred across borders without adequate oversight.