Meta Faces Class Action Over Privacy of AI-Powered Smart Glasses

Alysa Gavilan

Alysa Gavilan has spent years exploring photography through photojournalism and street scenes. She enjoys working with both film and mirrorless cameras, and her fascination with the craft has grown over the decades. Inspired by Vivian Maier, she is drawn to capturing everyday moments that often go unnoticed.


A class action lawsuit filed in federal court in San Francisco accuses Meta Platforms of misleading consumers about the privacy protections of its AI-powered smart glasses.

The suit raises new concerns about how images and video captured through wearable cameras may be used to train artificial intelligence systems.

According to a report by Engadget, the lawsuit claims Meta failed to disclose that footage recorded through some features of its glasses could be reviewed by human contractors as part of its AI development pipeline.

The case centers on whether the company’s marketing statements accurately represented how data from the glasses is handled.

Privacy Claims

The proposed class action was filed by Clarkson Law Firm and names two plaintiffs who purchased Meta’s smart glasses in California and New Jersey. The complaint alleges that both individuals relied on the company’s privacy-related marketing when deciding to buy the device.

According to the report, the lawsuit argues that Meta’s advertising emphasized privacy protections but did not clearly disclose that captured footage could be reviewed by human contractors involved in AI training. The plaintiffs claim they would not have purchased the glasses if they had known about this review process.

The lawsuit seeks monetary damages as well as injunctive relief that could require changes to how Meta communicates privacy features for its wearable devices.


Content Review Contractors

The legal complaint follows reporting by the Swedish newspaper Svenska Dagbladet about subcontractors reviewing footage captured through the glasses. Contractors working in Kenya said they encountered highly personal material while performing tasks that involved labeling objects within video clips.

Workers reportedly described viewing footage that included bathroom visits, sexual encounters, and other private moments. These clips were reviewed as part of the process used to train artificial intelligence systems that analyze images captured by the glasses.

The lawsuit cites these reports as evidence that users may not fully understand how captured images are handled once AI features process them.

How Smart Glasses Process Images

Ray-Ban Meta smart glasses include built-in cameras designed to capture photos and video from the user’s perspective. The glasses also offer AI tools that analyze the environment around the wearer and respond to spoken questions.

According to a report by Engadget, Meta confirmed that some data captured by the glasses can be reviewed by human contractors under certain conditions. A company spokesperson told the publication that content captured by the device usually remains on the user’s phone unless it is shared with Meta or used with AI features.

However, certain multimodal AI functions analyze images of the user’s surroundings in order to answer questions or identify objects. Images processed through these features may be transmitted to Meta’s systems and in some cases reviewed by contractors to improve the performance of the AI models.

Meta has said that it attempts to filter data and remove identifying details before review. Critics argue that the possibility of human review should be disclosed more clearly to users.


Privacy Concerns Around Wearable Cameras

Smart glasses are part of a broader trend toward wearable cameras that capture photos and video hands-free. While these devices offer convenience for everyday photography, they also introduce new privacy challenges because they record scenes continuously from the wearer’s point of view.

When AI features analyze images captured by wearable cameras, companies often use those images to train machine learning systems. That process frequently involves human reviewers labeling objects, locations, or activities within images and video clips.

The complaint also warns that sensitive footage could expose individuals to risks such as identity theft, harassment, or reputational harm if images containing private details are reviewed by third parties.

Concerns about wearable cameras have also appeared in security sensitive environments. For example, a unit of the United States military previously banned the use of smart glasses in certain operational settings due to the risk that the devices could record or transmit sensitive information.

What Happens Next

The lawsuit against Meta is still in its early stages, and the court has not yet ruled on the claims. If the case proceeds, it could examine how companies communicate privacy policies for AI-powered imaging devices.

Meta declined to comment directly on the allegations in the lawsuit but confirmed that some captured data may be reviewed by human contractors when used to improve AI systems.

For photographers and technology users, the case illustrates how rapidly evolving imaging tools are intersecting with privacy law. Smart glasses combine cameras, artificial intelligence, and cloud processing in ways that are still being tested in courts and public policy debates.

