
Are AI photo editing apps endangering student privacy & mental health?

Written by Sam Cortez | Feb 8, 2023

The popularity of AI photo editing apps (like Lensa) is skyrocketing. The number of concerns and complaints about their usage is growing just as quickly. These tools — which offer users the ability to automatically transform their images into fantastical portraits — have set off alarm bells relating to data privacy, non-consensual sexualization, and potential content theft. 

These concerns are heightened by a user base that consists primarily of teens and young adults. For an age group that’s already at high risk of being exposed to privacy and monetization schemes online, the last thing any parent wants is another platform seeking to manipulate their child’s emotions, misuse their data, invade their privacy, or negatively impact their mental health.

Parents and educators alike are grappling with the question: Are these tools simply struggling to meet the needs of their target users appropriately, or are they deliberately preying on a vulnerable demographic?

What is Lensa?

Not to be confused with the job search platform by the same name, Lensa is an AI-driven photo editing app created by Prisma Labs. It offers various photo editing features and has recently surged in popularity after adding a new feature called “Magic Avatars.”

For $3.99, the AI will create a selection of colorful, imaginative, highly stylized portraits from submitted selfies.

Compared to similar photo editing apps, Lensa currently holds the spotlight due to its viral popularity — but it is far from the only app of its kind. Dozens of other photo editing tools offer social media users quick, artistically appealing (albeit unrealistic) avatars. 

These apps raise valid concerns. Scrutiny of Lensa has revealed troubling implications around user privacy, mental health, content theft, and the potential sexualization of minors. 

Individually, each of these concerns could point to a combination of fixable human and machine errors. However, taken in aggregate, it is difficult not to question whether these ethically dubious practices are core to the platform’s business model. 

Key Privacy Concerns

It has been roughly 15 years since the first wave of teenagers rushed to social media platforms to broadcast their thoughts, photos, location, and other private information to a wide audience. Since then, our society has learned a great deal about the privacy and mental health risks of these platforms. But there's still a lot to learn. 

While today’s teenagers are generally more aware of the commodification of their data and other online risks than their counterparts were in the mid-to-late ’00s, it's clear that platforms like Lensa continue to devise new means of exploitation and ever-trickier tracking methods.

Data Sharing

The improper collection and distribution of user data is highly prevalent among AI photo editing apps.

Experts have raised flags about the numerous permissions photo editing apps require from users’ devices and the amount of sensitive data they can access. In addition to the camera, these can include permission to access:

  • Users’ files

  • Geographic location

  • The device’s microphone

  • Phone contacts

Many of these apps come from developers with poor ethical track records around privacy. Some have even been known to proliferate adware, spyware, and phishing attacks.

Lensa in particular has come under fire over how it uses and retains photos and facial recognition data. While the app claims to handle data properly, experts say its software and terms of use leave content and data open to future re-use, sharing, and sale.

Today’s audiences may be more aware of the ways companies try to access and profit from their private information, but users must remain vigilant as companies continue to develop innovative tactics to learn as much about their users as possible.

Content Theft

Photo editing apps pose data risks to more than just their users. In many cases, apps illegally source the content that drives their algorithms.

Most modern AI-based apps leverage machine learning. This means that rather than human programmers defining exactly how the AI algorithm works, they create a set of rules that enable the AI to learn for itself by consuming real-world data and content.

Lensa’s avatars are generated by “Stable Diffusion,” an open-source AI model used by several apps. The model teaches itself by analyzing a vast library of images scraped from the internet, learning which visual elements and styles people respond to.
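
For readers curious about the mechanics, the sketch below shows how low the technical barrier is: a few lines of Python against a freely downloadable Stable Diffusion checkpoint can restyle an ordinary selfie. It uses Hugging Face’s open-source diffusers library; the model ID, prompt, file names, and parameter values are illustrative assumptions, not Lensa’s actual pipeline.

```python
# A minimal sketch of image-to-image generation with an open-source
# Stable Diffusion model via Hugging Face's `diffusers` library.
# NOTE: this is NOT Lensa's actual pipeline -- the model ID, prompt,
# and parameter values below are illustrative assumptions only.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

# Load a publicly released Stable Diffusion checkpoint (example ID).
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# An ordinary selfie is the starting point.
selfie = Image.open("selfie.jpg").convert("RGB").resize((512, 512))

# The text prompt steers the output toward whatever "styles" the model
# absorbed from its internet-scraped training images.
result = pipe(
    prompt="stylized fantasy portrait, digital art",
    image=selfie,
    strength=0.6,        # how far the output may depart from the photo
    guidance_scale=7.5,  # how strongly the prompt is followed
).images[0]

result.save("magic_avatar.png")
```

Because the model was trained on images scraped from the open internet, its output reflects whatever biases that corpus contains, including the skew toward sexualized depictions of women discussed below.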

The problem with many commercially popular photo apps is that they may not be sourcing data ethically. Numerous credible allegations suggest these companies are taking the work of online artists and creators without their permission and using it to “train” their algorithms.


As a result, an AI tool could scrape the work of a digital artist, “feed” it to its machine learning algorithm, and create an eerily similar “original” image for someone else.

This wouldn’t necessarily be an issue if the source content were used fairly and legally. However, artists are understandably concerned about third parties profiting from their work without their awareness or consent.

Unwelcome Exposure

Lensa’s terms of use expressly prohibit the submission of nude photos, which makes it all the more concerning when users receive non-consensual nudes generated by the app.

In addition to outright nudity, many users have noticed the app adds unwelcome sexualization to their photos. As mentioned above, the AI algorithm works by scraping popular images from the internet to teach itself to produce similar images.

The library of images available to Lensa contains a high number of sexualized assets (particularly of young women) due to the popularity of this type of content online.

These libraries help “teach” AI-based apps that suggestive images are the most desirable. As a result, many of the images generated by Lensa and similar apps incorporate sexualized elements into user photos — regardless of age. This raises concerns around body image and mental health, and it constitutes a massive privacy violation.


Exposing a person’s body without their explicit consent — even in an artificially generated format — is a clear violation of that person’s bodily autonomy and privacy. Paired with the aforementioned concerns about the retention and selling of user photos, the problem becomes significantly more serious.

It’s hard to ignore the parallel between the rise of photo editing apps and the ongoing legal conversation around non-consensual pornography in the US. Victims are still fighting for the right to remove explicit online images of themselves published without their consent.

While lawmakers made some progress on this matter in 2022, the law still doesn’t necessarily protect victims from the non-consensual proliferation of artificially generated nudes.

The Impact of Photo Editing on Mental Health

Automatic photo editing can be a wonderful thing. The ability to modify lighting, saturation, and other elements of an image can help those with limited editing experience create beautiful imagery.

However, the problem is that smart photo editing software goes beyond editing images by altering the physical traits of the subject.

Many observers note that Lensa’s Magic Avatars, which some have dubbed “hot selfies,” attempt to artificially beautify subjects by perpetuating familiar, unrealistic body standards, such as:

  • Removing unique facial features deemed “imperfections”

  • Making male-presenting torsos leaner and more muscular

  • Making female-presenting torsos thinner and curvier

  • Adding artificial symmetry to faces and torsos

  • Making subjects appear younger than their natural age

This feature is no accident; it’s a selling tactic. As of this writing, Lensa’s own marketing page promises to “Perfect the facial imperfections with tons of cool tools.”


Digitally editing professional photography has long been at the center of the body image discussion, but now AI photo editing apps are bringing self-esteem problems home. Many users of the app have posted publicly about the feelings of discomfort, shame, and negative self-talk that they’ve experienced after seeing their own stylized AI-generated images.

It’s difficult not to notice all of the ways a stylized selfie differs from natural, unfiltered selfies. The underlying message the app promotes is that the changes it has artificially made to your selfie make you more attractive. This could lead even adults to wonder, Am I attractive enough?

Model Sophie Hughes shared her feelings on seeing digitally edited photos of herself as a professional model:

“For years all I saw was photoshopped images of myself to the point where I barely even recognized myself — my jaw was more chiseled, my nose was slimmer, lips bigger, my hips were non-existent. This was incredibly damaging and in my personal experience, shattered my self esteem, deepened my disordered eating and deepened my hatred for my body which I held to a completely unrealistic standard.”

The problem is poised to disproportionately affect women and young girls, as the Stable Diffusion algorithm learns from a dataset of available and popular images on the internet, which “has an extreme leaning towards pornified content containing women.” The massive popularity of sexualized content portraying women online leads the app to over-sexualize women more often than not. 

Furthermore, the AI cannot avoid learning and internalizing the racial biases inherent in the internet’s preferences and definitions of female beauty, which highlight white, European features over all others.

This means a woman’s results may differ depending on her race and ethnicity, and non-white women are more likely to see their stylized selfies incorporate “anglicized” features — such as lightening their skin and hair and altering other facial features to match traditional Western beauty ideals.

The impact on mental health, particularly for youth and those who identify as female, cannot be overstated. Student mental health has reached extreme lows in the wake of the pandemic, a period that saw a rise in eating disorders among other mental health conditions.

Negative body image has been linked to higher rates of depression, anxiety, and self-harm — particularly for LGBTQ youth.

Now, in addition to the impossible body standards already perpetuated in media, kids and young adults have a new unattainable beauty standard to contend with thanks to apps like Lensa: a “perfect” version of themselves, generated with the click of a button.

The Sexualization of Youth

Photo-altering apps pave a disastrous path toward crossing one of the most universally agreed-upon lines of legality and morality: the sexualization of minors.

Consider what we have already learned about Lensa, whose algorithms create a “perfect storm” of objectification. When you take each of Lensa’s unethical editing practices into consideration, a real — however unlikely — nightmare scenario emerges:

Theoretically, a young adult user could submit perfectly normal PG-rated selfies to the app and in exchange receive infantilized sexual imagery that may never be removed from circulation.

While encountering all of these problems at once may not be every user’s experience, the risk is there and it’s important that school districts and parents keep an eye out. 

Parents should pay close attention to how their children are uploading images online; not only do apps like Lensa pose privacy risks, but as previously mentioned, regular use can result in body dysmorphia and further perpetuate unrealistic Hollywood ideals around beauty and worthiness.

Talk to your children about whether they use Lensa or similar apps. Youth who use the app can quickly become enamored by its "magic." Invest in parental control apps like Qustodio to help you keep a pulse on your children's online safety and app usage.