Do you have an AI-generated look-alike living online?

  • Advanced AI is producing highly realistic digital doubles, raising important questions about identity security and personal privacy.
  • The creation of AI doppelgängers prompts significant ethical concerns, especially regarding authenticity and the lack of consent in replicating someone’s likeness.
  • Current legal frameworks struggle to address the unique issues posed by AI-generated images, revealing a pressing need for updated regulations to protect individual rights.
  • ExpressVPN, known for its premium VPN download, investigates the evolving world of AI doppelgängers, emphasizing the need for user consent and robust privacy protections.
  • While AI doppelgängers could revolutionize personalized learning and mental health support, their development and use require careful ethical consideration and strong regulatory oversight to ensure they benefit society without compromising individual privacy.

Remember Lensa AI? Back in December 2022, social media feeds were inundated with oddly beautiful images of our friends in fantastical settings, thanks to an AI-powered photo app that was all the rage at the time. 

While making those images was fun, what if your face were used in a similar way but without your permission? That’s a real possibility: many people have already used services that scan their faces, and few regulations govern who may own or use your likeness. 

Welcome to the world of AI-generated doppelgängers, where your digital twin might already exist in the vast networks of the internet without you even realizing it. What implications do AI doppelgängers hold for personal identity when your face can be duplicated without your consent? And how do we handle the fading line between the real and the artificially generated?

Let’s find out. 

Jump to…
How AI-generated faces have evolved
The phenomenon of AI doppelgängers
Legal and ethical implications of AI doppelgängers
Addressing the challenges of AI doppelgängers
How you can protect your digital identity
Can having an AI doppelgänger ever be a good thing?

How AI-generated faces have evolved

The journey of AI in creating hyperrealistic faces traces back to significant advancements in machine learning, particularly the introduction of Generative Adversarial Networks (GANs) in 2014. A GAN pits two neural networks against each other: a generator creates new images, while a discriminator judges how realistic they are, and each network improves by trying to outdo the other. The technology quickly spread beyond research, powering digital art, enhancing computer vision systems, and providing realistic simulations for AI training.
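To make the adversarial idea concrete, here’s a minimal, hypothetical sketch in plain NumPy: a one-dimensional “GAN” in which a linear generator learns to imitate samples drawn from a target distribution, while a logistic-regression discriminator tries to tell real samples from generated ones. Real face generators use deep convolutional networks with millions of parameters; this toy only illustrates the training loop.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data the generator must learn to imitate: samples from N(4, 1).
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator: a linear map from noise to a sample. Discriminator: logistic
# regression that outputs P(sample is real). Both are deliberately tiny.
g_w, g_b = rng.normal(size=(1, 1)), np.zeros(1)
d_w, d_b = rng.normal(size=(1, 1)), np.zeros(1)

def generate(z):
    return z @ g_w + g_b

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(x @ d_w + d_b)))

lr, n = 0.05, 64
for _ in range(3000):
    z = rng.normal(size=(n, 1))
    real, fake = real_batch(n), generate(z)

    # Discriminator step: push P(real) toward 1 and P(fake) toward 0.
    p_real, p_fake = discriminate(real), discriminate(fake)
    d_w -= lr * (real.T @ (p_real - 1) + fake.T @ p_fake) / n
    d_b -= lr * (np.mean(p_real - 1) + np.mean(p_fake))

    # Generator step: move fake samples in whatever direction makes the
    # discriminator more likely to call them "real".
    p_fake = discriminate(generate(z))
    dx = (p_fake - 1) * d_w.T          # gradient of -log D(G(z)) w.r.t. samples
    g_w -= lr * (z.T @ dx) / n
    g_b -= lr * dx.mean(axis=0)

# After training, generated samples should cluster near the real mean of 4.
fake_mean = float(generate(rng.normal(size=(500, 1))).mean())
print(round(fake_mean, 1))
```

The two updates alternate every step, which is the defining feature of GAN training: neither network is trained against a fixed target, only against its current opponent.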

Examples of faces generated by digital art AI; Midjourney

The application of GANs reached a new level of public interaction with the launch of the website ThisPersonDoesNotExist by software developer Phillip Wang. On the site, every refresh generates a new face, lifelike yet completely fictional, showcasing the power and creativity of GAN technology. Wang, inspired by a conversation with AI researcher Ian Goodfellow, built the site on StyleGAN, a model developed by Nvidia and trained on over 70,000 high-resolution images, resulting in faces that challenge our notions of originality and authenticity.

These aren’t real people. They’re lifelike human faces created by an algorithm on the website ThisPersonDoesNotExist.

Hyperrealism in action  

Today’s AI-generated faces, which studies have found are judged to be real even more often than photos of actual humans, are crafted not just to amaze but for practical use in advertising and marketing visuals. Not only do they look real, but they can also be created quickly and at almost no cost, offering an efficient alternative to human models.

A prime example of AI efficiency in the commercial sphere is Aitana Lopez, an entirely AI-generated model with over 311,000 followers on Instagram. Created by The Clueless, a Barcelona design agency, Lopez serves as an influencer who interacts with fans and promotes real products. The agency said that they created Lopez because the character offers reliability and efficiency, reducing the costs and logistical challenges of hiring human models. 

Aitana Lopez is a fashion model, gamer, and fitness lover who has over 311,000 followers on Instagram. She’s also not a real human. Instead, she’s the AI construct of The Clueless, a Barcelona design agency; Instagram

The phenomenon of AI doppelgängers

As we’ve seen with the evolution of AI-generated faces, the capabilities of AI have reached impressive new heights, leveraging advanced algorithms to produce hyperrealistic digital figures. This transition from traditional digital art to creating lifelike, interactive models showcases a significant leap in AI technology. But as we explore this further, we encounter a phenomenon that merges the fantastical with the familiar—AI doppelgängers.

The concept of a doppelgänger—someone who eerily mirrors your appearance but isn’t related to you—comes from the German for “double walker.” These ghostly doubles traditionally served as ominous harbingers, believed to foretell a person’s misfortune or demise. Today, the ancient concept has undergone a technological transformation: using AI, likenesses of real people are being created digitally, potentially to lead a separate online existence.

For instance, in the entertainment industry, AI has been used to create digital versions of deceased actors for movies, bringing nostalgic characters back to the screen. A notable example is Peter Cushing in Star Wars: Rogue One, where AI and CGI were used to recreate his appearance as Grand Moff Tarkin, despite the actor having passed away in 1994. 

We’re feeding AI our digital selves

Apps like Lensa AI popularized turning ordinary selfies into stylized portraits reminiscent of high-fantasy realms. These apps utilize advanced AI algorithms trained on vast datasets of artistic styles and human features, allowing them to replicate and reimagine our faces in various creative forms. 

Towards the end of 2022, Lensa AI-generated self-portraits were all the rage; Instagram

This initial fascination has since transitioned into more serious applications. AI now helps us curate our professional images; it crafts the perfect LinkedIn headshot, tailors resumes and even produces personalized video content for branding. What began as a fun experiment with our digital identities has evolved into a tool for personal and professional self-presentation.

In customer service, AI doppelgängers are also becoming a reality. Companies are employing AI-driven digital agents that can interact with customers using the facial expressions and voices of human customer service representatives, providing a personalized and engaging customer experience.

The same technology is also helping us create deepfakes. These highly realistic and convincing AI-generated images, videos and voice clips can falsely depict anyone in fabricated scenarios. 

These deepfake images showcase familiar celebrity faces with their digitally created counterparts. Source: @jyo_john_mulloor; Instagram

With every selfie uploaded, every voice clip shared, and every video posted, we are actively contributing to AI systems that learn relentlessly from our digital breadcrumbs. When we upload photos to social media or directly into AI tools, those images are stored and transformed from personal snapshots into valuable training data. Here’s how this process works:

  1. Our digital uploads provide the raw material for AI learning.
  2. These images are then tagged and categorized, sometimes even anonymized, to prepare them for the next stages.
  3. The tagged images feed algorithms like GANs, which progressively learn to recognize and replicate human features.
  4. AI uses this training to create new, unique faces that may be eerily lifelike, familiar, or stylistically altered but based on real people.
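Steps 1–3 above are, at a technical level, mostly routine preprocessing. As an illustrative sketch (hypothetical, not any specific service’s pipeline), an uploaded photo is typically cropped to a square, resized, and normalized to a fixed numeric range before it’s fed to a model such as a GAN:

```python
import numpy as np

def preprocess(image, size=64):
    """Center-crop to a square, downsample by striding, scale to [-1, 1]."""
    h, w = image.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    square = image[top:top + side, left:left + side]
    step = max(side // size, 1)
    small = square[::step, ::step][:size, :size]   # crude nearest-neighbor resize
    return small.astype(np.float32) / 127.5 - 1.0  # uint8 [0, 255] -> float [-1, 1]

# A synthetic 100x80 RGB "upload" standing in for a user photo:
upload = np.random.default_rng(1).integers(0, 256, size=(100, 80, 3), dtype=np.uint8)
batch = np.stack([preprocess(upload)])
print(batch.shape)
```

Once thousands of photos are standardized into batches like this, the model no longer sees “your selfie”; it sees anonymous tensors of pixel values, which is exactly why tracing a generated face back to the people it was trained on is so difficult.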

This means that AI can potentially create an AI version of you with only slight differences—i.e., your AI doppelgänger—and you may never know anything about it. This raises significant ethical concerns about who controls and uses our digital likenesses. When AI can replicate our faces for advertisements or political campaigns without our explicit consent, it blurs the lines between personal autonomy and technological exploitation.

This underscores urgent questions: who truly controls these AI-generated images? Who owns the rights to these digital identities? 

Legal and ethical implications of AI doppelgängers

The rise of AI doppelgängers ushers in a slew of legal and ethical challenges, especially concerning ownership and control. These digital entities, which resemble real individuals, navigate a murky area between original creation and direct replication, exposing significant gaps in current copyright and identity rights frameworks.

Legal uncertainties 

Current legal systems are ill-equipped to handle the novel issues posed by AI. In the U.S., for example, the reliance on antiquated copyright laws means that significant aspects of AI-generated content remain largely untested in courts. This scenario mirrors early legal challenges faced during the advent of AI art, suggesting that society needs new legal precedents to clarify copyright in the era of AI.

Take the case of Scarlett Johansson, who took legal action against an AI app named Lisa AI: 90s Yearbook & Avatar for using her likeness without permission in its advertising. The app ran an ad that made it appear as if Johansson endorsed the product, creating confusion and potential damage to her reputation. Her legal action led to the removal of the advertisement from online platforms, highlighting the battles celebrities face over the unauthorized use of their digital likenesses.

Meanwhile, the EU’s AI Act, although a step toward regulatory clarity, still lacks specific provisions for the ownership of AI-generated content, indicating a piecemeal approach to these emerging challenges.

Copyright complexities

The challenge with AI-generated content is pinpointing originality and authorship. Given that AI often remixes existing data to create something new, we’re left wondering: is the output original, or just a derivative of its training data? The U.S. “fair use” doctrine provides some leeway for using copyrighted material to create transformative works, but the boundaries of what is considered “transformative” remain a hot topic for legal debate.

Recognizing these issues, some companies are forging partnerships to ensure that original creators are compensated when their work is used to train AI models—but so far, only a small handful of organizations have taken this step. 

Ethical considerations 

Beyond legalities, the ethical implications are profound. If an AI can create a lookalike that almost perfectly mimics a person without their consent, it challenges the very notion of personal identity. This not only sparks a debate over the ethical use of this type of technology but also poses practical risks to privacy, security, and freedom. 

For example, an AI-generated video might show a person engaging in behavior they never actually did, such as attending controversial events or engaging in illegal activities. This kind of misuse can lead to public embarrassment, stigmatization, or severe personal consequences without the person ever having participated in the depicted actions.

Additionally, in severe cases, a digital doppelgänger could be used to access a secure facility or digital accounts, leading to identity theft or unauthorized access to sensitive information. The implications are especially dire in contexts involving national security or corporate espionage.

Risks of misuse  

The advent of digital doppelgängers also brings with it significant risks of misuse. Unauthorized use of someone’s digital likeness for advertisements or political campaigns without consent infringes on personal rights and blurs ethical lines. This not only poses risks of impersonation and fraud but also impacts personal and professional reputations, calling for stringent legal protections to safeguard individuals against such abuses.

For example, there have been widespread concerns about the potential use of deepfakes to fabricate statements or actions by political figures. The fear is that these videos could be used to mislead voters or tarnish the reputation of political opponents, especially as the technology becomes more accessible and convincing. 

Addressing the challenges of AI doppelgängers

As AI becomes increasingly skilled at creating digital doubles that are almost indistinguishable from real humans, the urgency for actionable solutions intensifies. The big question we face is: can our legal systems and personal practices evolve quickly enough to keep pace with these technological leaps?

Revamping legal frameworks 

The rise of AI-generated likenesses calls for a pressing reevaluation of existing legal frameworks. This includes recognizing and categorizing digital personas as distinct entities, which may require new rights and protections. For instance, legislation could be proposed to recognize the creation of a digital double as a “digital birth,” which could grant individuals legal rights over any AI-generated image that closely resembles them.

  • Legislative actions: Immediate and proactive legislative measures are necessary to regulate the use of AI in creating likenesses. Our laws need to clearly outline what counts as unauthorized use of digital images, ensuring that individuals maintain control over their digital identities. One approach could be to establish a registry where individuals can claim and manage AI-generated images of themselves.
  • Consent and ownership: Establishing clear guidelines on consent and ownership is essential. Individuals must have the right to be informed and to consent before their likenesses are used or replicated. This includes making it clear how their data is being used to train AI models, ensuring transparency and control.

Strengthening global cooperation

The global nature of digital technology and AI means that addressing these challenges can’t be confined to any one country. International collaboration is vital to developing comprehensive regulations that protect digital identities across borders.

  • International standards: We need a unified set of international standards and regulations that govern the creation and use of digital likenesses. These standards should aim to harmonize approaches to digital rights and privacy, ensuring protection for individuals worldwide.
  • Cross-border legal frameworks: Efforts should also focus on establishing cross-border legal frameworks that address and penalize the unauthorized use of digital likenesses internationally. This would help prevent entities from exploiting regulatory gaps between different jurisdictions.

How you can protect your digital identity 

While society pushes for better legal reforms concerning AI, there are proactive steps you can take now to safeguard your online identity: 

  • Be selective with your data: Think twice before sharing personal information and photos online. Each piece of data can be used to train AI systems, including creating digital likenesses. Limit sharing to essential instances and prefer secure, privacy-respecting platforms whenever possible.
  • Review and restrict app permissions: Regularly audit the permissions you’ve granted to mobile apps and online services. Restrict access to your camera, microphone, and photo library unless absolutely necessary—and always be wary of apps requesting more information than they need to function.
  • Use data masking tools: Consider using services that mask or alter your photos slightly to prevent AI from accurately using them to create digital models. These tools can subtly change image details in ways invisible to the human eye but disruptive to AI algorithms.
  • Engage in digital clean-ups: Periodically review your online presence. Remove old accounts and unnecessary photos from social media, and consider cleaning up digital footprints that no longer serve a purpose but could be exploited.
  • Advocate for better policies: Stay informed about digital privacy policies and support legislation that protects personal data. Participate in campaigns and sign petitions that call for stricter regulations on AI-generated content and better transparency in how personal data is used.
  • Educate yourself and your community: Knowledge is power. Take advantage of free resources to learn more about AI technology and its privacy implications. Share this knowledge in your community to raise awareness and help others understand the risks and defenses against AI misuse.
  • Monitor for misuse: Set up Google Alerts for your name and regularly search for your images online to see if they appear in contexts you haven’t authorized. Various tools and platforms can help you monitor where and how your likeness is being used.

Can having an AI doppelgänger ever be a good thing?

As we’ve seen, AI-generated doppelgängers raise significant concerns about privacy and the control of personal data. The ability of AI to replicate our very likenesses so accurately presents a host of ethical questions that demand careful consideration and rigorous oversight. But alongside these challenges, could there be positive uses for this technology? Potentially, yes.

AI doppelgängers could, for instance, revolutionize the way we learn and interact with information. Personalized learning assistants, tailored to match the learning styles and paces of individual students, could make education more accessible and effective for everyone. These AI entities could simulate different teaching methodologies to find the one that works best for each learner, potentially transforming educational outcomes.

In terms of mental health, AI doppelgängers might one day serve as therapeutic aids. They could be programmed to provide psychological support, practice conversations, or help individuals develop social skills in a low-pressure environment. For those dealing with isolation or specific mental health conditions, a responsive AI that understands and reacts with empathy could be a significant source of support.

Moreover, AI doppelgängers could assist in professional environments, handling routine tasks and interactions that allow human employees to focus on more complex, creative work. This could lead to increased workplace efficiency and allow workers to engage more deeply with aspects of their jobs that require human insight and creativity.

However, for these benefits to be fully realized without compromising our ethical standards, strong regulations and clear guidelines must first be firmly in place. This framework must ensure transparency in how AI doppelgängers are developed and used, with strict measures to protect individuals’ data and prevent misuse. It’s only under these conditions that the positive potential of AI doppelgängers can be harnessed safely and effectively.

What do you think about AI doppelgängers? Could the benefits of this type of technology ever outweigh the ethical and legal concerns it raises? Let us know in the comments below. 
