AI Headshots vs. My Senior Pictures

Alt + E S V

I was just asked to submit my first speaker headshot and bio for a 2024 conference. Given that my existing photos are a few years old and the event’s theme is around AI, I figured it was worth experimenting with an AI-based headshot generator. What’s the worst that can happen, right?

The results of my HeadshotPro experiment were both humorous and somewhat disconcerting.

The Prompts

Using the website was fairly straightforward: pay $29, answer some questions, upload some photos, get some headshots.

I was asked to define (via dropdown) my:

  • age range
  • eye color
  • ethnicity

Then I chose backgrounds and clothing styles. (I went with brick wall/white top and white background/blue top, hoping that I would land somewhere in neutral territory.)

The system requires you to upload at least 12 images and has various constraints and recommendations around quality and composition. This was slightly difficult for me primarily because my camera roll is almost entirely pictures of my kids. But I pulled together images, all from the last couple years.

I should have taken a screenshot of the input step, but here’s a flavor of my input photos.

nine selfies/headshots of Rachel

The Results

HeadshotPro returned ~120 possible pictures to sort through. You select your favorites of the batch and they provide fully rendered, non-watermarked versions of your top choices. (I’m purposely sharing the watermarked versions here because while I appreciate all you weirdos reading this, I absolutely don’t trust you.)

In the process of sorting through them I encountered the standard AI issues. Six fingers. Weird rendering of photo details on things like shirt collars or jewelry. But beyond any of these specific issues, there were many uncanny valley moments of feeling like I was looking at something that was close to but not quite me.

But also: holy problematic encoding of societal beauty standards, Batman!

Look at how absurdly thin they rendered “me” in some of these photos

3 AI-generated photos of Rachel looking like a bobblehead

Notice how there’s completely unnecessary, AI-generated sexualization

These professional headshots come with generated cleavage and nipples visible through clothing, even though the instructions explicitly tell you not to upload prompt images with exposed skin. Why?

3 AI-generated photos of Rachel in white shirts that reveal too much

Look at how long and lustrous the hair is

And notice that the AI clearly leaned into longer, blonder hair as the desired output, even though I definitely shared prompt photos that included my current darker, shorter hair.

3 AI-generated photos of Rachel with long flowing hair

Notice how the skin in all of the above photos shows no evidence of age at all

There are no pores or wrinkles or rosacea in AI-ville. In fact, there is no aging at all. After I got these results I called my mom and had her dig up my high school senior picture so I could compare them. And because I love you, internet, I am going to share the results.

Senior photo on the left, AI-generated photo on the right

This is 17-year-old me compared with an AI-generated version of what I ostensibly look like now. That’s the same skin! I know I definitely spend more than is reasonable on moisturizers, but the amount of anti-aging in this photo is absurd. This AI is depicting what I looked like half a lifetime ago.

Given that my goal was to see if I could get an updated professional headshot using recent-but-not-professional photos, seeing my 17-year-old self reflected back in the results was not super helpful.

So What Of It?

I know that we are societally conditioned to want to look younger, thinner, and more polished. You can argue that the AI is merely reflecting these societal standards back to us.

These concerns are not new, and in fact come up with every technology wave. See, for example, pieces on the danger of TikTok’s Bold Glamour Filter (2023), the problem with Snapchat filters (2018), 25 years of how Photoshop changed the way we see reality (2015), etc etc.

Over and over again we wring our hands about the societal impact of a given technology, and the answer always seems to be that the problem is not the technology itself but how people use it. Yes, and no.

We build these systems knowing that there is an interplay between the world as it is and the world we’re creating, and we have to consider what we’re encoding into these AI outputs (whether via the training data used, how it’s labeled, how models are weighted, how the algorithm responds to adversarial feedback, etc.).

In the grand scheme of things it’s absolutely not a big deal that my experiment with headshots didn’t pan out. But it is worth thinking about what happens as AI applications move from being a toy we experiment with to something that becomes load-bearing in significant ways. What do we need to be noticing, considering, and changing about AI outputs now if we want to ensure future outputs reflect the society we want?

I leave you with a quote from Cathy O’Neil’s Weapons of Math Destruction:

“Our own values and desires influence our choices, from the data we choose to collect to the questions we ask. Models are opinions embedded in mathematics.”

