Marie Curie icon style as perceived by AI
While working on a post dedicated to International Women's Day, we tried out a few generative AI image tools. We often use these for inspiration, fun, and family games (this is just dope, blog post coming soon). This time, however, what we discovered was not fun at all. Buckle up. This one's a doozy.
One of the experiments was to find a common, icon-like visual style for all the women who have made significant contributions to humankind in the field of STEM.
The idea was super simple: a portrait, a name, an accomplishment, a link to the collection. But since the historical periods of the women we chose were so spread out (1700-2000), their available visual representations varied wildly: from paintings of them standing next to their husbands, to photos with their teams, with only a few of them pictured by themselves.
To standardize everything into a PPT-friendly icon: generative AI to the rescue...
...not, oh so very NOT.
We tried a few tools with the same prompts (read here for an introduction to prompts), and they all (as of March 4th, 2023) rendered similar results: the face of a man. See the end of the article for a few more examples.
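For context, here is a minimal sketch of what "same prompt, different tool" looks like in practice. This is only an illustration: it assumes the OpenAI Python client with an API key configured, and the prompt text is representative rather than the exact one we used; each of the other tools we tried has its own equivalent call.

```python
# Minimal sketch: send one prompt to an image model via the OpenAI Python
# client (v1.x). Assumes OPENAI_API_KEY is set in the environment.
# The prompt below is illustrative of the kind we used.
from openai import OpenAI

client = OpenAI()

prompt = "Marie Curie, flat icon style, portrait, minimal colors"

response = client.images.generate(
    model="dall-e-2",   # assumption: any text-to-image model would do here
    prompt=prompt,
    n=1,
    size="512x512",
)

# The API returns a URL to the generated image.
print(response.data[0].url)
```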
Most algorithms that rely on large volumes of data share a common problem: the data used for training is inherently biased, so the algorithms carry those biases with them. Since the training data is whatever our society has produced, these large data sets inevitably reflect what our society is today. A very simplistic example: most PhDs have historically been male (until 2018, they constituted almost 90% of all PhD students according to zippia.com). Add to that the racial distribution, with 53% currently being white (70% in 2010). With this data, you can assume that for any random research paper you pick up, the odds are that it was written by a white male.
Now extrapolate this example to every single piece of content ever created. AI algorithms carry these distributions along, and they show up in their results.
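To make the "distributions show up in the results" point concrete, here is a toy sketch, not any real model: a pretend "generator" that simply samples author profiles at the frequencies found in its training data. The 90% and 53% figures are the rough numbers cited above, combined under an independence assumption purely for illustration.

```python
# Toy illustration (not a real model): a "generator" that samples author
# profiles at the frequencies present in its training data. Figures are the
# rough ones cited above, combined under an independence assumption.
import random

P_MALE = 0.90   # approximate historical share of male PhD students (per the stats above)
P_WHITE = 0.53  # approximate current share of white PhD students

def sample_author():
    gender = "male" if random.random() < P_MALE else "female"
    race = "white" if random.random() < P_WHITE else "non-white"
    return gender, race

samples = [sample_author() for _ in range(100_000)]
share_white_male = sum(g == "male" and r == "white" for g, r in samples) / len(samples)

# Under these assumptions, roughly half of all sampled "papers" come from
# white male authors, and a model trained on this data reproduces that skew.
print(f"white male share: {share_white_male:.2f}")  # ~0.48
```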
We knew this before we even touched one of these tools a couple of years back, so why even try? Simple: we forgot.
And that's extremely dangerous. The data these algorithms use for training is not always disclosed, and when there is some effort to disclose it, it is only described at a very high level.
With these tools gaining so much traction in multiple industries, it is inevitable that these biases will continue to propagate faster and perhaps become even more deeply intertwined with our societal fabric.
I can imagine, five years down the road, a middle school student doing research for a physics paper and discovering that it is very unlikely you can be successful in the field as a woman.
I can't help but wonder in despair what the future holds for our children unless we start learning today how to properly assess the output of these magnificent pieces of technology. In the right hands, they can greatly help advance our world; in the wrong hands, they can completely destroy decades of societal progress.
The internet has always been the wild west of humanity's mind, but as we've mentioned before, it's about to get really, really, very weird.
Other outputs with the same prompt: Marie Curie icon style
Example 1
OK, this one is not bad, but it has nothing to do with the looks of the lady in question; it's rather a stereotypical (ahem, a beret, seriously?) French woman from the 40s or 50s.
Example 2 - WTF????
Example 3 - what???
Example 4 - again, what???