Corpora: Literal and Metaphorical | Margaret Fero | Monktoberfest 2022



Inspired by Samuel J. Redman’s book “Bone Rooms,” a history of bone rooms and the role of physical anthropology in scientific racism, this talk highlights strategies that an earlier field pursued in the name of bias mitigation that did not actually reduce bias, and draws parallels to AI bias mitigation strategies in use today that do not actually prevent AI from replicating underlying human bias. Throughout the history of bone rooms, scientists described themselves as mitigating bias even while promoting the theft of human remains from graveyards on the basis of race. In the interest of not repeating that pattern, it’s important for technologists who create or work with AI-based systems to understand the common pitfalls wherein we feel like we’re mitigating bias but are actually just making it more palatable.

Transcript

So I have entitled this talk Corpora: Literal and Metaphorical, and I am going to talk about bone rooms, as well as the book called Bone Rooms, and their relationship to AI ethics. My next slide is entirely content warnings, but before we move on to the content warning slide, I do want to make explicitly clear that while this talk is full of criticisms of things that we are doing as an AI community, they are not criticisms of any particular individual. They are problems with the complex social systems that we live in and that we can all help to change, but there’s not one person putting a bunch of racism into AI specifically.
With that said, this talk deals with themes of racism and scientific racism, mistreatment of remains, and medical abuse. There is a door in the back; I will not be even a little bit offended if you leave and come back. I will be available later if you want to step away but still want to hear some of the stuff that comes afterwards, even if you can’t hear it all in a row. This is a heavy one.
First, I’m going to talk about Bone Rooms, the book, as well as bone rooms, the concept. I’m going to talk a little bit about connections to AI, I’m going to talk a little bit about the concept of decontextualization, and then about how we got here. This is the pause that I promised. This is the last talk before the break, so if you need to step out …
First, a little bit about Bone Rooms. There aren’t a ton of direct quotes from it here, but this book heavily influenced my perspective on some of the problems that I had been working through as an AI practitioner. I was reading 20 papers a week at the time I started reading this book, and it still heavily influences my work. Here we have our first quote from the book: “Scientists, eager for evidence to support their ideas, organized spaces colloquially known as bone rooms. In these spaces they studied the bones in an effort to classify the races and develop an understanding of the deeper human past.” The bones in question here were often stolen or purchased, as were other remains, from people who did not consent to being in these collections. Around the turn of the 20th century, clashes between white settlers and Native peoples were frequent, and this violence became integrally connected with US and European scientists’ pursuit of race science: the pursuit of a theory to justify or explain white supremacist violence.
But I also said that we’re going to talk a little bit about AI today.
First, I’m going to talk a little bit about the etymology of corpus and the different senses in which the word corpus is used, because corpus is a word that turns up a lot in the context of AI and that we talk about regularly. But additionally, corpus refers to the physical body of a person, as well as to a body of work like we use in artificial intelligence. A couple of the definitions that I would like to highlight are sense 1, the body of a human or animal, especially when dead; sense 2b, the main body or corporeal substance of a thing; and senses 3a and 3b: all the writings of a particular kind or on a particular subject, and a collection or body of evidence.
Today we’re going to talk about both of these types of corpus in relationship to each other: physical bodies, in sense 1, the body of a human or animal especially when dead; and bodies of AI training data, from sense 3b, a collection or body of knowledge or evidence, which we gather so that we are able to have nice AI tools that hopefully do good things to help us solve problems and achieve our creative goals.
But in both of these contexts, we’re talking about this material being viewed as learning material. That is much less problematic when we’re talking about words than about people’s physical bodies, but I will argue it is still problematic, based on the ways that the corpora used to train AI systems are being developed. These are also deeply personal to the owners and the families of the material that becomes a part of the corpus.
Also, they’re both a snapshot that can be used to represent or misrepresent a community, a group of communities, or an entire species.
But to talk about the ways that it’s misrepresenting, we’re going to talk a little bit about decontextualization and collection. Here I have a definition of decontextualization.
Think of the last time you explained something technical to somebody who didn’t really have a technical background. I know everybody in this room has done it at least once; maybe it was a relative, a friend, a neighbor, somebody on a bus, somebody you chatted with in an airport bathroom. There is somebody you have gotten stuck explaining something to who had no frame of reference from which to start.
It’s an extremely different conversation than you would have about that topic with somebody who had a shared context with you, and it was probably frustrating for people on both sides of that exchange.
They probably had periods of both being talked down to and being unable to follow what was going on because it was at too advanced a level. This happens with cultures, too, and part of the process of minoritization of cultures is when one group gets to set their context as the default context, where they don’t give that extra information and you just have to follow along. You’re either with them or you’re not. Meanwhile, all of the other groups’ cultures, languages, traditions, and in this case burial practices end up devoid of the context that gives them meaning to the participants.
The power to be the default context is a core part of colonization. And because these bodies existed outside that default context of a certain expected set of burial practices, there were a number of assumptions made: that the cultures that exhibited different burial practices just didn’t care about these bodies or remains, or that they wouldn’t mind if they were used for some greater scientific purpose, like doing more racism.
Decontextualization also occurs in collection. This is a really interesting photo, because part of the photo description in the Creative Commons library I often use said that this was intentionally an assortment of random objects found around the photographer’s office that had been grouped together for no particular reason and presented as a collection.
In the process of collection, you group maybe similar but often unlike objects or concepts together, and you set a default frame, but to some degree viewers are going to control or interpret that frame.
Grouping those objects or concepts together requires that a default frame be set. Contextualizing objects appropriately is a core goal of modern archeology.
However, there will always be some nuance lost when an object is included in a collection or a corpus, and viewers, by virtue of being more attuned to the dominant culture’s objectives, will often apply the dominant culture’s frame even if there isn’t malice intended.
This is something I’m going to come back to again later because I think it’s really important here.
You don’t have to be a bad person, or be trying to cause harm, in order to see collections of information or materials through that default lens, and you do have to put in a great deal of effort if you’re going to avoid that.
So how did we get here? Why were people collecting human remains? And why are we now collecting human cultures?
This quote begins — I only have part of it on the slide — but this quote says: “While not all scientists were as bold or direct in their racist conclusions, they largely supported the scientific and pseudoscientific racism that dominated the era. In many respects, the practice reinforced existing and emerging colonial power dynamics veiled as scientific and social progress.”
There were occasionally some remains collected from white people, especially white people from other countries, where collectors were possibly able to sidestep local laws and regulations in order to have a complete set. But the vast majority of these remains were of Black and Indigenous people of color, and a number of remains in particular were collected from South American countries and then smuggled to the US, which was the primary location of these bone rooms and the museums that promoted this type of study.
This kind of grab-what-you-can approach that was used to compile the initial collections of bones for bone rooms was also how we initially began to develop the large language models that we now use in AI, in the hopes that things like safety or debiasing the training data would be able to come later.
Further research, including the excellent “Stochastic Parrots” paper by Bender, Gebru, McMillan-Major, and Mitchell, has shown that we cannot simply add safety as an extra layer after we’ve already approached collection without context.
Image generators are somewhat newer, so they’re behind, but I suspect that once again, now that at least one of the models has released an open-source version that people can use locally to generate non-work-safe content, we will discover that the biases and everything else that were in the original training set are in fact amplified in those generated images, and that you can maybe reduce the prevalence of some of it, but it’s going to be a huge, huge effort after the fact.
I’ve included some examples of the types of collections that we’re using now in AI. At the top we have a couple of examples of training data, including The Pile, which is exactly what it sounds like, a big pile of a whole bunch of different sets of online content; WikiText, which is text from Wikipedia; as well as various versions of what is called the Standardized Project Gutenberg Corpus.
We also have benchmarks like GLUE and SuperGLUE, and different benchmarks are available to measure all sorts of things. They’re not all bias-focused, but these are some that I’m more familiar with from that angle. There are also collections that form the actual models and pre-training approaches; BERT and RoBERTa are two that are particularly well known and are commonly used for research because they’re reasonably accessible. If you want to view a whole bunch of collections and the ways that they are being used in AI, I really recommend the Hugging Face hub, where you can follow the impact of a particular collection on all of the — really on the AI industry in general.
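(Editor’s note: to make concrete how readily these corpora and models are reused, here is a minimal sketch, not from the talk, assuming the Hugging Face `datasets` and `transformers` libraries; the dataset and model identifiers are the public names for WikiText-2 and BERT.)

```python
# Minimal sketch: pulling a corpus and a pre-trained model from the
# Hugging Face hub. A single call inherits every choice -- and every
# bias -- made when the underlying collection was assembled.
from datasets import load_dataset
from transformers import AutoModel, AutoTokenizer

# WikiText: a corpus of text drawn from Wikipedia articles.
wikitext = load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
print(wikitext[0]["text"])  # inspect samples before training on them

# BERT: a pre-trained model whose weights already encode its training corpus.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
```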
So what now?
This has been a very heavy talk so far, and I appreciate you all bearing with it, but let’s think a little bit more about the current state of AI corpus development, and how we should decide when and how to use AI-powered tools in our workflows. How could this be different? We don’t have to, and should not, just accept the amplification of racism and other biases in our policies and processes, but neither should we throw out AI options altogether. I think there’s still hope, but we’re going to have to approach the problem straight on and do real work to handle these problems.
One of the core things that we need to do is work on building corpora respectfully. The image on this slide shows a group of people who are standing around outside a door of a brick building, and in front there is a sign reading “Community Café” that is pointing in their direction.
The disability rights community popularized in English the phrase “nothing about us without us,” meaning that people should have a say in the decisions and policies that affect them. If you’re in a position to set AI policy or to build corpora of any kind, pay attention to the people on your team who are likely to be most affected if something goes wrong. If those people are not represented on your team, consider that you might not be the right team to build that thing, at least not right now.
I think it’s also important to do our best to avoid decontextualization. Here we see a CD collection; these are all CDs owned by a specific person, so it is a thematically developed corpus, unlike the previous one. As you involve your community more, you can also reduce the decontextualization inherent to your corpus. Consider the specific purpose of any AI tool that you’re building or considering using, and whether it is appropriate, or can be expected to be appropriate, for your intended use case.
Generalized artificial intelligence sounds very flashy, but without guardrails and better vetting practices than the current era of corpus development provides, your tool will reproduce and magnify social biases in ways you don’t want.
But how can you avoid these patterns? Unfortunately there is no easy answer. The best you can do is practice. You’re going to mess up and you’ll need to work very hard to make it right. In some cases you may not be able to make those mistakes right, and all you can do is try to do better and not repeat those mistakes moving forward.
By involving relevant stakeholders, considering the historical implications of your processes and how those may affect their future impact on others, and being careful even when you’re excited, you can avoid the worst mistakes and continually improve as you try to grow with your community.
Oh, no! — well, there was going to be a photo on here of a pretty typical-looking house in downtown Philadelphia. Some of you probably would have recognized this house, either by sight or by the image citation just below it on the slide that is not loading, and if you did, you probably would have known where I was going with this already.
This fairly normal, though somewhat fortified, house was the home of members of the Black liberation group MOVE, which the Philadelphia police bombed with all members inside, about a month after this photo (which I’ll post to Twitter so you can all still see it) was taken. This photo was from April of 1985, and the bombing was in May. The remains of Delisha and Tree Africa were finally released to their surviving brother about two months ago, on August 3rd, 2022, after years of pressure that mounted and increased, especially over the last year. The remains were clearly mishandled, including being stored in cardboard boxes in a professor’s office and in a professor’s home, and they were used for teaching, including in a Coursera course. This was the latest stage of what even the University of Pennsylvania, alongside Princeton, admitted is a legacy of the mishandling of, disrespect for, and mistreatment of the bodies of people of color in the United States, especially Black and Indigenous bodies. This isn’t a talk with a happy wrap-up about how we’ve solved this problem before and how we can do it again, because we haven’t even solved the first form of it yet. We can, however, avoid perpetuating these harms by not repeating the patterns we’ve ignored in the past. Thank you.
[applause]
