The Docs are In: A History of AI with Dr. Tobias Wilson-Bates



RedMonk analyst Kate Holterhoff and Tobias Wilson-Bates, Assistant Professor at Georgia Gwinnett College, dig into the history and future of AI. Dr. Wilson-Bates discusses examples of artificial intelligence in Western culture including Mary Shelley’s Frankenstein and Karel Čapek’s Rossum’s Universal Robots. They speak about how large language models and machine learning are disrupting the domains of education, labor, and the commons.

This was a RedMonk video, not sponsored by any entity.

Rather listen to this conversation as a podcast?

Transcript

Kate Holterhoff: Thanks for joining me for this episode of RedMonk’s The Docs Are In. Today, I am joined by Dr. Tobias Wilson-Bates, Assistant Professor at Georgia Gwinnett College. Toby received his PhD in 2015 from UC Davis, and we met at Georgia Tech while completing our postdocs. So thanks so much for agreeing to speak with me today.

Tobias Wilson-Bates: Yeah, thanks so much for having me.

Kate: All right, Toby. So Toby and I share a number of mutual interests. They range across several domains. I’m not going to get into too much of it, but they include 19th century scientific romance and H.G. Wells in particular. And I was fortunate enough to have Toby actually participate in my book launch last year. So thanks for that. I…

Toby: …which was excellent, go buy her book.

Kate: I dragged him back in here, but this time we’re going to be talking about AI. So today we’re focusing on a subject of joint concern, namely the history of technology as it relates to machine learning, LLMs, things like that. So to begin this conversation, Toby, would you mind sketching out some significant landmarks in this space? Where do literary and history-of-technology scholars tend to trace the idea of AI, and how did this concept develop over time?

Toby: So the concept is super duper old, but a lot of people take the 1956 Dartmouth Summer Institute as a kind of origin point. John McCarthy and Marvin Minsky and a bunch of other information theorists and applied mathematicians got together at a thing they decided to call the Artificial Intelligence Summer Institute. And the name was picked basically to avoid a bunch of pre-existing names for information theory and cybernetics, stuff that Norbert Wiener was working on at the time. It was supposed to sound kind of, it’s like a branding campaign essentially. And it’s still interestingly a sort of brand thing now, because when we say artificial intelligence, it’s kind of an umbrella term for a whole bunch of different technologies, feedback loops and symbol systems and this kind of thing. But what’s interesting to think about is that what’s really caught people’s imagination is the idea of artificial intelligence.

Which in the American context is tied up in a lot of complicated ideas about intelligence and eugenics, the idea of what Rodney Brooks, the former head of the MIT Artificial Intelligence Laboratory, calls “human specialness.” The idea that there’s a form of thought and being in the world that’s particular to us, that we might call intelligence, that allows us to do special things. And so there’s something scary or transgressive about the idea that that’s expropriated from us and put inside of a machine. But we don’t really know what it is in the first place, so it’s kind of hard to lose it. It’s a weird thing. It’s a kind of cluster of various social anxieties and concerns that end up inside this word.

Kate: Okay, and then when did the writers get involved in this? Or just culture more broadly? I mean, if it was in the 50s, I assume that we have television also, you know, getting its heel in there. So how did that come about?

Toby: Well, it’s interesting. I mean, like all narrative histories, there’s a lot of layers to it. Because it can go back to Mary Shelley’s Frankenstein, if you want, the idea of artificially producing a thinking being. Or you can go to Karel Čapek’s Rossum’s Universal Robots, which is where we get the word robot from. It’s a Czech play from 1920 by a Marxist writer, who comes up with this idea: what if we created these artificial beings called robots?

And then in the 1920s and 1930s, as radio takes off, you get a lot of different kinds of cybernetic or artificial life or artificial mind stories. But artificial intelligence, I guess, marks a moment where we can think about this as applied statistics, as stuff that’s really existing in the world that we’re actually using for various kinds of technological purposes. But it’s difficult. The stories have been going on kind of forever, I guess. Isaac Asimov writing the I, Robot stories in the 1940s and ’50s would be the other kind of watershed moment of this becoming narratively accessible to a large audience.

Kate: Hmm. I like that idea of narrative accessibility, because it’s not always just the first text that comes out, it’s who popularized it, right? Where is it getting its hooks into the broader imagination? And of course, you know, I think we’re kind of speaking about the West here. I think what’s been so interesting about technologies like ChatGPT is the global reach that occurred. So I think it’s important, when we talk about the cultural resonance of these technologies, to put some boundaries around who was included in those conversations and maybe who’s excluded.

Okay, very cool. So you’ve mentioned the anxieties that this taps into. What would you say are the significant anxieties that these early precursors to today’s large language models reveal, culturally and historically? And how did culture address them? Fiction, television, how did all of these texts engage that tension?

Toby: Yeah, so I mean, as I said, Rossum’s Universal Robots is really the kind of quintessential text, because it stages the scenario people often refer to when they think about fears concerning artificial intelligence or robotics, what we’d now call the singularity. The idea that some machine will gain consciousness, and then there’ll be some kind of runaway event where lots of machines gain consciousness, or machines stage this kind of rebellion, and then a genocide of humans, which, you know…

Spoilers: that is how Rossum’s Universal Robots ends. They’ve wiped out all the humans. And so a lot of these anxieties emerge, in the Western context, out of things like fears of slave revolt, because there’s a long history of conflating machines and slaves, because these are working objects that, hypothetically, don’t have any agency.

But there’s an enormous fear that those machines will kind of come back and destroy their masters. But later, by the 20th century, a lot of it has to do with kind of proletarianization, the idea that the working class will seize the means of production. And so there’s this fear that the things that work for us will eventually rise up and kind of destroy us or take from us, that kind of stuff.

We can see it now where people are so worried about artificial intelligence displacing skilled labor in the Hollywood strikes or in education or in manufacturing. This idea that we’ll lose our jobs, we’ll lose our livelihoods and our means of living because of the machines that are replacing us.

Kate: Right. So we have these fears of death, we have fears of agency, fears of determining humanness, and then labor. I mean, labor seems to come up again and again and dates to the inception of depictions of robotics in media. Yeah, very interesting.

Okay, so let’s switch centuries maybe a little bit and start talking about the state of AI today. So as someone who’s studied the development of this idea, what have you noticed about the conversation surrounding large language models and machine learning? Is there anything about this discourse that resonates with historical precedents, particularly in the domains that you study?

Toby: Yeah, so historically, I think the thing to really think about is the enclosure of the commons. So in British literary history, if we’re going back to H.G. Wells and these folks, or in United States history, we could think about it as manifest destiny and the capture of the West or something like this. But there’s all these resources that exist in the world that are not profitable resources. They’re just kind of common spaces. You could think about it like a public library. There’s lots of resources there, but there’s no value being extracted. You can go there, you can sit around, you don’t have to buy anything, you don’t have to pay for anything. So there’s no kind of labor extraction. There’s no surplus value generated by that relationship. But if you go to a bookstore, maybe that’s a bit more of a site of extraction. Or if you go to Amazon, even more of a site of extraction.

You know, the idea of producing mass profits out of these activities like reading. But the internet is perhaps our most profound public commons at the moment, where people go just to fart around, play on social media, this kind of thing. And the question for the last couple decades has been: can we use this? Can the information of the internet be aggregated and turned into some kind of tool, both to maybe sell people things and to use all the creative expression that happens there? And what’s happening right now is, maybe? That’s possible? These chatbots and so on aggregate billions and trillions of points of information in order to generate artificial speech or artificial thought, or the kind of simulacrum of those things, as if skilled thinkers were producing them in the world. And that’s troubling in all kinds of ways, because it potentially allows them to replace skilled labor with the simulation of labor. So very profitable, potentially. But also, everybody who used to do that labor is now displaced and no longer has access to the production of that value. Which is troubling in part because it means something like the internet, the kind of common space we all inhabit, is shifting from its existence as a public commons, where we all share in something like social media, into more of a site of extraction, where information that goes up there, let’s say you write fan fiction, is now being mass aggregated to produce artificially generated fiction, or news, or something like that.

Kate: Yeah, I like that description. When I was doing more digital humanities work, that idea of the commons really drove a lot of my work. And it made me really question: what does it mean to make things public? And how does the internet both facilitate and sort of block this democratic impulse that I think was the promise of the internet? Okay, so, you know, we’ve heard a lot about AI in the classroom, and I’m hoping you can speak to that a little bit as an instructor. I mean, I left the ivy-covered walls about five years ago now? I can’t even remember. So I am out of touch and I want to hear what’s going on. What are you seeing at universities, and how is AI going to affect education?

Toby: So there’s two tendencies that are kind of crashing into each other a little bit at the moment. One is this attempt to push down on the value of higher ed labor. And this has happened a number of different ways. After the pandemic, we’ve emerged with a lot of models for online learning, so a lot more colleges and universities have online offerings now. And the interesting thing about an online course is that, depending on how it’s run, you could potentially enroll thousands of people in a single class. So instead of having one instructor for every 25 students or something, you might have one instructor for 3,000 students, which is obviously a real saving on instructors. And even in the classroom, you see more and more people having insecure jobs, contingent positions, larger class sizes, less freedom, less support for research, this kind of stuff.

So on the one hand, there’s this kind of pushing down on what it is to work in the education sector: less money, less freedom, less autonomy, more students, more labor relative to the number of students in a room or on a computer. But on the other hand, suddenly you have this ability to generate sort of false prose. Students can very easily generate an essay that sounds-ish like an undergraduate essay; it could just take a couple seconds on ChatGPT. Under a system in which you have 10 or 15 students and you’re seeing their writing all the time, that’s not a very big deal. But under a system in which you’re online and you have 3,000 students, it is essentially impossible to police that effectively. And so you keep having this kind of daisy chain of technological solutions for technological problems, where now there’s more and more policing software to try to catch artificially generated essays, but that policing software creates its own set of knock-on problems. It tends to penalize non-native speakers in all kinds of ways, because certain prose patterns that machines produce, non-native speakers also produce. So there’s a lot of problems creating problems, basically because the model of education in general is shifting at the moment.

Kate: Yeah, that’s huge. I know it’s problematic to ask an instructor how the students are feeling, but do you have a sense of how the folks that are taking your classes are handling this big shift to AI? How do they feel about it?

Toby: They’re tired. They’re very tired. And that really gets to the core of the whole thing. A lot of my students, at a pretty working-class college, are working at least 40 hours a week in order to pay rent and make car payments. Sometimes they have children, this kind of stuff. And so often, what I find is the problem on the student side is that they may not have time, or they might feel like the competence required to build up their ability to write an essay isn’t something they have the support to generate. And so often for students it’s a kind of pragmatic solution: I actually can’t pass this class, and I need to pass this class to get this degree. And even the amount of support they can get from an instructor, if the instructors are also overwhelmed and overworked, means there’s real disconnects in how students are supported. And I think that disconnect is where cheating can potentially sneak in, because the students feel like, I need to figure this out, I can’t figure this out with this level of support, so I need to bring in this tool that’s just out there and readily available. So yeah.

Kate: Right. And how are you drawing the line between cheating and just supporting writing? I mean, I think that’s where the complexity comes in. Does your university have an academic statement about what that looks like? Or are you determining it on a class-by-class basis? Does each instructor have their own policy?

Toby: Yeah, the university policy is really just sort of anti-litigious. It’s all about kind of protecting the university and protecting the kind of value of that particular credential that the university sells. But on a class-by-class basis, it really interestingly depends on the instructor. It’s often very personal to the instructor. Some instructors take it very personally when people cheat in their classes.

And they kind of tailor these statements about the value of education or the value of time or honesty or trust. Or it’s an interesting exercise in even why we’re in a classroom and what we owe to each other in a classroom and who we are as people in a classroom. Like so many technologies, it just kind of brings to the fore a lot of the questions that were always there anyway.

Kate: Right. I don’t envy you in having to make these determinations. It sounds extremely complex and just kind of… Yeah, talk about being tired. Yeah, it sounds exhausting. Okay, so…

Toby: It’s a bit.

Kate: Well, I know you’re up for the challenge. All right, so let’s talk about some of the scholarship around AI. So Samuel Bowman published an article last year entitled Eight Things to Know About Large Language Models. It includes a number of really remarkable insights. But the one that really shook me was that experts are not yet able to interpret the inner workings of LLMs.

So in essence, what Bowman is arguing, and I’m going to quote here, is that “because modern LLMs are built on artificial neural networks, and there are hundreds of billions of connections between these artificial neurons, some of which are invoked many times during the processing of a single piece of text, any attempt at a precise explanation of an LLM’s behavior is doomed to be too complex for any human to understand.” Okay, so that’s the end of the passage. I’m curious about how you approach that sort of information. What would you say are the implications of AI being a black box and essentially unknowable? I mean, has history given us any tools to grapple with this difficulty?

Toby: Yeah, I mean, I think history has almost always dealt with this kind of thing, which is essentially mysticism, you know, the idea that we live in this kind of unknowable universe. And there’s always a little bit of concern in my mind. Arthur C. Clarke has that quote, and this is a paraphrase, that any sufficiently complex machine will appear like magic to someone who doesn’t understand its inner workings. And so there’s a little bit of an issue with saying, well, we don’t understand what’s happening inside. It’s like, well, we also can’t count to infinity. Just saying something is very big and beyond our ability to parse doesn’t mean much in the grand scheme of things. We don’t know what happened before the Big Bang, or we don’t know exactly the atomic weight of certain particles or their positionality. It’s very easy to be confused.

The question is, if we’re confused, if we feel overwhelmed by a technology or by an idea, well, what are the effects of that idea in the world? And I think one of the things with artificial intelligence is that it’s relatively clear how it’s being employed at the moment: it’s a question of labor or enclosure or copyright. And so one of the ways this appears is, let’s say the model scrapes all of Reddit or something like that, and then generates a science fiction story. This is a problem. Clarkesworld, the science fiction magazine, had to shut down submissions the other day because tens of thousands of people were submitting artificially generated science fiction stories. And so you’re like, okay, we’ve scraped the internet, we’ve created a fake science fiction story, and then we’ve submitted it to a magazine or something like this. One of the things that is unclear in the model, and in fact this is sort of the point of the model in some ways, is that that’s actually probably violated all kinds of copyright. It’s stolen bits of information and patterns from people who’ve been writing stuff on the internet that’s their intellectual property. But the mechanism, in shuffling this information so profoundly, also shuffles our ability to track copyright ownership or intellectual property. And so you’re able to remove the intellectual property of an individual into the machine.

And that’s then what generates profit when you try to sell that story. It’s kind of a shell game: we don’t actually know how this thing works, but you get to make profit and I don’t is sort of the outcome of the thing. And this is true across pretty much all of these technologies. It’s one of the concerns with OpenAI more broadly and chatbot technologies: these were proposed as public commons, but then they’re bought by Microsoft for tens of billions of dollars.

But what’s actually being bought is the ability to remove the intellectual property and labor of people who have been, you know, laboring in the public commons. That’s why Microsoft is willing to pay tens of billions of dollars: because it’s getting at these reserved pools of value that have otherwise been hard to tap.

Kate: Right, right. Okay, so with that, I’m going to wrap up this conversation. I feel like we could go on about all these subjects indefinitely, but I’m deeply grateful to Dr. Wilson-Bates for sharing his expertise with us today. If you’re interested in following more of his insights, he is a Twitter/X celebrity. I’m going to include his social media handles in the notes. And with that, the Docs are out.

Toby: Thanks so much. Thanks so much for having me.
