One of the most common attitudes toward AI today is so-called "doomerism," the idea that AI technologies are inevitably fated to present an existential risk to humanity. This talk takes that idea head-on, systematically examining the theoretical risks versus the reality on the ground, and taking a skeptical but thoughtful view of how we balance the potential of the technology with the varying risks AI may, or may not, represent.
Hey, everyone, I'm Bryan Cantrill; it's great to be back. Steve, you often say this is an opportunity for us to have those conversations, to hear those presentations, that we never see or hear anywhere else, and that are very important. What a terrific lineup we've had today: a lot of really important conversations, important presentations. So, you know, we say that MonktoberFest always starts with a tweet. Certainly most of my presentations here have started with a tweet, or I guess a post on X now. In truth, they actually start with my being trolled, and this year is no exception. This is not a single tweet; I'm just picking a representative tweet from what is actually a whole flotilla of internet garbage that I have been unspeakably trolled by, and I'm sorry that I've been so trolled that I'm going to drag you down with me, into a little bit of a rabbit hole. Of course, it's post-Musk Twitter, so we can't even fit a tweet on a screen anymore; I literally can't even be trolled properly anymore because I can't fit the thing on a screen. There were other parts of it, but the gist is this: this is a well-meaning person who has reluctantly come around to supporting the proposal we saw in the spring to pause all AI. AI is scary, and we must pause all AI research, and this person is saying, this is going to be extremely unpopular in my circles if we pause AI. In fact, I share the aesthetics of e/acc. If you don't know what that is, I'm really sorry to have to tell you; this is one of those things on the internet that are terrible, so don't blame the messenger. It's something called effective accelerationism. Imagine that Adam Smith and (who) had a baby. I'm not going to talk about that. This person leans Libertarian, as in generally extremely anti-regulation.
It’s also like anti-Health Department and anti-OSHA, and I hope you appreciate the advantages of — blah, blah, blah, for progress, yay.
But this gets to the gist of what I want to talk about. "Intelligence is the most powerful force in the world," this person says, "and we're about to give a nuclear weapon to everyone on Earth, without giving it much thought, because of vibes."
"I'm no expert," he says. And I understand the desire of Millennials and Zoomers to use nuclear weapons metaphorically, but I grew up in the era of actual nuclear weapons, and this is not meant as a metaphor; this is meant literally. There is a literal concern that we are giving everyone a literal nuclear weapon because of vibes, because "no substantial counter-argument has been offered to the existential risk concerns." Existential... what existential risk concerns? So I apologize if I'm the one who has to break it to you (hopefully I'm not), but there is a disturbing outgrowth of the internet that believes that these computer programs we are developing pose an existential risk to humanity: extinction risk. And does the word extinction mean something? Is this a metaphor? No, no, we're talking literal extinction at the hands of AI. And "no substantial counter-argument has been offered to the existential risk." Well, no substantial counter-argument has been made that Harry Potter is not fiction, either.
"Not serious people." We'll take that apart in a second. And "it's increasingly obvious AGI is very close; in fact, I know multiple researchers from top labs." To be clear: "there may be a 95%+ chance that AGI goes extremely well, but 5% chance of annihilation is much too high. What's the big rush?"
OK, where do these numbers come from? This is a popular idea: what is your P(doom), your probability of doom? "My P(doom) is 5%." And there are actual, serious people engaging in this discussion without, I would say, any serious thinking. Kevin Roose is a reporter at The New York Times; he says "my P(doom) is about 5%." Where does 5% come from? "And I hope I'm wrong."
So I want to take this apart. First, I want to talk about the word "serious"; this tweet uses that word about three different times, and I understand that, but it's honestly not clear what "serious" means in the context of someone who's equating computer programs with nuclear weapons (these are not the same thing), or accusing anyone who disagrees with that assessment of "just vibes." We're talking about human extinction here. Can we have a little more reverence for our shared ancestry, please? 5,000 years ago, 10,000 years ago... wait a minute, sorry, what? Do you not understand? We colonized this planet from the Arctic to the Pacific Ocean to Easter Island, and you're worried about extinction? Let's have some reverence here. So there's a serious question: why should we treat this seriously at all? The fear of extinction at the hand of a computer program is so absurd that at some level it does not merit serious consideration; it is an outlandish claim that is not well supported by anything. Now, one reason it's treated seriously is that it could be a serious concern. Fear of technology is not new, and it's not always poorly placed. We saw an excellent talk this morning about Fukushima, and clearly there are reasons to consider the position of technology: fears can be very legitimate, and new technologies can often have unknown consequences, so the seriousness does bear consideration. But there is another reason to consider this seriously: the folks who believe in AI extinction risk are advocating for things that are scary. They are advocating for the AI pause. If you read seriously what the AI pause is, it is brazenly authoritarian. It has to be. It has to be.
And the folks who are talking this way are talking about restricting what a computer program can do, which is pretty scary, and about violating what many people view as natural rights; that is one step away from violence.
And indeed, disconcertingly, this rhetoric does escalate to violence. "Yes, we should bomb their data centers; in fact, we should preemptively strike their data centers." It's like, whoa, whoa, whoa, turn it down; we're talking about a computer program here. "But no one offers a serious counter-argument." All right, here I am: please don't bomb the data centers. Can we please, please, please turn it down? This rhetoric is hysteria. You're scaring the children.
We need to actually talk about it. So let's talk about this seriously for a bit, and then I want to get to what this actually has to do with engineering. When you take apart these AGI extinction fears, where are they coming from? Well, you could start with the fear of the AI getting ahold of nuclear weapons, and in fact, with nuclear weapons, it turns out we have a lot of people thinking about that. A nuclear weapon does a lot of damage; that's pretty clear, and really serious people spend a lot of time on nuclear non-proliferation. So you're not really allowed to just say "the computer is going to take control of nuclear weapons."
Well, then we land on two things. One: "I think a computer program can make a novel bioweapon." That reflects some kind of misunderstanding of how complicated a bioweapon is. I would encourage people to read a book called Biohazard, about the Soviet Union's bioweapons program; I think a novel bioweapon is actually a lot harder than people assume. Or two: "the computer program is going to develop molecular nanotechnology." This again! If you've ever heard the term "gray goo": this was an idea from the late '90s about how we're going to have these nanobots, and the military gets very excited when they hear all this, gray goo, and a cow takes grass and turns it into... and it got people very, very excited. As a young technologist, molecular nanotechnology sounds interesting, and I am embarrassed to say that I read an entire book on nanotechnology before I realized that none of it had been reduced to practice. Nothing had been built at all. This was all just "could we? maybe?"; it was all effectively hypothetical. As it turns out, there are a whole bunch of reasons why molecular nanotechnology is really, really hard. Is it wet chemistry or dry chemistry? It's in water, and it immediately takes off... "I think it's dry chemistry." Oh, you're going to invent a new branch of chemistry, when we've been working on the one we have for 400 years? That's good to know. Oh, Jesus, nanotechnology is back again. So when people talk about these extinction risks, I think it's a bit ridiculous, but they do have something in common that's really important, and it's this idea of super-intelligent engineering. The AI will make weapons: that is how it will slay us, how it will lead to our extinction, by actively killing not just some humans but all of them. You have to kill all of them. That's a lot of humans to kill. So you, AI, you computer program, you're going to make weapons. Let's leave aside the gazillion questions that that assertion gives rise to.
How about the means of production? As my daughter is fond of saying whenever this comes up: the AI has no arms or legs. And that's a deeper point than you might realize; the lack of arms and legs becomes really load-bearing when you want to kill off all the humans. And also, also, also: let's leave aside human attackability. I mean, can you imagine? Honestly, it's kind of fun to fantasize about. Things feel so fractured, and we're constantly in all these tribes as humans. Can you imagine if we were all united by the cause of fighting the computer program? It would be awesome. It would feel unequivocal. There's no war crime against a computer program; let's go to town on that thing. We would take all of our considerable human powers and focus them on the computer program. Now I'm getting excited. But let's ignore that, ignore the gazillions of questions, because I want to focus on one specific thing: what it takes to actually do engineering, which is to take the constraints of the physical world and of mathematical reality and make things. That is engineering. And the question you should be asking, if we're afraid of super-intelligence and super-intelligent engineering, is this: is engineering an act of intelligence alone? Is that what makes an engineer? Now, despite what I already said, I really can't speak to novel bioweapons, and I can't speak to molecular nanotechnology other than as someone who was kind of burned by it as an amateur in the 1990s.
But I can speak to this computer company that I started. I started a computer company four years ago, and I'm actually really glad that Mike Olson is in the room, because Mike and Stephen shared a very important property when I first pitched them on Oxide. They both, independently (I don't know if they talked to one another), had the same reaction, so I can fairly say that this happened: hard belly laughter when I said what I wanted to go do. And it's hard to emphasize this enough: this was not derisive laughter. It was kind of a glorious laughter, a "of course you are, you delightful fool! This is great" laughter. And it was great, because we were the first computer company in really a generation, trying to deliver the innovations of the hyperscalers to the mainstream enterprise. This is an actual tweet from one of our engineers at Oxide, and this is a literal map of what we had when we needed to figure out what the network was going to look like. There's no cabling in here, because all the cabling is on a cabled backplane. And we went from this to this, and it works and it boots and it's awesome. You might look at that and say: that's awesome, that's great, it's right there, it's working. And yes, but that glorious machine hides from you the pain, the near-death experiences, the failures, over and over and over again, on the way to getting here, because there is so much failure that it took to get here. It was more than an act of intelligence. This is, if I can say so, hard: this is a new computer, with our own network design; we did our own switch, we did our own backplane. It is easier than a bioweapon and it is easier than molecular nanotechnology, and I also think it's something the AI would covet. One of my favorite first replies when we tweeted out our first rack was "reported for pornography," which I thought was great.
And I gotta believe that our AI overlords were looking at that, thinking: yeah, I want that rack. So I think the AI is going to want to build one of these for its own, and I've got some news for it: it's going to be really hard. So I want to explore, with my apologies in advance, especially to Norma, some of these failures that we just whizzed past; I want to show you a bit of what the complexity looks like. First: the CPU would not come out of reset. Board bring-up is a very delicate task; you buzz out the board to make sure that you've got connectivity where you expect it and no connectivity where you don't expect it before you power it on. And here, the CPU was knocked out in reset. The CPU needs to come out of reset to do work; a CPU not coming out of reset is a serious problem, and it's a black hole: we don't know where it is. After 1.25 seconds of inactivity, it would reinitiate its power sequencing. And the power... the power looked really, really good. Actually, we found places where it was only really good and not extraordinary; maybe that's the problem? We made the power extraordinary, and it still didn't come out of reset.
But we were all looking at our design, looking at what's happening, and it's like: yeah, this CPU should be coming out of reset. Where is it? So we started doing these almost blind experiments, and these experiments were really hard, because they basically boiled down to taking some of these non-connected pins and connecting them, and that's really painful. This is a photo that I took of one of these experiments. This is a wire here; that's a dime (fortunately, in the age of Venmo and whatever, I don't know if people know what a dime is anymore, but whatever), and this wire is fairly large relative to the dime. This is a keyboard reset signal, and we're taking it out to ground, even though we knew it was non-connected for us and this should not affect anything. And it wasn't it, and it wasn't it, and it wasn't it, and we kept on looking, and this is weeks of desperate debugging: if we can't fix this, we have no company. And we actually discovered that our voltage regulator had a firmware bug. In particular, the CPU asks for voltage to be adjusted, and we have a terrific regulator, and it sets the voltage to exactly what the CPU needs, but then it neglected to send the CPU the packet that says "I did it." So the CPU is like: I don't know what your problem is; I'm going to reset after 1.25 seconds. And we're like: no, we did it! Look at the voltage, it's so good, why are you...? And importantly, AMD's tool for verifying this, SDLE, a super-cool tool that plugs into the socket and shows you all the power margins, was telling us: your power looks great. Yeah, but would you mind checking whether the regulator acknowledged? And boy, when we discovered that and we corrected that firmware (ironically, one of the very few pieces of firmware that we did not write, but it had the bug), it came out of reset. And we're going to live! Well, we're going to live until we get to the next one. Extensive, extensive validation, and again, same thing: going through the design, the vendor looking at it, saying, I don't know, it should work.
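To make that failure mode concrete, here is a minimal sketch in Rust of the handshake as described. Everything here (the type names, the watchdog constant, the shape of the exchange) is invented for illustration; the real CPU-to-regulator interface is more involved, and this does not attempt to model it faithfully.

```rust
// Minimal model of the bug: the regulator adjusts the rail correctly but
// never sends the completion packet, so the CPU sees only inactivity and
// restarts power sequencing after its 1.25 s watchdog. All names are
// invented for illustration.

#[derive(Debug, PartialEq)]
pub enum CpuState {
    OutOfReset,             // the ack arrived; bring-up proceeds
    PowerSequenceRestarted, // watchdog fired after 1.25 s of "inactivity"
}

pub struct Ack; // the "I did it" packet the buggy firmware never sent

pub struct Regulator {
    pub sends_ack: bool, // false models the firmware bug
}

impl Regulator {
    pub fn adjust_voltage(&self) -> Option<Ack> {
        // The rail is set correctly either way; margins look perfect on a
        // scope. Only the acknowledgment is (possibly) missing.
        if self.sends_ack { Some(Ack) } else { None }
    }
}

pub const WATCHDOG_MS: u32 = 1_250; // the 1.25 s timeout from the story

pub fn power_on(reg: &Regulator) -> CpuState {
    match reg.adjust_voltage() {
        Some(Ack) => CpuState::OutOfReset,
        // No ack: from the CPU's perspective nothing happened, however
        // good the voltage actually is.
        None => CpuState::PowerSequenceRestarted,
    }
}
```

The point of the sketch: nothing in the CPU-visible state distinguishes "regulator failed" from "regulator succeeded silently," which is why every power measurement looked perfect while the system stayed in reset.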
I have a terrific EE team. As a quick aside, we are a child of the pandemic as a company, and we've got these former GE engineers who are terrific. One of these engineers, one of the most optimistic people I've ever met in my life, a superlative engineer, says: I'm going to rework this. And we worked on it for an entire week and could not get it out of reset. In conclusion: it should be coming out of reset. It should be coming out of reset! As an act of desperation, we started taking a working add-in card and removing parts, and we discovered that if you change one of the pin-strap resistors, you change the card's behavior. The vendor says you should have a 2 kΩ resistor; we needed a stronger one. Reworking that resistor resulted in a working card. We're going to live, we're gonna live! Until we get to the next one.
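For the curious, the pin-strap failure can be sketched as a voltage-divider problem. This is a hypothetical model with invented values (the 3.3 V rail, the 2.0 V threshold, the leakage resistance); it shows only the shape of the issue: a pull resistor that is too weak loses to the pin's internal leakage, so the strap samples the wrong logic level at reset.

```rust
// Hypothetical model: at reset, the device samples a strap pin whose level
// is set by an external pull-up fighting an internal leakage path to
// ground. All component values here are invented for illustration.

/// Divider between the 3.3 V rail (through the external pull-up) and an
/// assumed internal leakage resistance to ground.
pub fn strap_voltage(pull_up_ohms: f64, leak_ohms: f64) -> f64 {
    3.3 * leak_ohms / (pull_up_ohms + leak_ohms)
}

/// The device compares the strap against a logic-high threshold (assumed
/// 2.0 V here) exactly once, at reset.
pub fn sampled_high(pull_up_ohms: f64, leak_ohms: f64) -> bool {
    strap_voltage(pull_up_ohms, leak_ohms) > 2.0
}
```

With these made-up numbers, a 2 kΩ pull against 2 kΩ of leakage reads low and the device comes up in the wrong mode, while a stronger (lower-resistance) pull reads high; that is the shape of what reworking the resistor changed.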
Transiently failing to train all PCIe lanes. The NIC is coming up some of the time, but not all of the time. Sometimes it comes up with some of the lanes, sometimes all the lanes; sometimes the link speed is right, sometimes it's wrong, and this is happening way too frequently to have a product. A grueling, grueling bug. Ultimately we were able to decode the Link Training and Status State Machine, and what we discovered is that this card needed a second reset, PERST, PCIe reset. It needed to be reset twice, and then it would operate. How does this work on anything else? Well, as it turns out (and the hardest problem that we have solved is not having a BIOS; we do all of our own lowest-level platform enablement), the legacy BIOS has had a double PERST in it for several generations, probably to work around a device from literally the '80s or '90s; Chelsio verified that they have had this workaround for their NIC for 19 years. Gonna live. So now things are going well, and we get a new revision of what we call our shark fin. This is a board, and nothing works. And Glen, being our poor layout engineer, Glen is like: I swear, the changes I made in the layout are trivial. And everyone is like: go into the layout, go into the schematic, what did we change from the C rev to the D rev? And we very quickly determined that no, actually, what's wrong is that we've got the wrong part on it. A bad reel was loaded, and the part that was laid down was inverted; the part is correct in a different design that needs that part oriented that way. What it ultimately adds up to (and by the way, the other bugs took weeks; this one was debugged in like 80 minutes, it's amazing) is that we had to rework 1,200 boards in 96 hours. Huge credit to our board manufacturer there; that was a huge team effort to get all those boards out and reworked.
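The double-PERST workaround can be sketched as platform-enablement pseudocode in Rust. The types here are invented; the one behavior carried over from the story is that the device's link only trains reliably after PERST has been asserted twice.

```rust
// Invented model of a device whose link-training state machine only
// settles reliably after PERST# has been asserted twice, as legacy BIOSes
// have apparently done for generations.

#[derive(Debug, PartialEq)]
pub enum LinkState {
    Flaky,   // sometimes trains, sometimes wrong lane count or speed
    Trained, // all lanes up at the expected rate
}

pub struct Nic {
    perst_pulses: u32,
}

impl Nic {
    pub fn new() -> Self {
        Nic { perst_pulses: 0 }
    }

    pub fn assert_perst(&mut self) {
        self.perst_pulses += 1;
    }

    pub fn link_state(&self) -> LinkState {
        if self.perst_pulses >= 2 {
            LinkState::Trained
        } else {
            LinkState::Flaky
        }
    }
}

// Platform enablement: pulse PERST# twice before relying on the link.
pub fn bring_up(nic: &mut Nic) -> LinkState {
    nic.assert_perst();
    nic.assert_perst(); // the crucial second reset
    nic.link_state()
}
```

The design point is that the workaround lives in platform firmware, not in the device: once the bring-up path always resets twice, the flakiness disappears for every such device behind it.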
And finally, not to pick on hardware specifically: data corruption week. When you have a bug that vexing, where you have absolutely no idea what it could possibly be, one of the important things that I have learned over the years is to add measures to the system that let you catch the corruption closer to where it happens. It's like: oh, the problem is all over the place, everything is corrupted all of the time. So we added checks at higher layers of the system, and by doing that, we saw why there was rampant corruption. This was a very scary problem, and a great example of what it means to get a team of debuggers on something. In particular, what we were seeing is that the mapping of this particular page would flicker: the mapping would change from one moment to another, and it absolutely made no sense. And it actually took someone who's a little bit less of an expert to be looking at it and say: these addresses are all kind of similar numbers; is that something? I don't know, probably not. And... wait a minute, that's definitely something. So it turns out there's a chip behavior; some would say bug, AMD would say it's not a bug, the chip has a different belief. I'm trying to use "I" statements here. The question is: what should the architectural ramifications be of a mapping that software creates but never loads through? In particular, the chip would see this mapping, through which we never do a load, and it would get super-excited: oh, you might do a load! I'm going to load through that mapping. Which is, like, fine; keep your pants on, you've got to be cool. But what it did do is that that speculative load would allocate in the TLB, and that is extremely bad. It completely upends your idea of reality. Addressing that fixed the problem, and then we and AMD had some conversations about our differing expectations. And what do all these have in common?
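A minimal model of that speculative TLB fill, with everything about the hardware invented for illustration: the point is only that if the translation cache can be populated by a load software never issued, then remapping a page without invalidation yields a stale translation, which is exactly the "flickering" mapping.

```rust
use std::collections::HashMap;

// Invented, simplified single-level page table with a software-visible
// model of the TLB; virtual and physical page numbers are just u64s.
pub type Vpn = u64;
pub type Ppn = u64;

pub struct Mmu {
    page_table: HashMap<Vpn, Ppn>,
    tlb: HashMap<Vpn, Ppn>,
    // Models the surprising behavior: merely creating a mapping can cause
    // the hardware to cache its translation.
    pub speculative_fill: bool,
}

impl Mmu {
    pub fn new(speculative_fill: bool) -> Self {
        Mmu { page_table: HashMap::new(), tlb: HashMap::new(), speculative_fill }
    }

    pub fn map(&mut self, vpn: Vpn, ppn: Ppn) {
        self.page_table.insert(vpn, ppn);
        if self.speculative_fill {
            // No load was ever issued through this mapping, but the
            // translation lands in the TLB anyway.
            self.tlb.insert(vpn, ppn);
        }
    }

    /// Software changes the mapping without invalidating the TLB,
    /// believing the translation could never have been cached.
    pub fn remap(&mut self, vpn: Vpn, ppn: Ppn) {
        self.page_table.insert(vpn, ppn);
    }

    pub fn translate(&mut self, vpn: Vpn) -> Option<Ppn> {
        // A (possibly stale) TLB entry wins over the page table.
        if let Some(&ppn) = self.tlb.get(&vpn) {
            return Some(ppn);
        }
        let ppn = *self.page_table.get(&vpn)?;
        self.tlb.insert(vpn, ppn);
        Some(ppn)
    }
}
```

With speculative fill on, a remap silently serves the old physical page; with it off, the same sequence translates correctly, which is why software written to the "sane" model corrupted data on the real chip.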
Each one of these issues (and by the way, this is such a small sampling; there are so many of these) posed an existential risk for what we were doing, for the artifact. If you don't fix this bug, you have nothing. Now, to put it in context, Tokyo is not in danger, to contrast it with Fukushima. I want to be clear about this: Tokyo is fine, everyone is going to live. But you, Oxide, might die. So these were existential risks for us. And in these cases the documentation was actually incorrect; the documentation would mislead you, because what you were looking at was an emergent property. And the breakthrough was often trying something that should not work, something that a hyper-intelligent super-being would not suggest, because by all reason it would not work.
If it needs to be said, intelligence alone does not solve problems like this.
Our ability to solve these problems was not about our collective intelligence as a team. We've got a terrific team, but it took a lot more than just intelligence. In particular, for these problems and so many like them (and I'm sure for you in your own organizations), we had to summon the elements of our character, not our intelligence: our resilience, our teamwork, our rigor, our optimism. We talk about superintelligence, but I don't see AI having teamwork, and we absolutely needed teamwork. And curiosity: why is that wrong? That data corruption problem started because someone saw something that looked wrong (that data is not right), and it was their curiosity that led them to this burning coal fire underneath the surface. That curiosity is really, really important. These are human attributes.
And these values are so important to us, these beyond-intelligence values, that we've codified them really explicitly; in fact, we use our values very explicitly as the lens through which we hire.
And of course we're seeking, yes, intelligence, for sure; we're intelligent people. But intelligence is not enough; it does not take you there alone. In fact, imagine intelligence in someone who lacks these other values: no, no, no, they think teamwork is terrible, they have no sense of resilience, but the intelligence is off the charts! That does not sound awesome.
And can you imagine doing hiring based on, like, an exam? That's ludicrous, absolutely ludicrous, or on any other linear measure of intelligence. This fascination with intelligence comes from people who don't get outside. They need to do more things with their hands. Go for a hike with your kids. Intelligence is great; it's not the whole thing. In everything we have around us, there is humanity present: in our built environment, in the machines that we use. You don't see that humanity, because it's not physically present in the artifact, but it lives in the engineers who built it. This is the soul in Tracy Kidder's The Soul of a New Machine. It kills me to quote Edison, because I'm sure he lifted it from someone else. Computer programs lack this humanity. They don't have the willpower, the desire, the drive, let alone the deeper human qualities that are required to do the experimentation necessary to actually engineer. And so AI can actually be useful to engineers, but it cannot engineer autonomously, and we do a disservice to our own humanity when we pretend that it can. It can't. We humans can engineer, and we can use it as a tool.
Now: this is great, so there's no existential risk, so AI is fine? It would be unfortunate if the only counter-argument to the fear of existential risk were: oh, I think AI is going to save humanity, and we should take the training wheels off and do whatever we want. No. Of course not. There absolutely is risk associated with AI. Bad news: it's the risks you already know about. It's the mundane stuff. It's the racism; I'm so sorry, it's the racism. It's the economic dislocation. It's the class problems, all the problems that we've been grappling with as humans for our entire existence. AI acts as a force multiplier on those problems, and we need to treat that really, really seriously. AI ethics is exceedingly important. Just because AI is not going to lead to the end of humanity doesn't mean we get to shrug, especially when AI is being used to inform decisions that affect people's lives. And part of the reason that I don't like this existential-risk framing of AI is that it allows us to ignore these problems. Because, I get it, that larger fear feels comfortable. It's like: oh, no, no, we're all going to be extinct anyway, so we're all going to be in the post-singularity afterlife. It's what Emily said: no, we're going to be on another planet. No! Some of us actually care about this planet and this life and this world. This is the world that we live in, and we should not let unspecified, nonspecific fear prevent us from making this world better. And if you still harbor those fears, if you're like, you know what, I'm hearing you, but, as Stephen says, you haven't quite convinced me; you had me at times, you lost me at times; I'm still concerned that AI poses an extinction risk... seems a little strong, but good news:
Because we already pose an extinction risk to ourselves by that same metric, and we've got laws, we've got regulations, and we should enforce and expand those existing regulatory regimes. We have regulatory regimes around the handling of bioweapons research. We have regulatory regimes where software impacts safety, and we have regulatory regimes around self-driving cars. Let us enforce them, and take your fear there. I know it's boring.
So if you want to go deeper (and again, I'm so sorry; I didn't know any of this stuff and now I can't un-know it), I do think anyone concerned about molecular nanotechnology really needs to read the debate between Richard Smalley and Eric Drexler. Smalley has since passed away. It kills me to be recommending these four people on the slide.
I just... these are definitely not endorsements; I'm not advocating, like, "yes, everybody, listen to Marc Andreessen." You're off the rails! But this is part of why we need more people to stand up. This fear of extinction occupies this place because, if I don't stand up and defend humanity, we're left with Marc Andreessen to do it.
Eliezer Yudkowsky, I don't know, he's kind of growing on me. And Logan Bartlett: Logan is actually a very good interviewer, and Yudkowsky is kind of the Oprah of the AI journalists. It is worth listening to this if only to understand what other people are listening to, because it is troubling, and Logan Bartlett did a terrific interview with a subject that I would have had a hard time interviewing coolly.
And check out our podcast, where you can hear all of the failures of Oxide and all the humanity that we have in our industry. And with that, thank you very much.