Wendell Wallach Interview Transcript
[music]
DR BRUCE MCCABE: Hello, and welcome to FutureBites with Dr. Bruce McCabe, your global futurist, where we try to explore some of the pathways to a better future. And my guest today, in whose home I'm sitting, is Dr. Wendell Wallach. Welcome, Wendell!
DR WENDELL WALLACH: Thank you.
BRUCE MCCABE: Let me briefly introduce some of your credentials and why we're talking today. It's a long list, but I just want to start with a couple of big highlights. So Wendell, you're the Emeritus Chair of the Technology and Ethics Studies Group at the Yale Interdisciplinary Center for Bioethics. But you also wear other hats, important hats, including being Co-Director of The Artificial Intelligence & Equality Initiative. Now, have I got this right, the Carnegie...
WENDELL WALLACH: At the Carnegie Council for Ethics and International Affairs.
BRUCE MCCABE: Perfect. [laughter] Which is why I really want to talk to you today, because, and we've just had a preliminary discussion, we're both deeply interested in where things are going with artificial intelligence and some of the problems it brings, as well as all the wonderful things we've been hearing about in the research. So hopefully we can have a little conversation about some of your thinking, about where you see the biggest dangers. And all this work you're doing on equality fascinates me particularly, because everyone talks about existential threats, and I think neither of us is particularly enamored with that kind of thinking about the dangers. There are much more pernicious and important ones.
WENDELL WALLACH: Much more near-term considerations. And actually, a whole array of them. There's also a lot of talk about how AI can be for good and how we really have to utilize it for many things, from curing diseases to helping with climate change and in other areas. And my concern is that all of that masks the fact that the trajectory is not necessarily healthy at the moment with AI. The leading corporations would like to emphasize the good part, but meanwhile they are amassing a fortune and inordinate control over the future.
BRUCE MCCABE: We have a huge commercial engine driving this at warp speed, don't we? And that could take us in all kinds of directions we don't want because it's moving so quickly.
WENDELL WALLACH: It seems, it seems. They talk a lot about AI ethics, for example, but my concern is that a lot of it looks to me like ethics washing. When push comes to shove, the corporate world doesn't necessarily do the right thing.
BRUCE MCCABE: Yeah.
WENDELL WALLACH: So we're in the ChatGPT, or the generative AI, moment, where that has become the driving force of people's attention. And I'm happy to get into that, but my interests are much broader. They go into nanotech and biotech and a whole flock of other areas that I think collectively are comparable to climate change in terms of the impact they're having upon humanity. But here we are, we're in this generative AI moment, which has suddenly got everybody's attention. And the question is, where is that taking us, and where does the corporate world want to have it take us?
The point I was going to make, which I lost for a moment, was that ChatGPT was released by OpenAI, but OpenAI was not a company. OpenAI initially was a nonprofit that was created to ensure that future AI would be safe.
BRUCE MCCABE: Yes, that's right.
WENDELL WALLACH: Yeah. And in the process of working towards ensuring that future AI would be safe, they had to scale up the technology so it was dealing with bigger problems. In that process they created, and they weren't the only ones working on this, these new architectures, these generative AI applications. They realized they had something of considerable value.
BRUCE MCCABE: And suddenly commercial ...
WENDELL WALLACH: And suddenly they're a commercial company. Meanwhile, Microsoft, a company that has been telling us for years that ethics is in their DNA, they jump on board with OpenAI. They have now integrated ChatGPT and other technologies into Bing. And they knew at the time they were doing this that these technologies weren't quite ready for prime time. They didn't have the appropriate protective mechanisms in place. And yet the opportunity to lead, to honor the fiduciary responsibility they had to their stockholders, took over, and they compromised this mantra that we have ethics in our DNA. So to me, that's the concern. The concern is, even the good guys can't be relied upon.
BRUCE MCCABE: Yeah, and it's that classic Silicon Valley attitude. Just do it, break things, and hold some funds aside for the litigation to come. And then there's damage. There are people who get hurt in that. There are writers who get hurt.
WENDELL WALLACH: And that was certainly true of Google and Facebook, and there were other companies that were more forthright about that. Google is now saying that they will defend people who are charged with liability for utilizing their tools and so forth. But that wasn't true of all the corporations. In what I call the AI Oligopoly, Microsoft and Apple were sort of the ones who were trying to set a standard for being the good guys.
BRUCE MCCABE: Okay, yeah. Yeah. So, given the inauspicious start to this era of artificial neural networks, I mean, this revolution is really 10 years old now, and it's only just now gathering big steam, and I'd say we're 1% into the journey ...
WENDELL WALLACH: Well, how long the revolution is depends upon where you're counting from and what the particular concerns were. So, I mean, AI goes way back to Alan Turing's paper...
BRUCE MCCABE: Sure, of course ...
WENDELL WALLACH: In 1950, and then the 1956 conference at Dartmouth, which most of the fathers of AI except for Turing attended. In the invitation to it, John McCarthy said, we're going to work over the summer on artificial intelligence. So that's where this unfortunate term has come from. But it obviously started much slower than they anticipated. They thought we would have a computer program that could beat a grandmaster in chess within 10 years. It took forty, from '56. They thought we were going to have within 10 years a natural language interface that you could just talk with and that would respond coherently to you. We're just starting to get that; some of the voice interfaces to the GPT large language models are giving us that. So they were unbelievably naive. And in the development of AI we've had winters and summers, we've had progress being made. But yes, we are 10 years into what I will call the deep learning breakthroughs, the machine-learning breakthroughs, that have led to this generative AI moment. But that's still just one trajectory within a broader universe of AI applications.
BRUCE MCCABE: Absolutely ...
WENDELL WALLACH: And we actually live in that world of that broad universe.
BRUCE MCCABE: And I'd say it's less than 1% of what's coming in the very near future. Looking at all the labs that I visit and all the things they're working on, generative AI is such a small part of what's actually coming. So, if we assume the tsunami continues, with such an inauspicious start, what are your particular fears socially, particularly around equality and those sorts of things? I'd love to get into that, because everyone talks about the existential things, which I disagree with, but the social upheaval is very significant.
WENDELL WALLACH: Yeah, let's talk about the social upheaval, because I think that's particularly critical. And the thing about social upheaval is that it depends upon what you do to right the trajectory. And it's not just this deep learning, generative AI cycle, which has made AI particularly lucrative. The whole digital economy has been structured in a way that is winner-take-all. That's why it's so important for the corporations to get into something first. There's a lot of social media out there, but nobody is on the scale of Facebook.
BRUCE MCCABE: I see. Yes.
WENDELL WALLACH: There's a lot of search out there, but nobody is on the scale of Google.
BRUCE MCCABE: Yeah.
WENDELL WALLACH: So that's created an economy where productivity gains go straight to the owners of capital, meaning any of us who own stock, but we all know that most stock is owned by 1% of the public. And increasingly that 1% is those who either own tremendous amounts of stock in digital corporations or are running the digital corporations. And to make matters worse, consider what happens every time we automate a job. Fifty years ago, if you had productivity gains in our society, half of those gains went to wages, they went to workers, and half went to the owners of capital. We're now approaching 70% of all productivity gains going to the owners of capital, and 70% of that going to 1% of the public. So we are creating an increasingly top-heavy society, and we are becoming increasingly dependent upon a leadership for whom the exacerbation of inequality is in their interest. There have been 200 years of fears of technological unemployment, going back to the Luddite rebellion.
Every time there's a new technology, there's this grand fear that it is going to take more jobs than it creates. And in the short run it usually does, because it eradicates some fields, but it tends also to create new jobs. We are now getting to a point where the nature of the technologies, which are more thought-centered than labor-centered, combines with the fact that we have this top-heavy economy. And it's not only top-heavy, but those at the top are in a position to shape the trajectories of our society. I think we are in real trouble in terms of this exacerbation of inequality.
And it's going to mean an onset of technological unemployment. It will affect the global south in devastating ways, but it's also going to affect all kinds of professions we have today, whether that's the legal profession, or now the script writers for Hollywood, or the actors and so forth. So we are in a pretty dire moment, or at least a moment where we can begin to see the trajectory and see if we can make some significant moves, take significant actions, to right it.
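[A note on the arithmetic, as a rough sketch taking Wendell's round figures at face value and assuming the two shares simply compound:

$$\underbrace{0.70}_{\text{share of gains to capital}} \times \underbrace{0.70}_{\text{share of that to the top 1\%}} \approx 0.49,$$

i.e., roughly half of all productivity gains flowing to 1% of the public, versus the even 50/50 split between wages and capital that he describes from 50 years ago.]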
BRUCE MCCABE: So you see this as different, not just because of the speed and scale of AI's impact, but because the underpinnings are already different, in that the profit, the capital, is already going to a few. Is that fair? So it's different from previous technologies for both of those reasons?
WENDELL WALLACH: Well, I think the imbalance is there, and the imbalance is toward the profits going to the very same people, the oligopoly that controls the technology, in whose interest it therefore is to automate the jobs. So I see that as part of the problem. It's a constellation of different forces that have come together at this moment that is very different from what happened in the past. You've always had your 1%. For hundreds of years we've had our 1% who controlled the economy, and of the 99%, at least a few hundred years ago, none of them lived all that well. And that's a little bit different now. Many of us who are certainly not close to being in the top 10% of earners are living really privileged lives. So in that sense we're in a different universe, but we won't be if we move more and more in this direction of decimating work without creating new ways to get capital to people, because work is how most people make their living. And it's not going to work to just have it be a paternalistic thing where we're all suddenly dependent upon the largesse of Microsoft to fund the UN and international peace, or to fund the welfare of those in Africa who are being devastated because grains from Ukraine aren't flowing that way.
BRUCE MCCABE: Yeah. Yeah. Can I add another one? You mentioned this word dependency, which I've been thinking so much about in the context of AI. And you may or may not agree with me, but I think about technologies such as cell phones, search engines, Microsoft Office, all these things that have become so universal that you cannot really operate without them. You utterly depend on them to function and to compete on an equal basis in the modern economy; you cannot do without them. So here comes AI, and it is going to impact every aspect of every job.
WENDELL WALLACH: It's already doing that.
BRUCE MCCABE: Yeah. Every aspect of every job in every industry, universally. I see the dependency becoming particularly pernicious here, also because of the nature of the algorithmic exchange going on: I need this, AI will supply it. AI will supply whatever you want. There's almost a seduction going on. And when you have that universal dependency, even if you're charging cents on the hour for access to AI, everyone's a customer. Everyone. So you've got the economics of that, but you've also got this slave-to-the-machine aspect in terms of the power imbalance. Whoever runs the AIs is running everything.
WENDELL WALLACH: And you aren't just the customer, you're actually the product.
BRUCE MCCABE: Indeed.
WENDELL WALLACH: If you are not paying for a service, then you are the service. And whatever kind of service you are, you are creating the data that the whole AI universe functions off of. And part of what it also does is learn enough about you to turn you into the consumer that it wants you to be.
BRUCE MCCABE: Yes.
WENDELL WALLACH: You know, or to have the political attitudes that it wants you to have.
BRUCE MCCABE: So wealth inequality and power inequality are definitely two dimensions.
WENDELL WALLACH: Two dimensions. Yeah. I mean, inequality is something that has all kinds of dimensions to it. To the extent that the internet is creating access to information for people, it is a potentially leveling tool from an educational perspective. It's not fully a leveling tool, but it does mean that any young person with the will to learn can find a way to learn.
BRUCE MCCABE: But it's funny how that's the promise, and always has been, and this is the thing that I struggle with, and yet even with the internet, you have control structures that move in. So China controls access, corporations control access, and money seems to still flow to the big guys, even though you've got a great “leveling” technology [laughter]
WENDELL WALLACH: Well, leveling in the educational sector.
BRUCE MCCABE: The educational sector.
WENDELL WALLACH: That doesn't necessarily mean you've got a leveling in the economic sense. But yeah, yeah.
BRUCE MCCABE: Yeah. And then there are democratic issues. Do you think a lot about that? With AI, I mean, there was, I'm trying to get his name here, Neil Postman, who was a philosopher some time back. I haven't read all of his stuff, I've read summaries of some of it, but he had this wonderful phrase, “amusing ourselves to death.” He was talking about TV, the introduction of television and how it kind of distanced us from reality. And there was a danger to that. And I think about AI, if I'm thinking deeply about the next 10, 20 years, since we're on the journey towards our artificial butlers and our little assistants and our chatbots choosing to give us whatever we are pleased with. Chunking down...
WENDELL WALLACH: They're also telling us what we should be pleased with.
BRUCE MCCABE: Yeah. Which is worse.
WENDELL WALLACH: I mean, we have also moved into the world of disinformation, and AI and deepfakes and all kinds of things are going to raise that to a different level, to the extent that the technology, and usually we blame this specifically on social media, has broken us down into our particular little silos of interest. They know how to feed us what we want to hear, but they also know how to feed us what they would like us to believe. When I say “they,” I don't know who the “they” is [laughter], but they can be either political or economic in terms of what their concerns are. To the extent that we're getting free technology, we're getting it largely because the business models of companies like Google are built on advertising. So what they do is get kind of an inkling of what we're interested in, and then they feed us ads all day long from people who are paying them, as to where we can get those products.
BRUCE MCCABE: Yeah. So if we look at the subjects we've talked about, or there might be others, what are the things that worry you most?
WENDELL WALLACH: Well, there's an awful lot of things that worry me. [laughter] For this interview, we're going to stick with AI. And I think I...
BRUCE MCCABE: Yes. We can talk about biology and CRISPR and other things for a whole other conversation. [laughter]
WENDELL WALLACH: Yeah. Those are different. And the convergences between all of these things, that's a totally different conversation. I have a slide that has the 13 big issues with AI.
BRUCE MCCABE: Oh, really?
WENDELL WALLACH: Yes. And some of them we've already touched upon. Exacerbation of inequality. We've basically created surveillance societies. Misinformation, disinformation, and weaponized narratives. Lethal autonomous weapons and other ways of militarizing artificial intelligence. There are of course issues around bias and fairness, transparency, privacy, and property rights. And we can go on and on. At one point or another over the last 20 years I've actually been engaged with all of these issues at different times, because my history really goes back to when there may have been a couple hundred people in the world who even cared about these subjects. With the deep learning revolution in AI, we suddenly have, I don't know, 4,000, 5,000, 10,000 people who are now AI ethics specialists ...
BRUCE MCCABE: Experts, yeah. Experts. [laughter]
WENDELL WALLACH: … in one form or another. But the things that I still try to put some energy into are the lack of accountability, lethal autonomous weapons systems, and whether we can put effective governance structures in place, particularly on an international scale.
BRUCE MCCABE: Now, before you go into governance, because I do want to get into that, let's touch on autonomous weapons, which we haven't covered. The militarization. Have you got a simple way of conceptualizing where we see that going? When I think about it, I see lots of autonomy on the battlefield already, being introduced without humans necessarily in the loop, at the lowest level. But generally, if people are going to die, the danger is that we are taking humans out of the loop. And as we move into more strategic weapons, where people start to worry about existential risk, I see far less threat of that happening; I see [it] mainly down at the tactical level. But then there's weaponization in terms of cyber-attack, which is another layer as well. I mean, how are you conceptualizing those threats to us, the things we shouldn't be doing?
WENDELL WALLACH: Well, my concern is that for all the talk about how smart the machines are getting, there are areas in which they're very dumb. But the argument for them is that they aren't dumb in the same ways that humans are dumb, and therefore we'll be better off if we rely on these machines rather than on humans. And there's some truth to that, in the general sense that we humans have certain kinds of cognitive flaws or mistakes that we make. But usually when you have a well-trained person in a position, they are sensitive to the context in ways that we have no idea how to make the machines sensitive. So the future military should be one where there's a better dialogue between the military leaders and the sources of information for them, the technologies that they have available to deploy.
But I'm afraid we are speeding up warfare in a way where that will be a luxury. So I got involved in limiting these systems; actually, I argued years ago that lethal autonomous weapons systems, weapon systems where there's no human in the loop in real time, violate international humanitarian law. And that's before we even had a campaign to discuss whether or not these technologies should be banned. I would've liked to have seen something like that. The difficulty is, here we are today where international humanitarian law is being violated on a wholesale scale. This week it's the Near East, but it's been Ukraine for quite a while now. And in the case of Ukraine, I sat in a lot of meetings where the Russian ambassadors would get up and talk about their total commitment to international humanitarian law. So again, it's like the corporations: do they really follow through when push comes to shove? So you get into this weird kind of naivete, where, if you're going to be realistic about what countries are going to do, does it make any sense to try to ban technologies when you may not even know whether they're being used or not? Most of the drones being used in Ukraine today are human-directed. There may be a few that are actually picking targets and making decisions in real time...
BRUCE MCCABE: Yeah, not many.
WENDELL WALLACH: … But that's still being debated, how much of that is taking place. I mean, the fascinating thing about the war in Ukraine is that it's closer to an old-style World War II kind of war than our futuristic notions of what these technologies are going to do. It's still about artillery and missiles and …
BRUCE MCCABE: Yeah, trenches.
WENDELL WALLACH: Yeah, it's trenches and stuff. I mean, landmines and so forth, another thing that ostensibly has been banned, but that isn't really what's taking place right now. But I do think the difficulty is that if we speed up war and have more and more technology in there that can make the decisions, well, it can't always be relied upon. And then you open the door to exacerbation of existing hostilities in ways that nobody ever intended.
BRUCE MCCABE: Like the military version of a flash-crash on the stock market. Things could go way out of control really fast. That's interesting.
WENDELL WALLACH: So the Convention on Certain Conventional Weapons, that's where arms control agreements are made and followed through on. When they first asked me to speak, during the first three years when they were looking at this issue, they asked me to talk about predictability. And basically what I said is, there is no predictability in these systems. It's not that they aren't relatively reliable, but you're talking about complex systems being introduced into complex socio-technical environments, and the actions they take you can't always predict. And even if you could predict them, it would still be probabilistic. So as soon as you're in the world of probabilities, well, most people guess totally wrong how many times, let's say, you will have six heads in a row if you have a thousand and twenty-four flips of a penny or something like that. After they get two or three heads in a row, they say, well, now we're going to hit six heads... No, no, no, no. That's going to happen at least once every thousand flips, probabilistically. It may not happen in the next thousand, but then you may get two, and... So the problem is we're getting into a world where low-probability events, they always occur, but you're getting involved in a world where low-probability events can have high impact. So when you start talking about weaponry, the impact is maybe more important than the probability. A low-probability event that has a high impact: well, how about a lethal autonomous weapon that can either set off a nuclear weapon, or act in a way that leads somebody else to start setting off a nuclear weapon?
BRUCE MCCABE: Yes, if risk is a scale of disaster multiplied by the probability, either part of that equation can be bad.
WENDELL WALLACH: Exactly. That's the equation.
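[A note on the arithmetic behind this exchange, as a rough sketch assuming a fair coin; the figures are illustrative rather than exact. The chance of six heads in a row starting at any given flip is

$$P = \left(\tfrac{1}{2}\right)^{6} = \tfrac{1}{64},$$

so over roughly a thousand flips the expected number of such runs is on the order of $1000/64 \approx 15$: the "low-probability" event is all but guaranteed given enough trials. And the equation Bruce cites is

$$\text{Risk} = \text{Probability} \times \text{Impact},$$

which is why a rare event with catastrophic impact can still dominate the risk assessment.]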
BRUCE MCCABE: So can I attempt to add something to your list of 13?
WENDELL WALLACH: Please. We're just in a conversation.
BRUCE MCCABE: So this came out of a wonderful visit I had with a guy called Professor Hod Lipson at Columbia University. He's the head of their Creative Machines Lab, a very funny guy, but a very deep thinker. And he shared with me his greatest fear out of it all. He said we have a blind spot with creativity, and this was before the whole ChatGPT thing exploded; he said people don't think these things can be creative, and they're going to be very creative. But he said what worries him in particular, and ever since, I've never been able to get it out of my head, is the effect on human relationships. He says, we worry about our kids having online friends. What happens when they have synthetic friends?
So he's drawing a direct line here to loneliness and distancing. We can see it today in social media: the artificiality, the shallowness of online friendships, and social media substituting for the deep and rich interpersonal relationships that are offline. And we can draw a line from that to depression rates, in young girls in particular, and to suicide rates. So there are definitely correlations. He's saying, what happens when we have those synthetic relationships?
And I always think of the movie Her as a lovely metaphor for where we could be going. You've got "Replika" out there and the controversy of people trying to have relationships there. You've got the Chinese chatbot "Him", which was offering relationships with a chatbot.
But if you keep drawing the line on capabilities emotionally, they're going to be very deeply engaging, and maybe people will end up spending more time with their AI and a little less time with interpersonal relationships. And it seems to me a very broad danger to us, a death of a thousand cuts, right?
WENDELL WALLACH: It is. It's a complicated danger, though, because we don't fully know. In other words, when I talk with kids about their relationships and their friendships and so forth, I'm very confused. It's definitely, again, exacerbating things for kids who have a tendency toward being alone or isolated and so forth.
I think we're very confused about what having introduced social media, and what introducing digital companions and so forth, means, because we never know. Correlation is not causation, and we don't necessarily know when these technologies are just exacerbating pre-existing tendencies within young people and when they are creating them. And I'm amazed at young people who are remarkably social and remarkably engaged, and I say, well, how did that happen? I thought you were all living with your digital friends. But I think it's real in the sense that we are changing the context of childhood, and we don't know what that means.
BRUCE MCCABE: We don't know what it means.
WENDELL WALLACH: Yeah.
BRUCE MCCABE: So the unknown. At least we can agree, there's an unknown. Yeah.
WENDELL WALLACH: Well, unfortunately there are a lot of unknowns, and in a certain sense that's good, because it means the trajectory isn't necessarily totally set, and we can talk about the forces in motion. Climate change is in motion, there's no question about that. The news doesn't seem to be good, and the evidence isn't there yet that humanity is going to rise up in any way to adequately address it. And it's the same with these emerging technologies.
I've been asking a question of audiences for quite a few years now, and I basically say, how many of you believe that the benefits of AI are going to far outweigh the risks and undesired societal consequences?
BRUCE MCCABE: Great question.
WENDELL WALLACH: And then I ask the opposite question. You know, how many of you think the risks and undesired consequences are going to outweigh the benefits? I've been getting different responses over the years, and it has a little bit to do with what audience you ask.
BRUCE MCCABE: Yeah.
WENDELL WALLACH: So when I'm at [an] AI in the military event, I get a preponderance of the undesirable effects.
BRUCE MCCABE: They're more pessimistic?
WENDELL WALLACH: Yeah. But those are usually meetings where people are looking at lethal autonomous weapons and so forth.
I asked this a few weeks ago at a conference called AI for Good, which has been run for quite a few years by the International Telecommunication Union in Geneva, which is part of the UN system. They have a few thousand people show up. I brought this up in a closing session, and there were still about 600 people in the room at that time. And when I brought it up, since the conference is called AI for Good, we got maybe about 150 people or so putting up their hands to say the benefits are going to outweigh the risks.
Then I said, how about that the risks will outweigh the benefits? About 75 hands went up. And then I said, what about if it could still go either way? 400 hands went up. You know, there's still an ambiguity about...
BRUCE MCCABE: Nice.
WENDELL WALLACH: Whether we're going to reap the benefits versus the risks. Yeah. And what I have seen develop over the past few years is that both of those groups are expanding at a rapid rate: both what the beneficial side of these technologies could be, and also what the risks could be...
BRUCE MCCABE: Interesting.
WENDELL WALLACH: Whether they drive us towards surveillance economies or toward authoritarianism, whether we have armed conflict with military weapons, whether disinformation undermines democracies, whether there will be vast technological unemployment without a redistribution of capital in any way, shape, or form, there are getting to be more and more of these ways in which the technology can create accidents or be used for political and economic purposes that are not healthy.
BRUCE MCCABE: And again, it gets back to the scale of the risk. That ambiguity is okay when the risk is small, but the risk potential is large here. So that ambiguity can be dangerous.
WENDELL WALLACH: It's not a secret to anybody that we're in a pretty tenuous moment of history.
BRUCE MCCABE: Yeah.
WENDELL WALLACH: You know, it's not. And it's surprising that we're in this moment, given that I grew up when you hid under desks because you were afraid the Russians were just going to launch ...
BRUCE MCCABE: Duck and cover.
WENDELL WALLACH: … the missiles or something like that. In a certain sense this is even more unstable than that period of history was. There are a few wars in motion at the moment, but most of us aren't directly touched by them. And yet we all understand, at least intuitively if not intellectually, the entanglement of all these forces and how easily things could just slip away in a truly tragic form.
BRUCE MCCABE: Yeah. Well, can I now flip to what we can do?
WENDELL WALLACH: Sure, let's do it, yeah.
BRUCE MCCABE: What we can do, because you are attempting one of the hardest things in all of the time you spend on these committees looking at governance. I had a similar conversation with Jennifer Kuzma on this podcast about governing CRISPR and the ethical use of CRISPR in biology. But regarding ethical use of AI, how do we nudge this in a more positive direction? How are you trying to do it? Because I know it's so difficult when the technologies are so accessible, and we are often dealing with the artifacts of the AI; we don't know when people are using AI behind the scenes. It's hard to detect, it's hard to police. The idea of everyone signing up to an agreement and then obeying it, we've already torn that up. How are you trying to nudge it in a more positive trajectory, and what are you learning?
WENDELL WALLACH: Well, I think first and foremost is citizen education. Each of us needs to become a little bit more self-aware of what we're engaged in. We need to understand the technologies. We can't buy into the AI experts telling us it's too complicated for us to understand in any way, so they're going to make better decisions than we're going to make; so just get over it and hand the future over to the AI.
BRUCE MCCABE: To us, so we can make money.
WENDELL WALLACH: Right. But basically, that's the subtext. The subtext is: we're still behind this technology, and until we have a singularity that pushes us aside, it is about us. But if we can get you afraid of that singularity, then you will also defer to us in coming up with the solutions to ensure that it's safe.
BRUCE MCCABE: Or you'll be sufficiently distracted by the long-term future to let us do whatever we want in the short-term future.
WENDELL WALLACH: Right, right. And what really concerns people who see gender bias and racial bias and so forth in it is that the existential risk is sucking the air out of the room, and it's not letting us address the near-term things that we could actually address. One of the things we could address is just making the bloody people who deploy these things accountable for them. But in America, for example, we have Section 230, which, during the era when social media was created, basically gave the corporations no accountability, and that gave them total latitude in what they do. And all of that was done in the name of spurring innovation. Particularly in the US, we have corporate capture, and the argument that you don't want to do anything that slows down innovation, especially because we're in competition with the Chinese, who we can't trust in any way, shape, or form.
BRUCE MCCABE: Another fear campaign to get the behaviors aligned.
WENDELL WALLACH: It's a campaign, exactly that. And so I think education matters, where the public doesn't buy into this and demands a degree of accountability: that we have a right to sue these companies if they misuse our data, that they don't get off scot-free, and that they can't tell us every time, oh, well, we had a hacker who broke in and now your DNA is being sold to the highest bidder out there. That's not acceptable. But that takes an educated public, and that takes governmental structures in place that can enforce these things, or at least pose the threat of enforcement. I mean, how much does government actually enforce anything? But the threat of enforcement at least gets rid of the most egregious players, and others, because of that, are tamer in their actions. So I think that offers an awful lot of hope.
I also want to say something, if I can just step back a little philosophically here for a moment. There are certainly forces in motion where most of what any of us do wouldn't make a difference, and it's not that you shouldn't recycle your garbage and such; you should, you should. But the point is, most of us aren't in a position to stop climate change tomorrow, or even to see...
BRUCE MCCABE: Sure.
WENDELL WALLACH: Exactly what we could do. But I have a view of history that's a little bit different. It's about inflection points: there are moments when sometimes a small action can make a big difference. You're on a particular trajectory, and oftentimes early in that trajectory, if you see it, just a slight nudge can change the trajectory a little bit, and that change, just the angle of it, would take us to a very different destination over time. So I am particularly interested in what I call Salt March moments. In history there was Gandhi's Salt March, where he marched to the sea and they made salt, basically, without paying the tax, and suddenly this was happening on every street corner in India. The fact was the British had been collecting salt taxes; it was like the Boston Tea Party in America. There was that Salt March, and it also culminated in Gandhi's nonviolent followers marching on a salt warehouse and being clubbed down all day long, never rising up or turning violent, even though skulls were cracked. In that moment, moral superiority moved from the British to the Indians.
BRUCE MCCABE: Sure.
WENDELL WALLACH: So at the right moment, a particular action can change the course of history. I think we still have the possibility of those moments occurring, but we can't just hang out and wait for what we hope will be the right moment.
BRUCE MCCABE: Yeah, or wait for wise laws to somehow metamorphose out of our Congress or out of our parliaments.
WENDELL WALLACH: So governance of technology has been bedeviled by two problems. One is known as the pacing problem: the pace of scientific discovery and technological deployment far outpaces our ability to put ethical and legal oversight in place. The other is what is sometimes referred to as the Collingridge dilemma. David Collingridge said in 1980 that it's easiest to shape a technology early in its development, but early in its development you don't necessarily know what its impact is going to be, so you don't know what to do; and by the time you do know what to do, it's so entrenched you can't do anything. That's pretty much the story of social media. But I think something a little bit different is available to us now: if we get a little bit better educated, we will see those moments more quickly. We will see the inflection points, we will understand what the impact will be. We know today that the impact of generative AI is going to be an onslaught of fake news, particularly by the next election.
BRUCE MCCABE: A tsunami of effluent.
WENDELL WALLACH: I've been saying over and over again, we are going to have video after video of Joe Biden falling flat on his face in the days before the election.
BRUCE MCCABE: Of course, yeah.
WENDELL WALLACH: That can change the history of humanity, if people don't understand this. This is just guaranteed. Any little kid can make this.
BRUCE MCCABE: Yeah, we know it's coming, that tsunami of effluent, or call it auto-generated effluent. So, okay, education is the bedrock of all of this. The more we know, the more we might have social reaction, or action at a small scale that ends up making a big difference. That's social policing. It seems to be the biggest one.
WENDELL WALLACH: Or that we will recognize when we're being manipulated: maybe, well, I don't really need to buy this ski helmet or whatever they're trying to sell me today; I recognize in that moment that I have just been manipulated and don't let myself be pulled in. I mean, the amount of empowerment that happens every time we have a moment of self-recognition and don't let the propagandists or the marketing people have their way, I feel grand about that. That's one of my great rewards in life.
BRUCE MCCABE: We need to start that education at the kindergarten level. I mean, it's got to be real early I think, before it gets entrenched. But I like that. Are there any other levers in terms of incentives or consequences besides education? I mean, you are doing top-down work, you're trying to work with Congress and others, right, where you can influence law?
WENDELL WALLACH: Well, there are a lot of people working with Congress right now, so I'm not doing much of that directly. I do all kinds of things indirectly, just because I have colleagues who are doing this or that, so I have the conversation with them and then let them go on. And I hope that I've said something to them that's animated them enough that they will care.
The thing I'm focusing most on right now is whether we can put any international governance for artificial intelligence in place. I think we need new models of governance; it's not hard to see that the institutions we've created cannot respond quickly enough to the technologies we are putting in place. And we have a UN system which does wonderful things when it comes to refugees and hunger and so forth, but which is dysfunctional when you're talking about interstate relations. So we are going to need new institutions. That's not going to come overnight, that's not going to come easily, but I've been trying to use the present moment to see if we can start to put in place new mechanisms that are a bit more responsive for emerging technologies, starting with AI.
BRUCE MCCABE: I've started to try to capture the ethics frameworks out there; there are only about a hundred of them on AI, and that's fine, we're all starting to...
WENDELL WALLACH: According to my friend Yi Zeng, who is at the University of Beijing and is one of the leading AI ethicists in China, he counts more than 200.
BRUCE MCCABE: There you go. You don't have a standout one that you can point us to and say, that's a really good one?
WENDELL WALLACH: Well, the two main ones of course are UNESCO's and the OECD's, because people have signed onto them. But Yi and another colleague of mine were on the committee that drafted the Beijing principles, and they're very similar to the ones you see everywhere else. The main difference is that rather than human rights being one of the things they commit to, they commit to harmony, which is the ancient Chinese principle. And though the Chinese government says that it is a signatory to the Universal Declaration of Human Rights, they just don't want human rights being defined by western liberals, so they're always a little bit shy on that. But again, it's pretty similar to the other ethics agreements that are out there.
BRUCE MCCABE: And it seems that, if we were to point listeners to something as a starting point to embody in their organizations, those ethical frameworks are reasonable places to start. At least there's a list of considerations to educate workers, colleagues, and managers on.
WENDELL WALLACH: Well, it's actually gone way beyond that. Only a few of those principles are actually AI-specific, but what happened is that when they decided they needed AI ethics principles, they started thinking about what kind of world we want to create. So there's commitment to the sustainable development goals of the UN; there are all kinds of things that find their way in that aren't really about AI so much as about where we're going with it. But now we've been in this stage of operationalizing those principles. So that's been professional organizations, the IEEE, ISO standard-setting bodies, trying to see how you operationalize those principles for particular fields, for particular issues or concerns. So for example, I'm part of a committee at the IEEE that's putting together a document on governance. When we're talking about governance there, we're not talking about national government, we're talking about corporate governance: what should a corporation be doing, if it's going to implement AI systems, to make sure that those systems don't make the corporation disappear tomorrow? That's an existential risk to the corporation, but only because the system could potentially do something for which they are totally liable.
And do they even have governance structures in place to evaluate that when they're putting the technologies in place, or any mechanism so that if there's a real problem, it finds its way to the board of directors? Those are all things we can be doing. If you are a business manager, maybe you need an AI officer, and that officer sits between the engineers who are introducing the new systems and the management and the board of directors, in case something happens that they need to be aware of.
BRUCE MCCABE: I like… Sorry, go on.
WENDELL WALLACH: No, no, I was just going to say, you can introduce ethicists into the engineering planning for these things, meaning people who are sensitive to what the societal impact of bringing that system in might be, and what the impact on the identity of your corporation might be. So these are all things we can be doing.
BRUCE MCCABE: Yeah, and with education comes a social pressure for organizations to actually be seen to be doing that as well. So, yeah, that's part of it. I like what you said earlier about accountability, and this protection that exists around, what is it, 230?
WENDELL WALLACH: So 230 was, oh, a kind of exemption from liability, basically.
BRUCE MCCABE: But it's extended beyond social ...
WENDELL WALLACH: It's been put in place as an incentive to innovation, but now where does it end?
BRUCE MCCABE: And so when does it end? How close are we to turning that around?
WENDELL WALLACH: Well, this is what we're trying to figure out right now, because in the EU and Australia, Singapore and everywhere, China, I mean, AI regulations are being proposed or brought in. They're least likely to be acted upon in a very effective way in the United States, but even in the United States, the heads of all the corporations go to Congress and say, "Yes, we all want to be regulated." Now, whether they will allow a collective regulation that gets in the way of their oligopoly, I would be surprised. I think some of what they think regulation will actually create is barriers to entry for anyone other than them to manage some of these technologies. So there's regulation and there's regulation. Part of it is, again, being built around the long-term existential risks. Now, whether they're long-term or not depends on who you're talking to. I'm a little bit of a skeptic about the whole existential risks argument, but I think it also exists in a realm where nobody really understands what the hell they're talking about.
[laughter]
BRUCE MCCABE: Well, that's probably a good way to bring it to a close. Are there any other messages that you think you'd like to get out there, or me to amplify when I'm speaking to decision makers?
WENDELL WALLACH: No, thank you for this conversation.
BRUCE MCCABE: No, thank you.
WENDELL WALLACH: Unfortunately we didn't have enough of a conversation and I talked a little bit ...
BRUCE MCCABE: No, no it's good.
WENDELL WALLACH: … too much, but I'm thrilled to have the opportunity to share some of these thoughts.
BRUCE MCCABE: Well, it's an honor to meet you and a privilege for me to hear those thoughts. We'll have to have another conversation on all the other things, we can get into biotech and have another three hours, I'm sure, just on “editing all forms of life on this planet” and see where that takes us. [laughter]
WENDELL WALLACH: Well, right now I think people are starting with AI, and they're starting to realize some of the issues that are at stake. But it's not just AI, it's this whole world of emerging technologies and how it's writing our futures. I've often said the self-driving car is probably an apt metaphor: technology is moving into the driver's seat as one of the primary determinants of humanity's future.
BRUCE MCCABE: Well, one of the big positive messages I'll take out of this, paraphrasing what you said earlier, is that this is much bigger than AI, and that by forcing the pace, forcing people to try to address frameworks and governance around this, you're actually trying to work out what sort of a planet you want to be on, and what sort of behavior you want all businesses to display around all emerging technologies. And that's a good thing, if we can get some of that happening.
WENDELL WALLACH: Perfect.
BRUCE MCCABE: Thank you so much, let's end it there and let's have another conversation in the future.
WENDELL WALLACH: Thank you.
[music]