The Intersection of AI, Ethics, & Humanity, with Wendell Wallach

WENDELL WALLACH: I thought it was very important that a broad swath of the public understood what the issues were and jumped into the debates, because these debates are much too important to be decided either by policymakers, who are often among the worst informed, or even by those leaders in the academic and industrial worlds who are creating these new technologies. Again, they were often siloed. They were perhaps excited about their research and wanted everybody to believe that their research was what was going to make the world a much better place, but that did not mean they necessarily had the depth of appreciation for how those technologies were going to interact with workers, with the environment, and with the whole sphere of human activity.

What shall I say? I wanted the average intelligent reader with a little bit of science already under their belts to get a sense of what was going on, what the key concepts were, and what the key ethical concerns were so that they could start to jump into these fascinating philosophical and societal debates we are having about what to adopt, for what reason to adopt it, and what perhaps we should be regulating. That was really the driving force behind all of this.

Given the speed of technology, when the question came up of republishing the book, I sat back for a moment and said: “Well, how relevant is it? It has been a few years since the first draft went to press.”

As I thought about it I said, “While it is true that a lot of the technologies have changed, the foundational issues are pretty much the same.” We certainly did not have large language models or generative AI at all, and deep learning and Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR)-Cas9 were each just coming into focus as transformative technologies. So, yes, the technologies have changed, but the foundational issues remain, and some of those foundational issues, such as technological unemployment, the question of whether automation will put people out of work, have been around for hundreds of years.

I realized that most of what the book was about was introductory in that sense, and although people might have to update themselves a little bit on what has happened in artificial intelligence since I wrote the book they were going to find that nevertheless it fulfilled its original function, which was to give them the language, the concepts, and the sense of how integrated all of these issues were, that it is not just one technology but is actually going to be the fusion of all of these technologies that is going to come to bear.

SAMANTHA HUBNER: Absolutely. As you are describing that, it reminds me of one of the parts of the book I enjoyed the most, your inclusion of a lesser-known story about a small group of scientists who fought to prevent the Large Hadron Collider from being turned on for fear of it creating a black hole. In addition to stories like this, the visibility of technology, ethics, and public discourse is being further amplified by major pop culture events: the Oppenheimer film, the controversy surrounding AI-generated music, social media’s ongoing battle against deepfakes, and even the Black Mirror TV series, to name just a few. Your book points to the necessity of having ethicists and social theorists be part of the design and engineering process so that those philosophical questions are integrated into the development of innovative technologies and tools.

In your view are we seeing any progress on catalyzing more dialogue and action around these issues from the average reader?

WENDELL WALLACH: It is confusing. I would say yes and no. I think the average reader is starting to get a sense that these technological changes are taking place. I have found that I am giving many more talks to senior citizens than I have in years, and they are largely about AI. They see all the articles, and they want to know: “Well, is this a good thing? Is this a bad thing? How are we even going to make those judgments? What are the issues, and where am I just dealing with hype and where am I dealing with serious considerations that are going to come up?”

That has been fascinating to me. I think AI at least has brought in a large swath of the public, particularly since the advent of ChatGPT in November of 2022. Up to that time we did not really have, let’s say, an emerging technology that was accessible to the average citizen, and suddenly we have these tools that are not only accessible to them, but they are hearing that their children and grandchildren are submitting assignments for school that were not written by them but were written by this computer program that they just put a question to. On that level we have got people thinking, we have got people starting to think, Well, is the overall impact of AI really good or is it dangerous?

I think some of these broader questions, such as the societal impact of radical life extension or of seeding the stratosphere in an attempt to slow the heating of the earth’s surface, are still a little out there, still pretty much the province of academics, particularly young academics, and a few people who are fascinated by these subjects. The problem is that, like AI, these technologies are coming down the pike much more quickly than the public is getting informed, and therefore when the critical decisions get made I am afraid those making them are going to be those who have the most to gain from the decision going in one direction or another.

SAMANTHA HUBNER: That is an interesting point, and it reminds me of a previous conversation that I believe you had with Colin Allen, wherein you made an interesting note about a landscape of new juggernauts emerging from the creation of machines, particularly machines meant to exhibit some sort of moral dimension. Given the way AI is developed, trained, deployed, and used across numerous use cases affecting the public, it seems apt to integrate observations from another installment of your work, Moral Machines: Teaching Robots Right from Wrong, into the conversation. In that book you wrote about the hubris inherent in some of the innovations that have radically transformed the world. You also speak of operationalizing machine morality, implying that robots or machine-based systems can be programmed with values and moral behavior.

Can you explain what that is and perhaps guide us through the history of machine ethics to allow our listeners to better understand both how the field of machine ethics has evolved over time and how the field of machine ethics converges with modern developments in emerging technology?

WENDELL WALLACH: Happy to do so. Many people who know very little about me will know Moral Machines. It has had quite an impact and been cited very often. The challenge we put out there concerned machines actually making even basic moral decisions, the kind of moral decision where a high schooler knows that if you encounter situation X you do Y. That is very different from a moral decision such as, “Should we be deploying various forms of advanced technology, and if so, how do we regulate and control them?” That is a different thing.

Machine ethics has a lot of prehistory, but what most people think about is Isaac Asimov’s novels. He has the Three Laws of Robotics, and later on he adds a “zeroth law.” What happened was that Asimov changed the whole trajectory of science fiction about robots. Before him, nearly every story about robots was largely about robots going bad and eventually overthrowing humans, or failing at that in one way or another.

Asimov posed this question: “Well, what if you could program all robots and ensure that they had these three laws?” The laws were, very basically: do no harm to humans, obey humans, and engage in self-preservation. They were ordered laws; each one was subordinate to the one before it.

People said, “Well, that makes sense,” but in the end Asimov wrote dozens and dozens of stories, more than 80 robot stories I think, most of which pivoted around these laws, and in one story after another he showed that even with this basic kind of rule-based morality the robots would get confused or would not function properly. What do they do if they get orders from humans that conflict with each other? How would they know that a medic wielding a knife over the body of a soldier on the battlefield was not there to harm the soldier?

That got us into the weeds of moral decision making. The thing I think Colin and I were most happy about with Moral Machines, at least the thing I am most happy about, is that up to that point Ethics 101 relied largely on rational thinking as the basis of human moral decision making, while meanwhile a lot of research was emerging in moral psychology.

Of course we have been talking about the role of emotions in moral decision making for thousands of years, but there are other things, like theory of mind, empathy, and being a social being, all of these things that we take for granted within human beings but that had not really been thought through very well in terms of what their role in decisions was, when they were helpful, and when they were problematic. We got into that, and I think in the end we came up with a more comprehensive view of human moral decision making than had been expressed up to that time.

What has happened since then? That is a bit confusing, because in the middle of all this there was a small community of philosophers and computer scientists who were working on machine morality. By 2012 you had the first breakthrough research in deep learning, and by the middle of that decade the Future of Life Institute had brought together a bunch of us, including leaders in artificial intelligence, who were starting to worry again about the “singularity” because of the breakthroughs with deep learning.

They had a conference and brought 89 of us together. Three or four of us were actively involved in ethics, and the rest were basically involved as computer scientists. There were also a few lawyers at that gathering. They got some money from Elon Musk, who showed up, to finance some of the research that was supposed to make us safer in relation to computer systems.

I received one of those grants, and with that grant, together with Stuart Russell, one of the leading AI researchers, Bart Selman, and Gary Marchant, we held a series of three workshops over three years. The importance of those workshops was that they were the first time people broke out of their silos. The computer scientists had never really talked with the philosophers and the engineering ethicists, and vice versa. It was not literally “never,” but we had all kinds of people who knew each other’s names yet had no idea what the others were about.

Another person we had there, who died just a few weeks ago, was Daniel Kahneman, the Nobel Prize winner. I do not want to just drop names, but if you know the fields you will appreciate who was there; these were star-studded workshops.

The computer scientists I would have to say were much more focused on the singularity, and they were much more focused on what they could solve technologically than the philosophers and the machine ethics people, who felt that a lot of this has to be managed socially. The computer scientists started putting forward ideas like value alignment—that was the second time I heard about value alignment; I had heard about it a little earlier—and suddenly value alignment became this hot buzzword within the AI community, and it pushed all the philosophers aside, at least for a while. That is breaking down now as the AI alignment people, when they get to practical projects, start to realize, Oh, I don’t know that we are going to get value alignment without also bringing back some of the ethics.

That is kind of where we are now. What we see right now are a few people who have taken large language models and looked at how good they would be at making basic ethical decisions. I am not talking about devilishly difficult ethical decisions, but basic ones, and the results are pretty mixed. They are mixed enough that you would not want to rely on a machine that only had a large language model, even GPT-4 or its competitors, and feel secure if it were left to make decisions without those decisions being reviewed by a human who had basic moral intelligence.

SAMANTHA HUBNER: That is fascinating, Wendell.

Something else that has certainly changed a lot over the last year in particular is the international regulatory landscape of AI, with different regions taking different approaches toward managing the risks brought forth by AI in a number of different sectors. Your colleagues at the Carnegie Council AI & Equality Initiative (AIEI), such as Anja Kaspersen and Elina Noor, have traced some thought-provoking trends in how regions’ regulatory approaches compare with one another, but I am curious: Based on the arguments you put forth in this book and your mention just now of values and alignment, what do you make of the recent regulatory efforts across the globe?

WENDELL WALLACH: To be honest, I don’t know what to make of them. It is wonderful to finally see people actually talking about them, and there are a lot of good and serious people who have developed different kinds of expertise, but I fear that we are creating more confusion, maybe even intentional confusion, than we are putting in place any kind of effective governance of AI in particular, and the focus is largely on AI. There was focus on synthetic organisms, cybersecurity, and other areas of research separately before this explosion of interest in AI came along.

On one level I see some proposals being put forward that sound fairly good to me, but then I see they do not necessarily have teeth. At other times I see, for example, the Biden Administration put forward what on the surface seem to be balanced proposals, but it is not clear whether they can get anywhere beyond being presidential initiatives, which means that if a different president comes in they can disappear tomorrow.

We have a problem in terms of whether the American government will actually step in and put in place anything with teeth. I frankly think we have at the moment what I will call “corporate capture,” where industry is able to confuse the legislators and also buy them off, because they all need money to run in this day and age, in a way that leaves the corporations looking like they are going to get control of who and what gets regulated.

We did a podcast just about a month ago with Elizabeth Seger that I will refer some of our listeners to if you have not already listened to it. That podcast was specifically about whether these frontier models should be available only to the corporations or whether we should have the equivalent of open-source software. The models are not software in the same way open-source software is, but people at least know the concept: that anybody can jump in and create their own models.

My hope is that open large language models and open generative AI would democratize AI. On the other hand, the fear is that openness gives bad actors easier access, and that the corporations will use that argument as a way of ensuring that we don’t democratize generative AI, guaranteeing themselves ridiculous profits going forward for the next hundred years, if not longer. I don’t think generative AI is the only game in town by any means, but it is the one getting the most attention at the moment. Therefore that is a concern for me.

Let me go one step further. I have been one of the few people promoting the international governance of emerging technologies for years now. We put forward proposals, and we organized an international congress for the governance of AI that unfortunately was scheduled to occur a few months after COVID-19 hit the world. That truncated it into a relatively small event.

It seemed to me that we needed to start working on that years ago. We are starting to work on it now, but I cannot figure out whether the large countries are going to be cooperative when it comes to international regulation, even good, strong communication and a degree of coordination, not to mention anything that is potentially enforceable, which is what they fear the most.

I hear mixed messages, particularly from the United States and China. I hear both parties say in one ear that they want strong international governance of these technologies, and I hear about things that both parties have done in closed rooms that make me feel: No, they don’t. This is just for public consumption.

That is probably my biggest fear, that the public is not going to know what to believe. They are not going to know whether to believe these things that get said over and over again or whether to listen more closely and to hear what is not being said.

Let me give you just one example of that. How many times have you, Samantha, I would say in the last week heard that the benefits of AI are wonderful, are mind blowing, and will create trillions of dollars, maybe more than $100 trillion in capital growth in the world? I am not expecting you to answer that.

The problem is, all of us who have been in this conversation hear that over and over again. I look at it and say: “You haven’t convinced me yet that the benefits are going to far outweigh the negative consequences. It is not that there are not great benefits, but you aren’t doing the safety and security research to convince me that you have any way of stopping what I see as this growing list of negative consequences.”

SAMANTHA HUBNER: It is exactly those cross-cutting relationships, things like cybersecurity and data security, that point back to an earlier statement you made that is just as important to spotlight: AI might be in the public’s mind, but there is much more beyond AI to consider with regard to these questions.

Your book talks specifically about everything from nanotechnology to biological advances as well as new breakthroughs in neuroscience. When you think back to what you were discussing earlier with regard to the human-machine decision-making relationship, as well as the trends you are seeing in international governance, are there other aspects of the reflections captured in your book that you feel resonate especially well today with regard to technologies beyond AI?

WENDELL WALLACH: I think the book’s reflections probably resonate in all of them; it is just that those technologies are not as prominent at the moment as AI is.

Let me give you an example. The book talks a lot about synthetic biology and synthetic organisms and how you can create a new organism that then gets introduced into an environment and then collapses the environment. That is pretty frightening, but the fact is that we are already doing the engineering of new organisms because there are some areas in which it looks wonderful. We can stop mosquitos from carrying Zika or yellow fever. Who wouldn’t want to do that? If we could stop locusts from swarming and destroying massive amounts of crops in 40 regions of the world, wouldn’t we want to stop them swarming?

The problem is that even with these beneficial things we do not necessarily know what the secondary impacts are. The odds are that the secondary impacts of locusts not swarming are insignificant relative to the benefits of stopping locusts from destroying croplands for poor people in regions all over the world, but we have not done the research.

Unfortunately we now have technologies that are available to everyone in a way that lets people do gene editing. I have heard of high school students who have used CRISPR-Cas9 to edit a gene, usually for something frivolous but nevertheless something they learn from, the way we dissected frogs when I was a kid; these are the experiments you do today. That is all fine and good, but I am afraid we are going to see an explosion of this kind of tertiary research suddenly becoming a real concern.

Let me use synthetic biology as an example of that. As A Dangerous Master was being written, CRISPR-Cas9, whose basic discovery again goes back to 2012, was coming to the fore, and immediately, by 2013, 2014, and 2015, there were suddenly corporations coming into being that were bringing industrial-level research to gene editing. We only began to see industrial-level research on even healthcare in the 1930s in Germany. This is a relatively new thing.

The point is, it generally takes about ten years from the onset of a technology for it to get out of the laboratory and into the world, and we are reaching that point with CRISPR-Cas9. We are talking about a technology with which not only the industrial-level laboratories but the little home laboratories and laboratories in academia have probably done millions upon millions of gene edits, most of which are probably meaningless or mean little beyond the experimental, but a significant number of which are nevertheless going to have profound impacts. How many of those are going to be beneficial, and how many are going to be about people releasing new organisms into environments where no one has really researched what the impact will be? That is one example.

Another example is geoengineering, trying to alter climate change through technological means. That is fine if I am planting trees, which is a technological means of pulling carbon out of the air, or painting my roof white, or something like that, but when we talk about seeding the stratosphere or making clouds so that they reflect light back into space, when we talk about these large-scale projects, we have no idea what the impact is going to be.

When I was writing about that just a few years ago it was still being treated as a crazy idea, but some of the progenitors of it were arguing, “No, when all of our other tools for climate change fail”—and they are failing; let us not kid ourselves; the amount of carbon and the amount of carbon-based fuels, for example, that we are using continue to go up regardless of all the environmental measures we are taking—“when they start to fail people are going to look at these potentially more dangerous wide-impact technologies.”

I hear about it every day when I look through journals and articles that are not necessarily getting everybody’s attention, but the point is that it is no longer treated as a crazy idea. It is something that people are considering because they are deeply concerned. The same is going on with neuroscience. I could go on and on, but each of these technologies is moving forward.

What is fascinating to me, though, is how few of these problems were invisible to me when I first wrote A Dangerous Master. Very few. Certainly I was not thinking that technological unemployment would arrive in such a dramatic way, with writers no longer able to make a living.

There are definitely some new things that the technology brought about, but I think what surprised me as I looked back was that we knew most of these things were going on ten years ago. It may have been within a small collection of scholars, transhumanists, or security planners, but it is not as if most of this is new. What is new is that technologies that were more speculative are starting to be realized.

SAMANTHA HUBNER: You allude to it beautifully. This is exactly where I was hoping we would get in our conversation, a point where we could talk more about inclusion. The very name of the AI & Equality Initiative, which you work on with Carnegie Council, separate from this book, speaks to a huge part of the conversation writ large and to what brings new value to this book.

When you think about inclusion as it pertains to the workforce, or even just the way these technologies are being designed and built, are there observations from your book that you would foot-stomp even more, or observations that you allude to and think should perhaps be called out even further?

WENDELL WALLACH: I would say that the book is not as good on inclusion as it could be. It was still a reflection of its time. I was not thinking about the geopolitics of inclusion then the way I am now.

I think I was already talking about AI oligopoly and so forth. If I were revising it, I would write a few additional chapters. I looked at how much I was going to go back into the book, and I thought in the end, no. Most of what it does, it does fine, and if I go back and add to it, it gets too long, and all of those different things. I said, I am just going to write an introduction that will make this a little more accessible to people who are coming to it for the first time and give them enough of a feel for what has happened since the book was first written.

What I did not write about enough was this intersection of power and technology. I think we are on a trajectory right now where the technologies are largely about power, and all this talk about inclusion and the Sustainable Development Goals and so forth is largely lip service. I am not saying that there is nothing going on. There are researchers in Africa and in America who think they are going to solve this problem or that problem, maybe schistosomiasis, a disease that is far more likely to occur in poor neighborhoods in Africa; even though it is one of the most prevalent diseases in the world, we in America hardly know it exists. It is that kind of thing.

From what I can see, everything going on, particularly in AI but not just in AI, is making inequality worse. It is undermining inclusivity. I just got a copy of the latest book from Mark Coeckelbergh, which is about how AI is undermining democracy and equality.

Democracy is wound up with equality in a complicated way, but at least it gives some redress, and from what I can see more and more power is accruing to the elites or, for that matter, to those of us who are owners of capital. I own some stock. I am an owner of capital, minuscule compared to any number of people we could mention in this world, but I think what we are seeing is that we have created a trajectory in which the control of more and more capital will go to the top and everyone else will be under increasing pressure to find a niche for themselves in this new tech economy. That is nice for those of us who are college educated and maybe have science, technology, engineering, and mathematics (STEM) backgrounds or are privileged in the ways that I know I have been, but it is not going to be great for the rest of the world, and I don’t see anything happening yet that makes me think that this exacerbation of inequality is being addressed.

I was at a meeting at the International Telecommunication Union, which is part of the UN system. They hold a conference every year, AI for Good, and they invited me to come back last year. I was at a luncheon with many of the state leaders who were in Geneva at that time.

I am not going to mention which corporations they were, but there was a vice president of one corporation getting up and talking about all the trillions of dollars that AI was going to create and giving examples of some of the goods, and then there was somebody from another corporation getting up and telling us, the whole luncheon, probably about 150 to 200 people, about how AI was going to improve energy efficiency.

I got up and said: “I don’t care how many trillions of dollars AI is going to produce in world gross domestic product. It doesn’t mean anything unless you ensure that a significant portion of that goes down to the most needy among us, and right now we are all seeing what is happening. It is the owners of capital, particularly the 1 percent, to whom that profitability is going. It is not getting spread out. In fact we have lots of statistics to show that the profits aren’t even going to wages anymore, or at least the share going to wages is decreasing.” I said, “As far as the efficiencies that AI creates, don’t tell me about the efficiencies in energy use that AI creates when AI is becoming one of the biggest consumers of energy on the planet.”

This is where I get quite disturbed. We ain’t doing it. We have set ourselves on a trajectory which is not healthy as far as most of the citizens of the world are concerned. It will be okay for the most privileged of us, and I do not know how large that group is going to continue to be, whether it will be larger than it is now or smaller. It may be great for some of the transhumanists who feel that their technological utopias are being realized, but this trajectory is not overall healthy.

SAMANTHA HUBNER: For more than 20 years you have been a gadfly, pointing to the challenges we ought to prepare to confront in technology and ethics, some of which have certainly come to fruition in recent years. Based on what you are seeing happening now, how would you describe your perspective today? Are you more optimistic or pessimistic?

WENDELL WALLACH: I am very mixed today. It is clear that I have spent a lot of time pointing out the more pessimistic side of what can go wrong, so some people see me as a techno-pessimist, but I actually love science. There are so many things I would like to see realized, and some of the things being worked on excite me.

DeepMind solved the protein-folding problem. I know that is going to sound a little wonkish to a lot of listeners, but that was a problem that bioengineers had trouble solving for years. They have now looked at, I don’t know, 20 million proteins or something like that and given a basic structure for each of them.

There are going to be breakthroughs for the next few hundred years based on that research alone. I think if anything should have won the Nobel Prize, that should have, but I don’t think it is the AI itself that wins a Nobel Prize. I would like to see it go to Demis Hassabis and his team at DeepMind, because I think they designed the machine that could then answer the question.

We get a little caught up in thinking about AI and not understanding that AI up to this point is a social technology. It is a technology that does things that people try to figure out how to make it do. The idea of autonomy is exciting but potentially more dangerous than we want it to be.

Let me get back to whether I am an optimist or a pessimist. I just want to give that as an example of something that happens to really excite me about work going on in emerging technologies, and it is work at the interface of AI and biotech. It is not one or the other, but it has given us the clues to move forward.

I get very excited about scientific progress. I get very excited about the history of science. I love it, and I want to see on one level what we can realize, but I don’t want us to be stupid.

In the beginning the Industrial Revolution made lives worse for people moving into the cities, but we talk as if the only thing the Industrial Revolution ever did was raise the standard of living. No, not true at all. It took perhaps a hundred years before you even had cities thinking about the germ theory of disease and about how they could put sanitation systems in place.

This is my concern: If we are going to go down this path of ever more development in science, what are we going to do to ensure that it is safe, that it meets a broad array of people’s needs, and that it is not just something that gives more and more power to the richest and most privileged among us?

At the moment I am thrilled to see the excitement that AI ethics and governance is precipitating, that it is finally getting this attention. It is nice to work on something, going back to when you may have been one of only a hundred people in the world who cared about it, and now to be in a universe where tens of thousands of people are working on these things. Most of them don’t know your name or the name of anybody else who goes back to that earlier period. That is not the point. The point is that it is wonderful to see this excitement.

But I am also afraid that there is too much obfuscation going on in which the benefits are being put forward as if the risks are being addressed, and they are not being addressed. For all the talk about AI safety and security, hardly any money is going into that research compared to the billions that are going into expanding, for example, large language models and other generative AI.

What has happened in contemporary America, or at least what I have seen over the last few generations, is that there has been a dilution of responsibility and accountability, and we do not have either governmental systems or ethical systems in place where people take accountability seriously, where they ask not just what they can benefit from but what the downsides are of the research they are moving forward and what their responsibility is to ameliorate those conditions.

For years now I have stood up in front of audiences and said, “Well, do you think, let’s say, the benefits of this technology or that emerging technology generally will outweigh the risks?”

I get different responses with different audiences. Some think the risks are very great, particularly when you talk about military robotics, and others think that the benefits are truly great, but sometimes with these audiences, after I ask, “Well, do you think the benefits or the risks are greater?” I ask a third question: “How many of you think it is just not clear yet?” That generally gets the most hands going up.

I am both a tech optimist and a tech pessimist. I am a pessimist because I think it has been my job to underscore what is not getting addressed, but if those issues get addressed and taken seriously and we are accountable for the downsides, then I think we will reap great benefits from technology. If they are not addressed, then God save us all, or at least the next generations.

SAMANTHA HUBNER: I am sure our listeners have appreciated these new insights on A Dangerous Master, but I do want to make sure I ask, is there anything else you would like to share with our listeners, particularly as you gear up for this re-publication?

WENDELL WALLACH: I know that when I look at books about technology I look at when they were written, and I dismiss the ones that were not written in the last couple of years. I have come to understand that that is not the best way to look at it, and therefore I would say, yes, this book was written a few years ago, but I have come to realize that it is not only more relevant today than it was when it first came out (in fact, it may have come out a little too early, before its time) but that it has already influenced other work in all of these areas of research.

Even in this past year a couple of very informed people have told me that they have just read the book and that it has held up really well. I think I can say that to our listeners. This is probably not something to shy away from, and what you are going to learn about is not necessarily the latest and greatest technology, but you are going to learn a little bit more about how to think about technology and its impact on society.

There are all these concepts out there, like complex adaptive systems. That sounds so wonkish, but the fact is that we live in a world of complex adaptive systems, and they are affecting us. Some of them can on occasion act in ways that have a high impact but a low probability of happening. Nevertheless, it will happen.

There are trade-offs, and every time you make a choice there are detriments that you are not attending to, and it is not good enough that we just choose the greatest good for the greatest number. We have to think about what the impact of these technologies is going to be on those who have less access to resources than we do.

There are issues around how entangled these technologies are with the social systems we are moving through. We talk about the technologies as if they are something that exists in a vacuum; that has never been the case, and it is not the case now. The automobile was not just a machine that could move fast; it was a machine that totally transformed modern life. We lose sight of looking at what all the entanglements are, where the benefits are going to be, where there are actually going to be some losses, and whether those benefits are really worth what we are giving up.

It is a question of how to think about these technologies, not just what they are from a scientific viewpoint or what their historical precursors were, but how they are going to become entangled in modern life, and how we are at an inflection point that is going to change what society is so radically that most of us cannot imagine today what we are going to encounter if we come back in 20 or 30 years.

Will it all happen that quickly? Will it all be as radical as I say? Not necessarily. I have been listening to prophecies my whole life.

Twenty-three years ago I went to a transhumanist conference, and they all thought we were ten years away from little nanomachines running through our blood vessels, repairing bad cells or repairing strands of DNA in our systems and that we were all going to live to be 150 years old and that one person who had already been born was going to live forever. The speculation was out there, speculation about whether we are going to create a technological Singularity that devastates us all.

You have to be a little careful listening to the hype and listening to the prophecies. A lot of these things have not been realized. A lot of these technologies are much more complicated than we thought, and therefore adapting them is not going to be easy. When the Human Genome Project was underway they expected to find 200,000 to 300,000 genes. They found 20,000 to 25,000 genes.

On one level that seems simpler than 300,000. Actually it is much more complex, because you do not have a specific gene for every aspect of who you are. Those genes are creating proteins, and those proteins are interacting in countless ways, and we do not know whether changing one characteristic that we thought was beneficial will give rise to all kinds of other characteristics, or what we have called genetic mutations, that we don’t want. There are only a few diseases that are gene-specific, so this idea that we are going to have designer babies where you get exactly the kid you wanted is not going to happen, or is certainly not going to happen any time soon.

I have lived through generation after generation of hype. I don’t want us to believe everything. I want us to be a little bit more skeptical about whether the benefits of AI are going to outweigh the risks, but really what I want is for all of us to become fluent enough in the language of emerging technologies that we can engage when a debate comes up, for example about whether you want to give your kid a drug that has been shown to have a positive effect on, let’s say, the SAT scores of a certain percentage of kids. I am not saying you don’t want to, but you are going to need to ask some hard questions, and you are going to need to look at some intricate relationships.

You are going to need to think in a new way, and I am afraid that both in ethics and emerging technologies more broadly we have taken the easy way out. We do not want to think deeply about these questions, partially because sometimes we know we cannot answer them, but that doesn’t alter the fact that thinking deeply about them is going to sensitize us to a whole flock of aspects of the world we are moving through that we may not be aware of otherwise. I thought that was what education was supposed to be about, but it is not clear that that is what it is about anymore.

SAMANTHA HUBNER: Personally I cannot think of a more compelling statement to push our listeners to go read this book if they have not already and to really sit and contend with the observations you pose in this episode as well as in the book. Thank you, Wendell. This has been a deeply scintillating and enlightening conversation.
