The Doorstep | Protecting Cyberspace, with Derek Reveron and John Savage


NIKOLAS GVOSDEV: Welcome, everyone, to this edition of The Doorstep podcast. I am your co-host, Nick Gvosdev, senior fellow at Carnegie Council.

TATIANA SERAFIN: And I am also a senior fellow here at Carnegie Council, Tatiana Serafin. In a moment we will welcome Derek Reveron and John E. Savage, whose new book, Security in the Cyber Age, we are going to discuss, along with important issues of cybersecurity.

I wanted to mention, Nick, that we have heard through some reporting that Russia has really ramped up cyberattack efforts against the United States ahead of our quite contentious election. What are you hearing on this front before we go to Derek and John?

NIKOLAS GVOSDEV: What I am hearing is that Russia, China, and other actors are looking for vulnerabilities in our doorstep infrastructure—the power grid, water systems, and so on—as a way either to exert pressure or to create disruption, particularly from China's perspective in the event of a conflict, but also simply to exploit weaknesses.

The more we turn over the day-to-day aspects of our lives to systems based in cyberspace, expecting a robust digital network to shop, to communicate, even to do things like this podcast, the more new vulnerabilities we create for ourselves. It should come as no surprise that adversaries are going to try to take advantage of that.

Whether it is shutting down a water system, leaving a critical system unmonitored, or the fear that ships traversing our harbors could suffer a cyber disruption leading to accidents of the type that happened in Baltimore last month—I think these are all worrying prospects. Cyberspace is like air: We depend on the systems behind our phones without really thinking about them, and only when something doesn't work do we begin to pay attention.

I think these warnings are coming out to get people to focus on why this matters, which is why it is timely to have John and Derek joining us to help our audience understand what is at stake.

TATIANA SERAFIN: Absolutely. And for me, Nick, from a journalism perspective, the vulnerabilities that stand out are deepfakes and misinformation online. We will hear more about all of this in a moment.

NIKOLAS GVOSDEV: Thank you, Derek and John, for joining us. Cyber is one of those issues that seems to be perennially in the headlines. Whenever there is an accident, a problem, or something goes wrong, cyber is immediately trotted out as the potential cause or likely culprit.

I was wondering if you could distill all of that chatter for us: What really are our vulnerabilities when it comes to the cyber realm? How vulnerable are we? How disruptive is an attack likely to be? And the next time we turn on a power switch and the power doesn't come on, or we can't access our bank account, to what extent should we say this is a deliberate cyber intrusion by an adversary or a criminal organization?

DEREK REVERON: Thanks, Nick and Tatiana, for having us, and we are happy to talk about our new book Security in the Cyber Age.

That is very much how we see it: We do live in the cyber age, and everything we do in our society has been digitized. We no longer talk about e-banking or e-commerce—I don't really understand Cyber Monday around the holidays anymore. We just shop, we just bank, we just stream. Americans and our society are deeply embedded in and intertwined with cyberspace.

So I think there is a first question anytime something goes wrong. Maybe 20 years ago the question was, "Is there a nexus to terrorism?" Today the question is: "Hey, have we been under a cyberattack?" whether by a criminal group or a foreign power.

TATIANA SERAFIN: Speaking of news, here at The Doorstep we try to connect international issues with what we are feeling at home.

I just read the headline "Russian hackers claim cyber attack on Indiana water plant," and I started to get nervous, because this isn't getting wider press. I was reading a specific cyber-focused news channel, and that channel carried many more examples of attacks on infrastructure.

This was a water plant. I know in the past you have talked about attacks on the electric grid. I want to bring to the fore that it is not just cyberattacks on our shopping or banking; critical infrastructure is vulnerable too. I wonder if we can elevate the conversation about these vulnerabilities here at home.

JOHN SAVAGE: In the news recently has been a series of penetrations of our critical infrastructure attributed to a Chinese team called Volt Typhoon. When Microsoft made the first announcement that this was underway, they also said that they understood that this was preparation on the part of China in the event that the United States were to go to war with China over Taiwan.

The good news about this apparently is that the U.S. government has taken down a lot of the Volt Typhoon penetrations into our critical infrastructure. But, as you can see, critical infrastructure has now become an important target for nations against the United States. So we have to be on our guard.

DEREK REVERON: Yes, if I could just dovetail a little bit—that's why I love the name of your podcast, bringing foreign policy and national security to the doorstep—because what got me interested in cybersecurity about 15 years ago was in fact this question.

Historically, the United States has been able to promote national security and engage in foreign policy across the Atlantic and the Pacific while, with just a few exceptions, Americans at home have been largely immune to conflict.

Cyber changes all that. In fact, John Savage and I met on the Rhode Island Cybersecurity Commission. Then-Governor Gina Raimondo, now our commerce secretary, was the first in the country to create a cybersecurity commission because she understood this about 15 years ago. That's really what intrigued me about it.

So, on what you raise about critical infrastructure: It is not just the Russians or the Chinese. On a smaller scale, organized crime and ransomware have become the scourge, I think, of many industries, hospitals in particular.

You can go back about three years to the Colonial Pipeline ransomware attack, the second or third real cyber crisis of the Biden administration, which disrupted fuel deliveries on the East Coast and complicated things for about ten days until it was resolved.

NIKOLAS GVOSDEV: Derek and John, can I ask you, for the benefit of our listeners, to walk us through exactly what a ransomware attack is and what impact it has? Usually you hear it said, "Well, the data was locked up, you can't access it," and usually there is some indication that a payment is made.

But really what does it mean, again in a concrete doorstep way? If I am going to have to go to, say, Newport Hospital or South County Hospital for a procedure and ransomware has occurred, how does that impact me? And what exactly does it entail? Obviously, as you said, it is not masked physical intruders coming in, holding people, and rifling through file cabinets and the like. But what exactly is the structure of a ransomware attack?

JOHN SAVAGE: It typically begins with a social engineering attack, in which a hacker persuades a person who has access to the computer system, at a hospital for example, to give them access. Once they acquire access, they typically move within the organization's computer network until they find an account with the highest-level credentials, meaning it can access any portion of the network. They then encrypt critical files without which you cannot continue operations, and finally they put a banner up on all the computers stating that you have to pay a ransom if you want your system unlocked.

So the question one can ask is: How do you protect against this? Well, you need a good security operations center. You need trained personnel, and not only in the security operations center: You have to train all employees not to visit dangerous sites. You also have to educate the leadership of the organization, all the way up to the top, because there may be a need to make decisions about whether you are going to continue operations and, if so, what kind. Leaders are also going to be the public face of the organization, so they have to be schooled in this as well. It very quickly becomes a complex situation that needs to be addressed.
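To make that concrete, here is a minimal sketch of one heuristic a security operations center's tooling might use to spot ransomware in progress: a sudden burst of file writes whose contents look statistically random, as encrypted data does. The watched directory, thresholds, and scan interval below are illustrative assumptions, not any real product's logic.

```python
# Minimal, illustrative ransomware-activity heuristic: flag a burst of file
# writes whose contents have near-random (likely encrypted) byte
# distributions. Paths and thresholds are assumptions for this sketch.
import math
import os
import time

WATCH_DIR = "/srv/shared"   # hypothetical directory to monitor
ENTROPY_CUTOFF = 7.5        # bits per byte; encrypted data approaches 8.0
BURST_THRESHOLD = 20        # this many suspicious writes per scan = alert

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; encrypted/random data scores close to 8."""
    if not data:
        return 0.0
    counts = [0] * 256
    for byte in data:
        counts[byte] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def scan_once(since: float) -> int:
    """Count files modified after `since` whose first 4 KB look encrypted."""
    suspicious = 0
    for root, _dirs, files in os.walk(WATCH_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                if os.path.getmtime(path) < since:
                    continue
                with open(path, "rb") as handle:
                    if shannon_entropy(handle.read(4096)) > ENTROPY_CUTOFF:
                        suspicious += 1
            except OSError:
                continue  # file vanished or is locked; skip it
    return suspicious

if __name__ == "__main__":
    last_scan = time.time()
    while True:
        time.sleep(60)
        hits = scan_once(last_scan)
        last_scan = time.time()
        if hits >= BURST_THRESHOLD:
            print(f"ALERT: {hits} high-entropy file writes in the last minute")
```

Real endpoint-detection products layer many more signals on top of this (process lineage, known ransom-note filenames, canary files), but the burst-of-encrypted-writes idea is at the core of how the attack described above gets caught.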

I could continue by saying that the leadership as well as all the employees need to be schooled. If you look over my left shoulder, you will see the book that Derek and I wrote together. It is designed for a lay audience and is, I think, a very readable book. I think it is going to replace the current leading book on this topic.

DEREK REVERON: Since we are doing book plugs, let me give a plug. If it’s not obvious, John is a computer scientist, I am a political scientist, and I will say it was a very rewarding experience to write with someone from outside the discipline. I think it forced us both to make sure that our language was understandable by both communities, and it is perfect for those who want to learn more about this.

TATIANA SERAFIN: My book is on the way.

But I want to ask—technology changes so quickly. Speaking of my background, I am a journalist, and I just read the headline "Could ChatGPT be the next big cybersecurity worry?" What do we do with this rate of technological change? How do we deal with it? Everybody is talking about ChatGPT and everybody is talking about the threat of AI. Do you see these as threats or as opportunities?

JOHN SAVAGE: They are both. AI is a technology that has generated, as we know, a great deal of enthusiasm, but it is new and, as with all new technologies, there are risks associated with it. I could run through a set of very serious risks that, once you understood them, would make you very careful about deploying artificial intelligence. You would not employ it in a situation in which a life could be at risk or a business could collapse, or in an automated drone flying with a dogfighting AI system, which apparently the U.S. Air Force is considering, unless you were absolutely certain that it was going to perform as expected. And if you cannot be certain, then you need to put in safety systems that can override the AI system.

That is not a subject that has been discussed anywhere that I know of, but we know that in the petrochemical industry, where you have very high temperatures and pressures when distilling petroleum, they do use safety systems with the power to shut down the distillation operation when the parameters appear to be out of line, to prevent an explosion.
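That kind of interlock is straightforward to express in code. Below is a hedged sketch of an independent watchdog that passes an automated controller's commands through until measured readings leave a safe envelope, at which point it forces a shutdown regardless of what the controller wants; the sensor names and limits are invented for illustration.

```python
# Illustrative safety interlock: an independent watchdog that overrides an
# automated (or AI) controller whenever physical readings leave a safe
# envelope. Limits and command names are invented for this sketch.
from dataclasses import dataclass

@dataclass(frozen=True)
class SafeEnvelope:
    max_temp_c: float = 400.0        # hypothetical distillation limit
    max_pressure_kpa: float = 900.0  # hypothetical vessel limit

def watchdog_step(temp_c: float, pressure_kpa: float,
                  controller_command: str,
                  envelope: SafeEnvelope = SafeEnvelope()) -> str:
    """Pass the controller's command through unless readings are unsafe.

    The watchdog never consults the controller's reasoning; it checks only
    physical readings, so a confused AI cannot talk it out of shutting down.
    """
    if temp_c > envelope.max_temp_c or pressure_kpa > envelope.max_pressure_kpa:
        return "EMERGENCY_SHUTDOWN"  # overrides whatever the AI wanted
    return controller_command

# Example: the AI wants more heat, but pressure is already out of bounds.
print(watchdog_step(temp_c=390.0, pressure_kpa=950.0,
                    controller_command="INCREASE_HEAT"))
# -> EMERGENCY_SHUTDOWN
```

The design point is that the override path is simpler than, and independent of, the AI: a few comparisons against fixed limits, auditable by a human.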

I think we need to treat AI carefully, with intelligence, and with the aid of expertise from people who are truly informed about the subject.

DEREK REVERON: One of the things we do in our chapter on AI is begin in ancient Greece, talking about some early automatons and ideas of automation, and then of course pick up the thread in the 1950s. The promise of AI has been around for a very long time. Now there are these large language models and applications like ChatGPT—in fact, our book cover was generated using an AI tool. It's extraordinary.

I would emphasize John’s point that technology is neutral. Tatiana, with your background, you can think about the printing press. Good or bad? When that was invented, some saw it as great and others saw it as a threat.

I think AI will play out very similarly. At least we try to lay out, as John explained a bit, why guardrails are important. Development is also highly decentralized in terms of who builds AI, not only within the United States but around the world. I saw a bill moving through Congress that would try to put in place some national standards for AI, and I know the Europeans have been doing something similar. We'll see how that develops.

But clearly it is a marvelous technological feat that will make some things easier and some things harder.

JOHN SAVAGE: Also, I’ve looked at your report on the trade-offs of AI and diplomacy, and I think that is very well written and it is some excellent advice that should be followed with respect to diplomacy, and some of those same ideas that are articulated there I think are relevant in any application of artificial intelligence.

NIKOLAS GVOSDEV: Artificial intelligence and the larger revolutions occurring in automation raise the possibility, Derek, as you know, that we can begin to reduce the number of humans needed. I'm thinking here specifically of military and civilian vessels at sea, the argument being that we don't need so many people onboard to run a ship, whether it's an aircraft carrier or a container ship.

But with the recent accident in Baltimore, with the near miss in New York, and with the problems we have seen as navies that cannot fully staff vessels rely on automation to take up the load, what are the risks there? Are we on the verge of a situation where a cyberattack compromises a system and creates a major accident, or where we lose control of a system because the AI lacks the shutdown protocols that allow a human to override it?

Again, technology may be neutral, but as we start to think of these technologies as labor-saving devices, what are the risks that we are opening ourselves up to?

DEREK REVERON: It's a great question, again hotly discussed and debated, and there are probably two camps: the boomers and the doomers.

The boomers do see this as a way to revolutionize productivity, redirect human labor to uniquely human things, and let machines and automation do the dull work for us, whether that's moving packages or—I have never understood the obsession with robotaxis. I like driving, so I don't quite see why getting rid of driving is a good thing, but that's me.

And then there are the doomers, who really have visions more like Terminator in mind. The more plausible example, which you raise, is that a modern containership might have a crew of only about 20 people, compared to a destroyer's roughly 300. A lot of the reason for the crew of 300 is not only to do the mission but, I would say, damage control when things go wrong, just as a modern aircraft has a pilot and a copilot even though there is an autopilot. So we have had various forms of automation all along.

To John's earlier point, when do you decide, if at all, to take the human out of the loop? That's where I guess it becomes dangerous, because people make different assumptions. I have had several students who want to write about AI and this great future—so the boomer side—where you have Commander Data or C-3PO, a benevolent AI robot helping humanity become better.

I always ask, “Why do you think the machine will do it better than a human being? The machines or these large language models are being trained by everything that human beings have created.”

That is one of the things that John and I keep talking about: Yes, AI is certainly wonderful, but it carries all of the bias in the data used to train it. John has a great example of how easily AIs can be deceived. They are really good at pattern recognition, but when you give them something they don't know, they get confused or make mistakes.

John’s stop sign example is kind of what I had in mind.

JOHN SAVAGE: The stop sign example was created at Berkeley by students of Professor Dawn Song. They trained a visual recognition system's camera on street signs, and then the students put two strips of black paper and two strips of white paper on a stop sign. When they approached it at an intersection, it was read as a "45 miles per hour" sign. When you see that kind of mistake, you realize that this is dangerous.
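The Berkeley demonstration used physical stickers, but the underlying fragility is easy to show digitally with the standard Fast Gradient Sign Method (FGSM), which nudges every pixel slightly in the direction that most increases the classifier's error. The toy untrained model and random image below are stand-ins, so treat this as a sketch of the mechanism rather than a reproduction of the stop-sign result.

```python
# Minimal Fast Gradient Sign Method (FGSM) sketch in PyTorch. The toy model
# and random "image" are stand-ins; on a real trained classifier such a
# small, nearly invisible perturbation is often enough to flip the label.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy classifier
model.eval()

image = torch.rand(1, 3, 32, 32, requires_grad=True)  # stand-in sign image
true_label = torch.tensor([0])                         # pretend 0 = "stop sign"

# Ask how the loss changes with each pixel, then step each pixel a tiny
# amount (epsilon) in the direction that increases the loss.
loss = nn.functional.cross_entropy(model(image), true_label)
loss.backward()
epsilon = 0.03
adversarial = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0)

with torch.no_grad():
    print("original prediction:   ", model(image).argmax(dim=1).item())
    print("adversarial prediction:", model(adversarial).argmax(dim=1).item())
```

What makes this unsettling is how small the perturbation budget can be: changes a human driver would never notice can be enough to turn "stop" into "45 miles per hour" for the model.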

The other example I like to cite is about automation in general. Margaret Heffernan, in a TED Talk a number of years back, said that if you automate, you need to really understand what situations are likely to arise. We are living in a complex world, meaning small changes can have large effects. If you do not, cannot, or don't think you can fully anticipate the environment in which an automated system is going to operate, then you are going to have consequences, and if they involve people, that means high impact even if low likelihood. What that means is that in those high-impact situations you must have a human in the loop.

TATIANA SERAFIN: I certainly hope that we keep people in the loop. I think that is really important.

In our discussion so far we have mentioned, I think, three different layers where we could potentially get frameworks to put some of the safety measures you discussed in place: There are multinational and multilateral frameworks, with the European Union working on something; there is our U.S. cyber ambassador, who just got some hefty funding from the Biden administration to, hopefully, do some good; and then there is your own work, which began at the state level with the Rhode Island Cybersecurity Commission.

We can discuss all three, but I wonder if we could start at the top. I have heard you speak about the different approaches some countries are taking—Russia or China trying to control and manage and not allow open access, versus the more open American model. How do we find the middle ground? Is some sort of middle ground being worked on? And what are we doing at the national and state levels, given how much tension there is these days between the federal government and the states here in the United States?

DEREK REVERON: Let me start. I often think about the "Three Bs" to frame the international picture: Beijing, Brussels, and then "business" for the United States. I guess I could say "Boston," but I know Boston is not quite the center of American commerce, though it once was.

From a Beijing perspective, they really promote ideas of digital sovereignty—putting borders in cyberspace and ensuring that the Internet and individuals do not pose a threat to Chinese Communist Party rule. Decisions on information technology are really made with protecting the party's hold on power in mind.

Then there is the Brussels perspective, which I would say is really about how you protect the individual. The individual rights that exist for EU citizens are the best in the world. California in fact developed its own California Consumer Privacy Act based on the European Union's 2018 General Data Protection Regulation (GDPR). So if you are a Californian or a German, you have great Internet privacy. Those governments really want to protect individuals.

Then you have the case of the United States, where the U.S. government takes a laissez-faire approach to business: It is Microsoft, Meta, Amazon, and other web-services companies making the standards and letting their systems develop with their own internal ethics.

Those “Three Bs” are really playing out.

In the U.S. case, I am generally pessimistic about passing laws, for a lot of reasons, but when it comes to cyber it has a lot to do with the general supposition that U.S. companies lead in cyberspace and AI because of very limited regulation. There is generally a fear of regulation in the United States, and I think even more so for information technologies.

You referenced our U.S. Ambassador at Large for Cyberspace and Digital Policy, Nathaniel Fick. In contrast to the Chinese or Russian approach—and I would even say India's, where they promote digital sovereignty—Nate Fick is really trying to promote the idea of digital solidarity, where the United States and like-minded countries—Japan, South Korea, Australia, European countries, Canada—come together to recognize that what we are talking about is generally an open forum where data moves easily and freely and there should be no restrictions.

I think trying to put in safeguards now to protect individuals, as the Europeans do, is probably the next hurdle. I know California does it. Some cities have outlawed the use of AI in criminal proceedings, for example, because facial recognition has been shown to exhibit bias.

So we will see all of those things aggregate, and at some point the U.S. national government will probably get closer to giving citizens rights in cyberspace much as the European Union has.

JOHN SAVAGE: I think cybersecurity is not just a domestic problem but it is an international problem, as we have said here.

I think the right approach to bringing nations to the table on common understandings is through their interests.

When I was at the World Internet Conference in China in 2015, President Xi gave the keynote address, in which he argued for digital sovereignty, meaning he did not want any nation to interfere with China's management of the Internet within its borders.

The Peace of Westphalia defined the concept of sovereignty, but unfortunately today nations are intruding on the politics and the safety and security of other nations. When those attacks are made against critical infrastructure, in my opinion that is a violation of sovereignty, and it could, I think, justify the use of force in response. Derek can better comment on that.

The point I am making here is that we are in a very dangerous time because these other nations—the Russians, the Chinese, the Iranians, and the North Koreans—are engaged in what I consider to be very dangerous behavior and it needs to be regulated.

DEREK REVERON: To bring it back to the doorstep, there are four members in my household and three are avid TikTok users. I am not—not for any political reason; I just have a limit to my social media consumption.

Right now Congress is considering—I think they have passed—legislation providing that if the Chinese company ByteDance does not divest TikTok in the next six to twelve months, the app will be banned. We have seen this before: Four years ago the Trump administration attempted to force a ban or divestiture of TikTok. Other countries, India for example, banned TikTok without much controversy.

I was trying to think about the doorstep issue here, and this is one of those constant conversations in my household: Is TikTok good or bad?

I say: On one level, the international level, I see the TikTok ban as a question of reciprocity. China bans Facebook and other social media, so I see it as a trade issue, a matter of reciprocity: "Okay, don't ban Facebook and we won't ban TikTok." That is one level.

On another level, you look at the competitive landscape. TikTok was, I think, one of the most downloaded apps last year, so if you are an American company like Meta, this concerns you, and you lobby the government, using this issue to say, "Look, you need to protect American business."

And there are probably a couple more social dimensions I could speculate about. One is that in China they limit children's consumption of social media, and the algorithm there is tweaked to point more people toward podcasts like yours, toward learning, rather than the funny cat videos we are seeing in the United States. So there is a fear that China is trying to make Americans dumber through TikTok, which again is a little silly, because we will find our funny cat videos on YouTube or on Instagram Reels.

Finally, there are probably two things in a negative national security sense. We got a preview of this about a month ago when TikTok sent a message to tens of millions of its users encouraging them to lobby Congress to prevent a TikTok ban. This is a case of the Chinese Communist Party potentially directing a state-owned company with direct access to Americans to lobby the U.S. government on an issue that is unfavorable to the party.

And then maybe the most obscene thing I can think of: If TikTok is on 170 million Americans' phones and devices, then in the event of a conflict those devices could potentially be disabled through malware embedded in TikTok. That is one possibility, but I would also say that any other app sitting on our phones could probably be compromised.

I don't know if it is easier because they are China based—we certainly have enough examples of foreign governments compromising U.S.-based technology—but that to me is one of the doorstep issues in this question about TikTok. I don't know how anyone else feels about it, but as I said, I am usually the losing vote in my house on TikTok.

TATIANA SERAFIN: We will see what happens with TikTok.

We would love to continue this conversation with you and our audience online, and we look forward to getting your book, Security in the Cyber Age. Thank you so much for joining us today.
