Unlocking Cooperation: AI for All

JOEL ROSENTHAL: Good morning. Welcome, everyone. My name is Joel Rosenthal, and I have the privilege of serving as president of the Carnegie Council for Ethics in International Affairs.

To begin, I want to recognize the entire team at the United Nations University (UNU) for their partnership in organizing today’s event, especially Rector Marwala and our moderator today, Dr. Fournier-Tombs. I would also like to express my sincere gratitude to one of our panelists, Vilas Dhar, president of the McGovern Foundation, who is not only a visionary leader on questions of ethics in artificial intelligence (AI) but also a great friend of the Council. Thank you for your support.

We are also pleased to welcome to the Council Doreen Bogdan-Martin, secretary-general of the International Telecommunication Union (ITU), along with the co-facilitators of the Global Digital Compact (GDC), UN Ambassadors Chola Milambo of Zambia and Anna Karin Eneström of Sweden. Thank you all for joining us.

I should also mention that we are being joined online by over 500 people in 75 countries, and that is thanks to the work of Kevin Maloney and the entire Carnegie Council communications team. Thank you, team; thank you, Kevin; and thank you all out there for watching.

Guided by our mission to empower ethics in international relations and informed by the upcoming Summit of the Future, today’s event is part of Carnegie Council’s “Unlocking Cooperation” program, a series of convenings exploring new pathways to revitalize multilateralism. Like the Council’s founder Andrew Carnegie, those who crafted the UN Charter were both realists and idealists. These visionaries understood that in the practice of international politics, power and ethics are inseparable and must be considered together. Only then can we create the conditions necessary to tackle shared global-scale challenges. Despite this trying moment for geopolitics, it is my strong belief, as evidenced by the very individuals in this room, that the United Nations can still serve as a powerful force for good in addressing the moral imperatives of today and tomorrow.

With that, it is my privilege to welcome to the podium the Under-Secretary-General Tshilidzi Marwala, the rector of the UN University. Welcome.

TSHILIDZI MARWALA: Good morning, excellencies, ladies, and gentlemen. This is actually my second time coming to the Carnegie Council. The first time I came was to introduce myself to Joel, and of course as part of the introduction we decided that we were going to do a joint event together. I am quite happy that a few months later we are here.

As we stand on the eve of the Summit of the Future we gather today to explore how AI can be a powerful tool for addressing the world’s most pressing issues. AI has the potential to bridge the gaps, transform industries, and contribute to sustainable development in unprecedented ways.

This distinguished panel, including ambassadors and our leader, who has been a pioneer for AI for Good—and it has to be for good and it has to be for all—can ensure that these technologies benefit all people and not just a privileged few. Today’s panelists, all leading voices on AI governance, remind us that AI should not exacerbate existing inequalities but must serve as a tool for global inclusivity. I believe that through AI for All we must address the digital divide, particularly between the Global South and the Global North, where data gaps can undermine fair and equitable applications of AI.

It is especially fitting to discuss AI for All here at the Carnegie Council for Ethics in International Affairs, a place dedicated to advancing ethical discourse on global challenges. As AI continues to shape our future, forums like this are crucial for ensuring that AI development and governance are aligned with the principles of fairness, equity, and justice that are enshrined in the Universal Declaration of Human Rights and the UN Charter.

AI literacy is absolutely crucial and education is key to ensuring that AI is used responsibly and ethically. If decision makers and practitioners understand both the potential and the limitations of AI, they will be better equipped to use it as a tool for solving complex problems from climate change to poverty and not as a tool for destruction.

At the UNU we are dedicated to educating the next generation of AI leaders by equipping students with the knowledge and tools necessary to engage with AI in a responsible and inclusive manner. In this regard we are going to be launching our new AI Institute in Bologna. By offering research, policy recommendations, and degree programs, the UNU is helping to build the capacity needed to navigate the opportunities and challenges posed by AI. This work is essential to ensuring that all nations, regardless of their level of technological development, can meaningfully participate in all the good things that come out of AI, if I were to paraphrase Doreen.

Global cooperation is critical for governance of AI. Addressing the governance challenges of AI requires cross-border collaboration. It requires education and changing behavior. It requires creating incentives so that AI is used for good. It requires building institutions that are going to govern AI. It requires policies and regulations, and ultimately it requires new forms of laws. This idea of educating the policymakers is actually not a luxury. It is absolutely a necessity.

At the heart of our efforts to responsibly manage AI risks is obviously the International Scientific Panel on AI. I see that at last their report is out. I think the big challenge is actually implementation: How do we ensure that all of these good things that Member States have endorsed can play a crucial role in ensuring that absolutely no one is left behind?

It is therefore imperative that we prioritize development of robust, globally accepted methods to measure AI risks, ensuring that we understand the full scope of AI’s impact from economic disruptions to ethical concerns. While this is a complex and challenging task, it is not one that we should shy away from. As the Summit of the Future approaches, we must focus on how AI can help achieve the Sustainable Development Goals (SDGs), whether those be for climate action, gender equality, or for peace and security.

As far as I am concerned, the issue of data is very important. Cross-border data that has bearing on sustainable development must actually be allowed. Of course, the safety concerns must also be addressed.

The responsibility to guide AI development in positive directions falls squarely on the shoulders of the international community. We must take a proactive stance to ensure that AI technologies are developed in ways that uplift humanity, protect human rights, and promote global well-being. The Global Digital Compact offers an opportunity to create shared principles and norms that can guide AI evolution toward ethical and equitable outcomes, and these processes at the international level must actually be harmonized with what is happening at the regional level. The African Union has just adopted the African Digital Compact while we have the High-Level Advisory Body (HLAB) on AI, and the European Union has just adopted the AI Act. If these initiatives are not harmonized, we actually lose the opportunity of governing this technology efficiently.

In line with the upcoming Summit, today’s conversation should inspire us to think critically about how we can collectively ensure that AI empowers all nations, fosters equality, and addresses the global challenges of our time.

Lastly, if you are in Tokyo, please come and visit the United Nations University and let’s have a conversation. Thank you very much.

ELEONORE FOURNIER-TOMBS: Good morning, everyone. It is genuinely an honor to be here today to welcome you and have the opportunity to moderate this dialogue on the future of artificial intelligence, taking place right now on the threshold of the Summit of the Future. Today’s panel will discuss the nuances and the impacts of artificial intelligence on global governance as well as recommendations to global policymakers today as they make critical decisions on the future of AI.

We are very fortunate to be joined today by four very thoughtful and experienced global leaders, each of whom has played a key role in shaping the current approaches to mitigating AI risks and harnessing its opportunities. Today we will have one, maybe two, rounds of questions for our panelists to consider, after which we will open the floor to questions from the audience.

Without further ado, let me turn to our first two speakers, Ambassadors Eneström and Milambo. Your excellencies, as co-facilitators of the Global Digital Compact, you have had unique insights into the potential of AI to revive multilateralism. As we approach the Summit of the Future, do you still feel optimistic about the future of international digital cooperation, and what positive or negative effects do you think it might have on global multilateralism more broadly?

ANNA KARIN ENESTRÖM: Thank you so much, Eleonore, and it is a pleasure for both of us to be part of this panel. This is actually a very crucial day as we are approaching the Summit of the Future on Sunday, and I think most of us will be busy from tomorrow with ministers arriving. We are in the last stages hopefully of agreeing on the Pact of the Future with the annex of the Global Digital Compact. Please send your good wishes to all of us during the crucial days that remain before the Summit.

I also want to say that I think our work has been really facilitated by the fantastic work that has already been done within the United Nations, especially by Doreen and the ITU.

We have spent a lot of time traveling. We have been to Geneva many times, and Doreen has actually been a supporter through our process, so I just want to make it clear from the beginning that there is already a lot of work being done in the UN system, not least by the ITU. I think our work is very much to pull these threads together and see that we put together something that is a comprehensive framework for adoption by all Member States.

I would say, yes, I am optimistic because I know that there are such strong forces within the UN system and elsewhere. I am optimistic because this has been a process that has been going on for 18 months, and it has been a truly inclusive process. We have had so many discussions with a range of multistakeholders from the deep dives that we did with 500, 600, or 700 participants, where civil society, private sector, and the UN family were on equal footing with the Member States.

I think it has also been a very positive process because we feel there is a real need to have this agreement. The SDGs were mentioned, and of course we know—and I am always using what Doreen says: “If we use digital tools in the right way, we can push the implementation of the SDGs by 70 percent.” This is where we are. We are at a crucial time where we can choose if we want to have a more equal world or if we want to broaden the inequalities in the world. We need to use digital tools, including artificial intelligence, to bridge those gaps.

I am optimistic. I agree with the rector that the challenge will be implementation, but again I am very confident with Doreen at my side and the UN organizations and of course other multistakeholders that we will have to push this implementation, especially when it comes to artificial intelligence and the governance of artificial intelligence, which will go to the General Assembly for further discussions and decisions.

CHOLA MILAMBO: Let me add to what Anna Karin just said. Let me thank the Carnegie Council, the UNU, the ITU, and of course the High-Level Advisory Body. A lot of the work we look to is actually from the report itself. Congratulations on the launch that took place this morning.

Positive, yes. I do join in the positive outlook on the GDC and on multilateralism because of this moment. We are at a very critical moment. If you look at just the way technologies are advancing, whether it is in quantum, AI, or even on the infrastructure side—look at the size of microchips now; they are really becoming “micro” chips in a real sense, with sizes measured in nanometers—that is going to change a lot, so we stand at an inflection point of technology and development.

If you look at the world over the past 1,000 years, and especially the last 200 years, our level of development has been tied to the level of technology. It defines the level of income we have in different countries. On that basis, you can also argue that inequalities in technology across countries define the level of inequality, so we have an unequal world, a fragmented world.

At the same time, we do not have a unified system that governs these technologies more at the global level, and the AI platform is one of the first that looks at it as a truly global intervention on an aspect that is absolutely critical, so it is a very important moment, like Anna Karin just put forth.

In terms of the process, as she said, it was inclusive and transparent, but it was also difficult. The difficulties, however, do not compare to the scale of the challenges that are out there. We are talking about countries like mine that are lagging behind. We are talking about 2.6 billion people who are not connected, and that does not compare with the differences we perhaps have in this room. On a net basis, we feel positive about the outlook now.

When it comes to the issue of multilateralism, to a large extent it rests on our inability as countries to close what I call the “empathy gap.” We need to have more empathy among countries about our various situations. That in itself rests upon a deficiency in appreciation and understanding of each other’s cultures and situations.

How do you close that gap, and how can digital help to close it? From my perspective, if you look at it in terms of a reaction function or theory of change, you are asking: Could digital collaboration create greater awareness of what is going on across countries? Could it enhance understanding between cultures? Could it narrow the empathy gap by helping us understand and exchange information on how we live and how they live—and it is not “we” and “they”; it is actually us; it is our world?

When you close that empathy gap I think in terms of foreign policy you tend to find it easier to understand and find common positions if you have greater empathy on the other side. I think digital can be a force for greater empathy.

An important issue is the caveat that the professor put across, that we need to have the safeguards in place to ensure that this is used for good and for all.

ELEONORE FOURNIER-TOMBS: Thank you, your excellencies. It is heartening to hear optimism from both of you and in a way your trust in this very inclusive process and to think that AI also has the potential for reducing global polarization and increasing our faith in universalism, which is something that is really at the heart of the United Nations. Thank you for that.

I am turning now to Secretary-General Bogdan-Martin. ITU has played a leading role in shaping the global AI governance landscape from the AI for Good conference to global standards and in numerous technical and policy initiatives in Member States. What is your perspective on the future trends in AI developments, and what challenges might the UN system address after the Summit of the Future?

DOREEN BOGDAN-MARTIN: Thank you and good morning. Thank you, Joel and professor. It is always nice to see you. It is great to be here this morning among such a distinguished panel. I understand we are sort of the warmup act. After this we are going to get deeper and deeper, and it is going to be an exciting couple of days.

I think, picking up on what Ambassador Anna Karin just mentioned, I too am an optimist. I think AI can be for good for all and can benefit humanity. I like that Churchill expression, where he says, “The pessimist finds difficulty in every opportunity and the optimist finds opportunities in every difficulty.” I think we have huge challenges, but we have incredible opportunities.

When we look to the future, AI is going to keep moving faster than we can, so we have to make sure that we are not pushing for over-regulation or stifling innovation but are finding ways to balance risk and manage safety concerns. That speed is a trend that will continue.

The other trend we are starting to see is how AI is converging with other technologies, whether it is 5G or 6G, the Internet of Things, or quantum, and that brings new opportunities but also new challenges I think we will have to face. Again, thank you for using my 70 percent figure. This also comes to the point about implementation: we definitely see that AI can be used to help accelerate every SDG and 70 percent of the SDG targets, so I think we have to be investing there.

Post-summit, I think your point on implementation, professor, is critical. We have to be focusing on implementation. Again, that is where I am optimistic, because there is a lot there that we can build on, but we have to double down and tackle the gaps. Of course the biggest gap is the access gap, whether that is access to computing resources, access to infrastructure, or the digital divide of 2.6 billion people who are not connected. Then there is the skills gap, which is absolutely critical. In so many cases we are connecting people, but without basic skills and without more advanced skills, connectivity cannot be leveraged. So I think we have to be very focused on the skills piece, whether that is capacity development or, as you mentioned, training for policymakers.

We are launching and are quite excited about an AI skills coalition, and we are trying to tap into that policymaker gap as well as the public-at-large gap, so we are hoping to bring a number of partners within the UN system but also outside the system to address that gap.

I think the other piece, and this comes to harmonization, which was mentioned by the previous speakers, is the standards front. Standards are the basis for building guardrails. We need to do more on harmonizing standards. This is a space that I think ITU has occupied well for many years, not on its own but in collaboration with other standards-making bodies. We have something called the World Standards Cooperation with the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC). We have a specific effort focused on multimedia authentication and deepfakes. This is a big issue, and we are very focused on that space.

We also have our first-ever AI standards summit, which is taking place in a couple of weeks in India on the margins of our World Standards Assembly, so we are looking forward to strengthening that standards piece because standards have to be the basis as we look toward further strengthened guardrails.

I think the other piece we have to be looking at, when it comes to balancing innovation with safety concerns, is making sure that we build in human rights—I look at my friends now, more than one, in the back of the room from the Office of the United Nations High Commissioner for Human Rights (OHCHR). Let’s remember to keep the human rights piece at the core. I think that is absolutely fundamental.

ELEONORE FOURNIER-TOMBS: Staying in the vein of optimism, Mr. Vilas Dhar—I am thinking now also about your role as a member of the AI Advisory Body, which just launched its report this morning—you have always been very optimistic about the potential of AI to address the SDGs, and we have spoken a lot about them. The AI Advisory Body makes a number of concrete recommendations that should be and need to be considered by global policymakers. Could you tell us a little bit more about what is at stake in terms of adopting these recommendations?

VILAS DHAR: Thank you so much, Eleonore. Good morning, everyone, your excellencies, dear friends. What a joy it is to begin to kick off this intense week with people you admire and respect so much—Doreen, your excellencies, of course, Professor Marwala, and Joel—in an institution that I think is thinking about not just AI as a technology but about the ethical considerations of it.

This morning we released the UN’s High-Level Advisory Body’s final report on AI, and I assume I could tell you plenty about what is in pages of documents that are built on tens of thousands more, but instead I would love to share with you what I learned this year. One of the great privileges of my professional role is that I get to spend a lot of time in communities with policymakers and technologists but also always carving out time to meet with students, community members, farmers, philosophers, and thinkers. I have been struck by the fact that in my conversations just this year from Spokane to Spearfish, South Dakota, from Vientiane to Ulan Bator, from Lusaka to Stockholm, every conversation that I have on any topic at some point will have a question about AI. It has captured the public imagination. It has created a universality of a linguistic construct, a space for us to ask questions about AI, which inevitably become questions about access, representation, participation, ownership, and about opportunity.

Young people will say: “I am in school and it is all of the things you might imagine about a rural last-mile environment, yet I went on YouTube and learned about what a large language model might let me do. I am interested, and I have an idea.”

There is something about this that is easy to forget when we think about policy as a construct of legal mechanisms or governance, or about technological innovation as a construct of code or compute. At the end of the day, this is a moment in time where, at least in my lifetime, I am able to engage with a topic that everybody wants to be a part of. That starting point then gives us an opportunity to come back to all of the topics, not merely in the Global Digital Compact but in the Pact of the Future, to ask questions about how we actually design a future that works for everyone.

We sit in an institution that talks about empowered ethics. I say often that we too easily attempt to define ethics as a technological mechanism. We talk about ethical technology, responsible technology, and technology for good. To me none of those hold weight. Ethics is a human function. The decisions we make about technology require human responsibility and ownership, and the outcomes that we define have to be defined by human aspiration.

I share that with you as framing as we go into the next few days to bring a civil society context, a human context, to the work we all do. There are very important milestones ahead, from the ways in which we in a multilateral context begin to come together to, as we continue to say, “constrain risks and explore opportunities,” to much more fundamental questions of how we reenvision the process by which we make decisions about a set of tools that will fundamentally change the human experience. In that frame, in setting that moral compass, in setting an ethical outcome for a participatory way to define this, we then come to: What are the tactics necessary in order to get there?

Here the High-Level Advisory Body report puts forward some constructs. I think in many ways it is very important to note that those are built on the incredible work of institutions across the UN system and across the planet that have been doing this. Doreen, I look always to your visionary leadership at AI for Good, starting well before AI entered the public discourse in the way that it has, to institutions like the United Nations Educational, Scientific, and Cultural Organization (UNESCO) that I think under the leadership of incredible folks like our dear friend Gabriela Ramos have created mechanisms for global majority participation and broad-based ownership of building national AI capacity, and thinking about the new institutions that are necessary.

The report puts forward a number of constructs and forums, I think with the intention not that they are definitive requirements for the future but that they are crystallizing moments that will ask fundamental questions: How do we bring together a deep, nuanced understanding of the scientific possibilities of these technologies in a way that can inform all of us, not merely policymakers but broad public discourse? How do we create forms and functions that normalize and harmonize the incredible ecosystem of activity that is happening around the world? How do we create coordination mechanisms that recognize that even as this work happens at scale, there are elements of this conversation that will require the participation of multilateral international institutions; that recognize that even if governance is not our moral aspiration, it is still a necessary outcome; and that such governance cannot happen without participation not only from technological innovators but also from communities on the frontlines of this work?

I will close with this. You started with a question that I absolutely love and I think all of us on stage here own, the idea that we are optimists, but optimism does not happen in a vacuum. To me optimism is guided by the capacity and possibility of technological innovation, deeply informed by our ethical norms and the ways in which we make decisions, and finally and maybe most importantly a belief that human systems, human resilience, and human aspiration can guide the many, many possible outcomes to those that actually create a better world.

ELEONORE FOURNIER-TOMBS: Thank you very much. It is interesting that you start with the importance of keeping in mind the human experience. Having worked as a data scientist for the last ten years, I have seen that it is only in the last two years that AI has become mainstream, something at the heart of what many people around the world worry about when they think about climate change and the economy. They look toward AI to hopefully help with these challenges, not to increase inequalities and add additional risks. That, I think, is what is at stake for us now.

You spoke also about coordination. We have had intense discussions at the United Nations about other complex governance mechanisms, such as the Intergovernmental Panel on Climate Change and its impact on developing a common global understanding of the state of the climate. In parallel we have in the GDC the International Scientific Panel on AI, which also would aim to develop continuously a common understanding of AI, its risks, its opportunities, what is coming up, and new trends and challenges. What do you think might be the best form and function for this panel and maybe what pitfalls should we try to avoid? This is a question for all the panelists.

CHOLA MILAMBO: Thank you very much for that question. Let me answer from the angle of mechanisms, which was raised very well.

On the issue of implementation, it is going to come down to having robust mechanisms, the right mechanisms, in place. Once the GDC finally goes through—we are very hopeful it will go through in the next few days, and we want many stakeholders to endorse it—the next thing to look for is the content in there, which is the clear commitments, and the GDC is very clear on several commitments: What are the mechanisms, and how can we translate those commitments into active mechanisms?

One of the mechanisms is clearly in the recommendations that come out of the AI report: to have an independent panel on AI, to have a global forum for engagement, and possibly some areas for implementation in terms of capacity building.

One of the pitfalls we must not fall into is insufficient representation in these mechanisms. They need to be truly global and reflect all of the voices that are impacted by or contributing to AI, including those who are not yet developers of AI. Large language models must represent all voices, so language itself is an issue; there should be no gaps. These are the key issues we usually see in existing mechanisms in other sectors, including finance, where representation does not reflect the true diversity of the world. For me, that is one of the biggest issues: the governance. The independence of the panel is going to be very important, so that the recommendations coming out of it truly reflect the voices of many, especially those who are not sufficiently represented.

ANNA KARIN ENESTRÖM: I agree very much with what Chola said, but maybe just a few points. First, I want to congratulate you on the report and also say how important that report, the work you have done, and the contacts we have had with you throughout this process have been for our work. Of course, we built the proposal on the Scientific Panel that is there—it is not yet agreed—very much on your work.

I think what Chola is saying on the inclusivity is extremely important. You said that AI comes into all conversations. We are looking into challenges for the future. Of course AI brings with it a lot of opportunities, but there are also challenges, and there needs to be an inclusive process in taking decisions on the future for everyone when it comes to artificial intelligence.

I would say that the Scientific Panel, which will of course need to be decided in the General Assembly—and there will be a lot of discussions about that—has to be truly scientific and inclusive, but there also has to be a link between what is being discussed and the policy decisions taken by Member States and others. I think that link will also be very important. That is of course in your department.

DOREEN BOGDAN-MARTIN: I totally agree with the ambassadors. I would just add that, in terms of independence, impartiality I think is also critical for this Panel. It will be important that the Panel is multistakeholder and representative of all sectors, as AI cuts across all sectors—so multisectoral. Multidisciplinary is also going to be critical, and it has to be globally representative so that we do not exclude certain groups. I think that kind of composition will be critical.

I think when it comes to pitfalls, coming back to your question, we should be careful about constructing something that gives the appearance of “one size fits all.” There need to be elements of agility. I think we also have to be careful not to reinvent things. There is a lot, as has been mentioned before, that is happening and that should be leveraged for this purpose. Our Interagency Working Group within the United Nations, which the ITU has the pleasure of co-leading with our friends from UNESCO, has made important contributions to the Panel and to our co-facilitators, and it could continue to offer inputs to such a Panel, leveraging what is there and being careful not to construct something that is not representative of the real needs we see today.

ELEONORE FOURNIER-TOMBS: A follow-up question on that, the idea that you don’t want it to be “one size fits all,” which of course makes a lot of sense, and I think ITU and the United Nations have been supporting the development of local AI ecosystems and so on. How do you see that translating into the outputs of the Panel in terms of reporting, research, and so on?

VILAS DHAR: A couple of things I think are worth noting here, Eleonore. The first is, let’s acknowledge the social construct of where we are getting information about AI today and recognize that despite the distributive centers of excellence that are all across the world, where new models are being developed and new foundational systems are being set up, we are still subject to the whim in many ways of a small set of voices who happen to be the loudest in the room and who are motivated as much by commercial outcomes as by scientific research in what they say about what AI can do.

This cannot be how we make good policy at scale. Let’s start by acknowledging that science sits apart from the puffery of commercial needs. Let’s also acknowledge that as dominant as parts of the world are, with their great little names, “Silicon something” attached to them, there is incredible scientific capacity across the planet that needs to be part of the global conversation. In many ways I am just saying again what our excellencies have shared with us, but the idea of inclusivity here is not merely a moral value; it is a practical necessity. It is a critical need to get good scientific information. What do our current research agendas on AI empower us to do today, and what are the expectations we might have of what comes next?

To be honest with you—I am a computer scientist by training and I spent nearly two decades in this work—today I don’t even know that I can go to a single institution where I feel like that incredible comprehensive scientific research-driven view is easily apparent to me as an everyday citizen. That is not to say that it has not happened, and there has been incredible work from standard setting, use cases and exemplars, and from great scientists who have stepped forward.

What an incredible opportunity then for the United Nations to take on the role of saying, “Let’s actually bring the intersectionality of human innovation in science to policymaking, let’s do it in a multistakeholder setting that says, ‘This isn’t a conversation merely about AI,’” but as you said, Doreen, “about AI as it intersects with experience, scientific inquiry across any number of domains, and build a structure by which we can build a credibility base for public discourse all across the world.” That I think is the underlying intentionality here.

As always, we come to execution. I think to try to do this from whole cloth would be an absolute mistake. We know that the mechanisms exist where we are able to source this kind of expertise and build multilateral forums for engagement and discussion of standards, techniques, and mechanisms. We have to figure out a way to integrate what that incredible landscape looks like into something that is cohesive and productive. I think that is the challenge ahead.

ELEONORE FOURNIER-TOMBS: Thank you to all the panelists. I think the theme today is certainly optimism and inclusivity: inclusivity in policymaking about AI and inclusivity in the AI economy. I think we have all felt firsthand the opportunity of AI to reach people who may not be part of the global conversation, for example through translation algorithms or through moderation, deliberations, and things like that.

Also, having a broader group of people participating in the AI economy means that the tools that are developed can be much more appropriate and better adapted to the Sustainable Development Goals than if it is just a small group of people working on them. I wholeheartedly agree.

I think I will turn now to our first intervention. We have our colleague here, Scott Campbell, from OHCHR.

SCOTT CAMPBELL: Thank you, Eleonore, for giving me the floor. It is a pleasure to represent the UN Human Rights Office here today. Thank you to the Carnegie Council and UN University for organizing this event. I must say it is a pleasure to be here amidst other optimists and realists. On the realist front I tip my hat to the two ambassadors, the co-facilitators of the GDC process, and send them best wishes in getting us to the finish line on that process.

I just wanted to pick up on three points the panelists touched on that I think are of utmost importance. The first one is implementation. We now have a report from the High-Level Advisory Body on AI. We will hopefully have a Global Digital Compact very soon. We have these documents, but how they are actually implemented and how they are translated into change in people’s lives across the globe is the challenge before us. So many people around the globe are poised to access technology and AI and to use them to reach the SDGs and to realize their rights, and that is a huge challenge. The ambassador from Zambia mentioned the 2.6 billion disconnected. I think of the many more who do not have access to electricity. How we implement the recommendations in these reports will be crucial in seeing that AI is indeed used for good, and the good of all, as the UNU rector said.

Secondly, I am optimistic on implementation, and that is because we have a roadmap. As the rector also said, we have an accepted, robust framework in the International Human Rights Framework that can be used to establish safeguards in how AI is designed, developed, and used in real life. So I am optimistic: we have language in the Global Digital Compact, and we have recommendations in the HLAB report that refer to human rights and how they must be central in the follow-up mechanisms established by the HLAB and, hopefully very soon, in the Global Digital Compact.

The last point I wanted to raise is partnership. This is such a collective heavy lift that we will only get through it with strong partnerships. As the rector mentioned, we have a lot of work to do to inform the policymakers, the legislators, and the regulators on how to integrate human rights into the follow-up mechanisms of these reports. There is so much to be done to apply the human rights frameworks to assess the risks and opportunities of AI, and so much to be done to ensure diverse participation and a human rights-based approach in the AI dialogue, the AI Panel, and the standardization processes that will follow. It is only through partnership, and I salute Doreen at the ITU for the great partnership we have had with them on human rights and standard setting, along with my other UN colleagues who are here in the room today. I think that partnership will be crucial to turning these very much awaited documents into reality, making human rights and the SDGs real for all. Thanks very much.

ELEONORE FOURNIER-TOMBS: Thank you, Scott. We now turn to questions from our online audience, and Alex will ask our first question.

ALEX WOODSON: Thank you. Lots of questions in the chat. One asks: “How mature is the data-governance structure in the world which is needed in order to achieve this AI for All vision given that data is the main ingredient for AI? What can be done to ensure and support this foundational data-governing structure, especially from an ethics perspective?”

Another question is on environmental concerns related to AI and how those can be addressed.

ELEONORE FOURNIER-TOMBS: Two very good questions. We can address both, so we have data governance and the environmental concerns of AI, open to all.

CHOLA MILAMBO: Let me quickly respond to the issue of data governance. I think the director put it very well. There is a divide in the data space. Not all countries have equal access to data. Some are more producers, others are more consumers, so we have a very imbalanced system, and we are at risk of having the data divide translate into a development divide. It is a real thing. AI rests upon the availability of data, so bias in the data translates into bias in the model. It is a very important component.

Are they mature enough? I think there is work to be done, especially in ensuring that the various data-governance systems are interoperable, and that is what we talk about in the GDC, to encourage a system that has interoperability among data-governance systems.

I think there is a lot of work to be done. The African Union just rolled out a model for data governance, and I think it will help countries strengthen their governance frameworks because it acts as a template. We also see in Asia that advances are being made linking the region, so we need to have those data frameworks ultimately come together.

Also I think it comes to just data as a data economy. There is a rush by various countries to host data centers, and that is a whole different story. Housing data centers requires the right environment to be there. You need to have the right energy supply. My country is going through a drought crisis and facing 17 hours of power cuts a day, so you have three to four hours of power. How can you attract a data center like that? I think this is an issue that we face. We need to invest in infrastructure. You cannot have AI if you do not even have the data or the electricity to run a computer. This is a real issue.

At the same time, we have a situation where data centers also consume a lot of energy and a lot of water, and this is an issue I think we need to sit down and really look at, how much consumption of natural resources are these data centers taking. I do not have the how per se. Maybe the other esteemed panelists can do that, but I think it is worth highlighting that this is a concern and that we need to address it.

DOREEN BOGDAN-MARTIN: Can I just add to the ambassador’s point, which I completely agree with. We have put together something called the Green Digital Action Coalition, and we are quite excited because this year’s Conference of the Parties (COP) presidency, Azerbaijan, has picked up from the United Arab Emirates a continued focus on the impact of digital technologies, including AI. There will be a specific ministerial piece at COP 29, and I think this is important because it will bring the constituents together to look at how AI and digital technologies as a whole can reduce their impact on the environment. The reality is that the impact currently is negative, but it can also be beneficial, so how can we get constituents to come together and make commitments to reduce their impact?

One other piece I wanted to mention, picking up on Scott’s point about implementation. Definitely partnerships will be key. We have our Partner2Connect Digital Coalition. We are targeting $100 billion by the end of 2026. We are at $51 billion in commitments, and it is about connecting the hardest to connect, and that also has a big focus on AI. We have to double down on that.

VILAS DHAR: I will add one additional nuance to that question. We often talk about data governance. I think there is another element we should consider, which is, what is it we are collecting data on? Too much of the conversation focuses on building infrastructure to allow for competitiveness around a model that we have today of commercial data, but let me ask you all to do an exercise: Go online and see in publicly available datasets how much you can find about rare earth mineral deposits in frontline environments, and I promise you that you will be able to find quite a lot, deeply granular data that gets you down to incredible resolution and shows you exactly where they are.

I will ask you to do a second exercise: Go online and in those same geographical environments look for what data you can find about potable sources of safe water at surface level, and you will find a marked difference. You won’t be able to find very much.

There is a model in which we collect data because it is commercially valuable. We then think about the governance of that data and the availability and competitiveness of national models. We need to have the other side of that conversation, which is the responsibility of governments, public actors, philanthropies, and commercial actors to invest in collecting the data that actually reflects vulnerability and guides us toward solutions to the SDGs. That recommendation needs to sit in multilateral constructs but also in national constructs as well. I just wanted to make sure we added that.

ELEONORE FOURNIER-TOMBS: Of course.

ANNA KARIN ENESTRÖM: I agree with everything that has been said. When you have this discussion, and I think we noticed it in the beginning, it is easy to forget the environmental and climate part of the whole discussion about digitalization and artificial intelligence, because you see all the advantages and the positive side of artificial intelligence, also for the climate issue. But I think it is extremely important, and this is why we have put environment and climate as one of the 12 principles in the document. As Chola said, the energy consumption and the climate and environmental footprint are very much there, and this is a discussion that we truly need to follow up.

Let me just acknowledge also the OHCHR because we have had very good cooperation with you both here in New York and in Geneva, and of course human rights has been upfront and is one of the principles. Human rights and the human dimension of the GDC is of course key.

ELEONORE FOURNIER-TOMBS: We have a few minutes, and I am happy to open the floor for questions.

QUESTION: Francesco Lapenta, director of the Institute of Future and Innovation Studies at John Cabot University.

This is a momentous moment. It feels as though we are opening something that we have been waiting for for a long time, but I also feel a lot of people are looking on from afar who are very much aware of how complex this process has been. Although optimism is naturally the right mode, I think we also have a duty to scrutinize what has been done.

I am asking as a friend a question that may be personal: What do you think was lost in the process because of the complexity of the negotiations right now? In my view, when I see the different drafts, I have seen language that has been taken out that was fundamental. People are scrutinizing the work and noticing this. My question for you is, what is the one element in the work that you did that did not make it and you thought should have been there?

ELEONORE FOURNIER-TOMBS: Good question. We will take a few more questions and maybe do a round afterward.

QUESTION: Thank you very much. This is very illuminating, and I much appreciate being here. I am Chloe Schwenke. I am the president of the Center for Values in International Development and an ethicist. I am delighted to be in an ethics center here, and I want to ask an ethics question as an ethicist with an observation to frame it.

The observation is that AI does not have inherent moral sense. We still depend on human beings to bring ethical judgment and discernment to the processes and outcomes of AI. How is this going to be structured into the development of AI in such a way that that moral discernment stays a part of this process, not just in human rights, as important as that is? As ethicists we use seven or eight different moral frameworks to find appropriately sophisticated moral recommendations and advice. What do we do to make sure ethics in the broadest sense of applying values is really a part of this whole equation moving forward?

QUESTION: I am Dima Al-Khatib. I am director of the UN Office for South-South Cooperation. Thank you very much for such a rich discussion, very important.

I just want to bring in two elements here, because we talked about the means of implementation and the importance of bridging the gap. South-South cooperation and triangular cooperation present themselves as innovative mechanisms to address that, so this is a plea to make sure that as we move forward with setting the means of implementation that angle is captured. I will not dwell on what we can offer, but there are a lot of platforms that we can put at the service of making sure that the recommendations get implemented at the country level.

One of those is an initiative called Global Thinkers, a network of policy thinktanks from the Global South that can definitely provide evidence-based support but also support governments at the country and regional levels to make sure that AI is implemented ethically and in the right way.

The other thing is, let’s not forget the importance and influence of the UN system on the ground. In every country we have a UN system with technical capacity, and they will definitely be key channels to make sure that this is linked to national development planning and to advancing it at the country and regional levels. In previous assignments we collaborated with the ITU, for example, in undertaking digital assessments in countries, which helped shape digital strategies and helped move forward on so many other things. The UN system on the ground is definitely a powerhouse to lend support to that process.

Very much on the issue of inclusivity: bridging the North-South gap is a demand not only in terms of access and connectivity but also in making sure that people are empowered, their voices are heard, and they are part of the whole process. Thank you.

ELEONORE FOURNIER-TOMBS: We are coming up on time and I am very aware of the busy schedules of the co-facilitators and all of our panelists, so what I will suggest is that maybe we go one by one and offer some brief concluding remarks and answer some of the very interesting questions from the audience.

CHOLA MILAMBO: Thank you very much. Very quickly, I will respond to the question of what was lost in the process. We started off committing to a very ambitious outcome. I want to say that was a key parameter for the co-facilitators. We really challenged the Member States to keep it ambitious. For us that was a bar we did not want to fall below.

Of course the process is a multilateral system and therefore not everybody agreed with the first cut, and therefore as we went through various iterations of the document some things got lost. The level of ambition did come down a bit, but we tried hard to make sure that we kept it strong.

One of the issues I will point out is that we maintained the follow-up mechanisms that are there. We still have a high-level review in there. We still have a report in there that the Secretary-General should produce. We tried to maintain some key things. What was lost perhaps was something I would call a dashboard. We were proposing a dashboard on which you could easily track our progress against the various targets, so that is probably what got lost.

By and large, I want to acknowledge what you have put forward. Indeed we need to have many organizations come forward and look at the documents and ask: “How can we mainstream these into our processes? How does it align with our preferences as an organization? How can we move to a point where we can actually implement these things?”

Let me close by just acknowledging someone who has worked hard on the Global Digital Compact and multilateralism at large, my co-facilitator here. She is leaving her role at the end of the month, so I want to acknowledge you publicly.

ANNA KARIN ENESTRÖM: Very briefly, and this is the best co-facilitator I have had. I don’t think we actually lost anything except for what Chola said. I think we have kept up the ambition. Maybe what we lost is that we were from the beginning very keen to have really concrete commitments that were also measurable, and some of these we had to let go, but I feel proud that we were actually able to keep the ambition, even if we wanted a more ambitious document from the beginning.

I think the ethics and the human dimension is really key. Human control over artificial intelligence is in the document, and that is of course something that goes beyond the GDC. It is very important in the GDC, but it also of course comes into military domains and other discussions that are done in other parts of the Pact of the Future.

I want to agree very much on the importance of national implementation. We were keen to have real commitments by Member States in there, commitments that we still think are measurable, and, as Chola said, there is a true follow-up section that we were able to keep, so this is not a document we agree to and put on the bookshelves. I know Doreen and others will make sure that it is implemented, but the commitments from Member States when it comes to implementing it toward their own citizens are key.

DOREEN BOGDAN-MARTIN: Let me take this opportunity to commend and congratulate both of you. It has been a heroic undertaking. Sitting in Geneva, none of us have envied your task, which is almost an impossible one. I wouldn’t focus on what is not there. I think what is there is important: the recognition of the divides and the need to close them, the recognition of the importance of having an inclusive digital economy, the critical importance of safety and security, where we see cyberattacks increasing 80 percent year on year, the importance of the data-governance piece, and of course the need for an AI governance framework. I think it captures the important elements that we have to address as the ITU but also as the UN system and as the global community.

We saw so clearly during the pandemic what happened if you were not connected, and I think the GDC and what it has articulated tackles some of those big issues. Of course, when it comes to implementation we will all work together and double down and figure out the right path forward, but I think it is important what is there and I thank you both.

Of course the ethics piece is critical. We often push for ethics by design when it comes to standards making. This has been a big focus of the training that we give our standards groups: to put this at the core, as well as the human rights principles, so thank you for that.

Definitely when it comes to in-country, working with the RCs, working with the country teams is going to be critical as we look to implementation.

VILAS DHAR: Let me acknowledge that the quality of the questions and comments in this session has set a high bar for the week. I appreciate both questions.

At the risk of being maybe slightly more provocative than I am supposed to be, let me take a step outside of my UN hat and answer your question quite directly. I think all great human ambitions and aspirations start from the idea of what is possible and are then whittled down by human dynamics, politics, and all of the other elements that lead to consensus-based decision making.

I am incredibly proud of what I see as the frameworks that have come out of the High-Level Advisory Body’s report and what I aspire will be in the GDC. Yet I think a responsibility exists for all of us to ask the very question you have asked: What was left out, and why? What does it tell us about the world we live in? How do we bring accountability to the process by asking why these ideas were not brought forward? Where are the opportunities for civil society, for academics, and for other institutions to take on parts of what was left on the cutting-room floor? We should also acknowledge that in a conversation about AI’s many multitudes, about the geopolitics of the world we live in, about the world we will create, and about how countries and regions are interacting with each other, there is an opportunity here for great public participation, for people to step forward and ask exactly that question: Why did we end where we did? What does it tell us about how these same engagements may go in the future?

I am still perhaps moderating myself a little bit but let me just acknowledge the opportunity that sits in that space. Let me also tell you that some of my favorite ideas and thoughts maybe are not in the form that you might expect them to be, but to me that does not in any way erode my hope that those possibilities might come about in a different way and a different mechanism.

Let me quickly come to the ethics question because I think you are asking the foundational question we should be talking about. It has the contours of an hours-long discussion and really of a great study, but let me say one thing. We started by talking about how ethics is a human function and how AI and the tools we build should represent that, but let me also acknowledge that, in an aspirational sense, AI will fundamentally change some of the assumptions that have driven the creation of our moral systems: assumptions about resource scarcity, about how we view abundance as a possibility, about how we allocate resources, and about how we let people engage in participatory processes.

I think there is a vein of academic and intellectual study of moral experience that asks the question, what happens when those postulates that drive our social systems change? How do we build new ethical and moral constructs that guide us through that process? With zero answers, I simply want to acknowledge the importance of the question that you asked. Thank you all so much.

ELEONORE FOURNIER-TOMBS: Thank you very much. Unfortunately, we now have to wrap up this insightful panel. Thank you so much to our distinguished panelists for your comments, and good luck in the next round of negotiations and the panels you will be on. I learned a lot and appreciated the opportunity to be here with you today.

I also want to thank our audience for your thoughtful engagement, your attention, and your questions. I know you had many more to ask. Thank you to the UNU and to Carnegie for having us.
