Cybernetics, Digital Surveillance, & the Role of Unions in Tech Governance, with Elisabet Haugsbø


ELISABET HAUGSBØ: I guess that is where the philosophical side comes in, because you have to figure out how you would like this piece of the system to work. The question is not “how does it work?” but “how would you like it to work?” That is also where the ethical aspect comes into it.

For me I think it was the complexity that was appealing. Also, I loved math, so that helps, and it was a great way to use my expertise to do something good. That was actually what I was thinking.

I went into not only engineering and cybernetics but medical cybernetics, which is actually not uncommon for women. There were not that many women in the field at all, and I wanted to work with prosthetics—prosthetics that were “smart.” One of the problematic things about prosthetics is that it takes a long while to learn how to use them. It is not straightforward. They are not smart, or at least they were not at the point when I started my degree. Researchers were just starting to see if you could make a prosthetic perform certain movements by using the neuroelectrical impulses in the muscles to control it. That was new at the time, and it would really shorten the time it takes someone to learn how to use a prosthetic arm or leg.

I have not kept up with that field, so I do not know where it is at this point, but as far as I can see, prosthetics are still pretty simple. Basically it is still just a hook, and I suspect that is because it is still hard to learn how to use them. When you feel through your fingers it is very accurate; you know exactly how much pressure you need to lift an empty glass, a full glass, paper, something fragile, or something more robust, but with a prosthetic arm you would not know that. You need to learn it.
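What she describes here is at its core a feedback-control problem, the heart of cybernetics: the hand senses grip force and continuously corrects it. As a purely illustrative sketch—with invented gains and target values, nothing drawn from real prosthetic systems—a minimal proportional controller nudging grip force toward a learned target might look like this:

```python
# Toy sketch of why grip is hard without touch feedback: a simple
# proportional controller steps the grip force toward a target that
# a sensing hand would find automatically. All gains and values are
# invented; real prosthetic control is far more involved.

def grip_controller(target_force: float, steps: int = 5) -> None:
    force = 0.0
    gain = 0.5  # proportional gain (invented)
    for step in range(steps):
        error = target_force - force  # what the missing sense of
        force += gain * error         # touch would normally report
        print(f"step {step}: grip force = {force:.2f} N")

# An empty glass needs far less force than a full one; without touch
# feedback the user has to learn these targets consciously.
grip_controller(target_force=2.0)  # e.g., empty glass
grip_controller(target_force=8.0)  # e.g., full glass
```

Without the sense of touch closing that loop, the target forces have to be learned consciously by the user, which is exactly why training takes so long.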

There was something about this field that was so intriguing. It was complex, and in the end behind all the mathematics and complexity you could do something really good for a lot of people. That was my way in.

ANJA KASPERSEN: It is something that has followed you through your career in different fields, because this user interface is something I have seen you talk a lot about, not just in terms of medical prosthetics but in terms of how people engage with complex systems of varying degrees of complexity.

We will come back to this, but it is a nice segue, because I have seen people describe you as—and you have used this term about yourself—an “ethical hacker.” Can you share what you mean by that and what your experience has been actually working as one? I think at some point that was actually your job title.

ELISABET HAUGSBØ: I can do the description of what it is or at least what I put into that term. An ethical hacker is someone who uses the same tools and same processes as real hackers do to look at systems or companies but without actually doing harm. So instead of breaking and entering and stealing things, I would break and enter and show people how I did it so that they could go back and fix it and hopefully make it harder to break into the next time.

What happened? Norway has a high production of oil. We are an oil nation. I am a little sad to say that in 2024, but here we are, and we have built our society on the money we have gotten from it. It is a good society, but we had an oil crisis starting back in 2014, so oil prices plummeted, and a lot of companies lost a lot of money and had to do a lot of things differently.

At that point I worked at DNV testing control systems for the maritime and oil industry, so not necessarily cybersecurity yet. It was more of an engineering-cybernetics perspective, where you test the functionality of the system—is it safe to use, is it failsafe, how much damage could it do, and stuff like that.

When the oil prices plummeted, companies were no longer building things like platforms and ships, which was where we usually did our projects, and we were like, “Well, what else can we do?” When we set up the simulator to interfere with the control systems, we also needed to test a lot of the communications, which meant that we knew how all the communication on these different projects actually flows—whether it goes through satellite, how it talks to different programmable logic controllers—basically everything you need to know if you are going to hack the same system.

We were like, “Well, cybersecurity should be a thing.” For all of your listeners, in 2015 and 2016 cybersecurity was not considered super-important the way it was after 2017, when Maersk had its pretty devastating experience with ransomware (the NotPetya attack).

So that was my entrance into ethical hacking. I started back in 2015, building on my knowledge of heavy industry and control systems and how they actually work, and then we built this service on top of that. It was great fun—not super-fun trying to sell something that people did not understand in the beginning—but then 2017 came, and suddenly a lot of management was like: “This cybersecurity thing that we have heard of, that might actually be something—do we need that? Can you help us do this cybersecurity thing?”

“Yes, we can help you guys. Just stop calling it the ‘cybersecurity thing.’” And here we are in 2024.

ANJA KASPERSEN: In some ways we are becoming more reliant upon digital tools and cyber structures, which also increases our attack surface, so with reliance we also develop more vulnerabilities. Looking at how states can become more cyber-aware, how companies can become more cyber-resilient, and how users—you and I and everyone else listening to this podcast—can develop better cyber “hygiene,” if I may use that word: where do you stand on this now, having spent all of these years working on it?

ELISABET HAUGSBØ: It has been quite a journey, and I wish the maturity had come a little faster.

First of all, I would like to say one thing. You implied that humans are the weak link in these systems. I would say that is true, but maybe it should not be, because if a system were built with security in mind for the users—who are the humans—it would have been built differently. That is where our problems start, because most of the systems we use today are not built with security in mind, and not with privacy in mind. They are built for functionality, which is not necessarily wrong. They are just missing a piece of the security they should have had.

This means that we are now spending a lot of time and a lot of resources making these systems secure. We patch them with upgrades and things like that, which we obviously should do, but they were just built incorrectly in the first place. That is the first thing we need to acknowledge.

Maturity is a big thing here. I came from industry, where in the old days everything was air-gapped. And the old days are not 2013—I hope I don’t make anyone angry now—but more like the early 2000s or late 1990s, when the first industrial computerized systems started to be used properly. Back then they were air-gapped, meaning they were not connected to the internet, which meant they could not be penetrated from the internet, and that “truth” stuck with the industry for a long time.

I still hear it, even though we know that our platforms can be controlled from land. They are not air-gapped, and maybe they shouldn’t be, because it is a lot safer if personnel can stay on land—at least for the personnel. So there are several aspects here.

When it comes to maturity, we still have the issue that a lot of management does not understand technology and definitely not cybersecurity. I also meet a lot of leaders and managers who excuse themselves, saying, “Well, technology is not my field of expertise, so I do not know this.”

I am like: “What if I said the same thing about finance? ‘Well, I didn’t do a degree in economics, so I don’t really do finance. I am just going to control the company, but I don’t do that.’”

We let people talk about technology as something you do not need to know if you are running a company. That is not a great idea, because all companies are now fully or partly technology companies. This is a very important part of the maturity that I think is still lacking.

Then it comes to, how do you help them; where do you start? Get an overview, at least. Obviously you cannot rebuild all the systems from scratch, because that would cost way too much money. You would have to stop production or whatever service you are providing your clients. Maybe it is a hospital—you cannot just stop using all the different programs and say, “We are going to update everything or change it to something better.” That is not going to work. In an ideal world, maybe that is the way you would like to go.

Then you are stuck with management that does not know technology but is still in charge of a technology company; you have systems that you know are not safe, or at least not secure—hopefully they are safe—and you have employees, probably your biggest security risk if you don’t train them correctly. One of the things I am passionate about is making sure that employees not only know their systems but know how best practices are supposed to be done, and making sure you have a culture in which they can talk about their mistakes. A lot of the problem here is that we have cultures where, if you make a mistake, you try to hide it, which is the worst thing that can happen—at least for me as a security expert. I am totally dependent on employees telling me that they have done something wrong, or that they have done something they think might be wrong, or raising concerns. If you don’t have that, you are really missing your entire warning system.

I also get a lot of questions like: “What about zero days? What do I do with those?”

“Well, those are the least of your concerns. Don’t worry about the zero day things. Leave that.”

Let me try to explain this in not a super-long way. When you work with security or cybersecurity, you focus on the worst thing that could happen to your company—not something like a nuclear explosion. What is the worst thing that can happen to my company? Would it be losing credibility with your clients? Is that the worst thing, or is it that you are producing medical supplies and there is an error in the medical supply chain—something like that? Is that the worst?

Figure out the worst thing that can happen, and then backtrack: what different scenarios would lead up to this worst thing? That will help you put security or redundancies in place where you need them, in a way where you actually understand what you are doing. A lot of the questions I get are: “Can’t you just make it secure? Can’t you just put up a firewall?” Couldn’t we just do things?

I am like: “But why would you do that? Why would that make you safer? What is safe?”

Then they are like, “Oh, maybe I get it now.” It’s tangible. You can understand your worst-case scenario because that is from a business perspective, and then you backtrack down into the technology.
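To make the backtracking approach she describes concrete, here is a minimal sketch of walking from a worst-case business consequence down to the technical events that could lead to it. Every scenario name here is invented for illustration; a real analysis would be built together with the people who know the systems:

```python
# Minimal sketch of consequence-driven backtracking: start from the
# worst-case business outcome and walk backward through the scenarios
# that could lead to it. All names and scenarios are hypothetical.

WORST_CASE = "error introduced into the medical supply chain"

# Each consequence maps to the events that could cause it.
LEADS_TO = {
    "error introduced into the medical supply chain": [
        "production control system tampered with",
        "wrong recipe pushed to batch controller",
    ],
    "production control system tampered with": [
        "engineer workstation compromised via phishing",
        "unpatched remote-access gateway exploited",
    ],
    "wrong recipe pushed to batch controller": [
        "no integrity check on recipe files",
    ],
}

def backtrack(consequence: str, depth: int = 0) -> None:
    """Print the tree of scenarios leading up to a consequence."""
    print("  " * depth + ("<- " if depth else "") + consequence)
    for cause in LEADS_TO.get(consequence, []):
        backtrack(cause, depth + 1)

backtrack(WORST_CASE)
# The leaf nodes are where controls (training, patching, integrity
# checks) buy the most risk reduction for this particular worst case.
```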

ANJA KASPERSEN: You mentioned earlier, Elisabet, the importance of employees. We are going to dig into that a little bit, because in your current role as the president of Tekna, which is a workers’ union as well as a professional technical association for STEM graduates, I have looked at a few entries you have written in that context, and you place an interesting emphasis on the role of Tekna.

I think there is a lot to be learned from this. Tekna is a regional organization, but obviously we have a global audience, which is why I am taking a little time on your current role. You have been focusing on two particular things—well, probably many more, but two that I have noticed—including professional competencies. How do you build professional competencies in a fast-changing world, where digital technologies are becoming more and more important and are pushing the various STEM and engineering fields—which have perhaps been allowed to operate somewhat separately for a long time—to converge in new ways, and where that requires a new type of competency-building approach?

But you also focus on how companies can create a competitive edge by actually working with unions across all the different technological fields, from cybernetics to AI and other engineering fields—so rather than treating those as two opposing forces, actually creating a competitive edge by working together. Tell us more about Tekna and also your areas of emphasis. I am sure there are some that I left out that you can add.

ELISABET HAUGSBØ: We have seen a pretty steep increase in the number of members, which is great. It means we have done something right when it comes to providing a good union for our members. Our members want to contribute their expertise to improve the world. It sounds cliché, but that is the impression I get from our members. They are highly internally motivated. They want to contribute, and that also means internally in their companies.

Before I start talking about Tekna, I would also like to explain how unions work in the Nordics, which is a little different from maybe the rest of the world. We have a tripartite cooperation in the Nordic countries. We call it the “Nordic model.” If you Google that, you get a lot of different things, but one of them is this way of collaborating between government, companies, and unions. It is a very powerful collaboration because it brings those three parties closer together in a way that lets them change legislation, cooperate, and partner so that all three are able to become better at the same time. This is a little different, I think, from other countries and how they work with their unions in the important job of protecting workers’ rights.

Coming back to the competitive edge: in a fast-paced, rapidly changing technology environment like the one we are experiencing now, with AI, cybersecurity, and everything in between, if you have this close collaboration between the unions and the company you work for, you are able to talk about the difficult things.

Tekna has partnerships internationally, and that is where I meet a lot of other union leaders, and they talk about technology as something bad, something that is going to steal their jobs and change the world in a bad way. I think the way we work with unions in the Nordics is fundamentally different and better placed to create a positive outcome from new technology. Like I said with cybersecurity, you are creating an environment where employees feel safe to raise their voices and talk about not only what concerns them but also the possibilities. It is the people who have the know-how—whether from academia or self-taught, it is still very valuable know-how that management certainly does not have.

If you can, through unions, collaborate with your workers in a way that lets you utilize technology together, this will set you apart and create trust. The workers might adopt technology faster, better, and more efficiently, and the company as a whole will become better, be able to compete better, and deliver better value to its clients and customers, just because it is able to use this partnership, this collaboration, in a good way. That is something that is important to our members: how they can contribute to a better place, not only in the world but within the company. I think that is unique and definitely sets companies apart. Maybe that is our superpower.

ANJA KASPERSEN: My dad has been a long-term member of Tekna, so I have been hearing about Tekna for a long time. I do not live in Norway, so I am not a member, but like you said it is that passionate relationship with Tekna that makes it unique as well.

I want to go from that into something that I know Tekna and you have been very concerned about, and Tekna plays quite an interesting role in the Norwegian landscape, and I am mentioning this because some of the issues we are talking about are of course not specific to one country. These are transboundary issues, these technologies, and both the promises and the perils of these technologies are shared across boundaries and borders.

You have been cautioning quite proactively against using digital technologies for surveillance, against the privacy concerns that come with it, perhaps the limitations of existing regulations in this space, and also technological immaturity. You alluded earlier to this notion of maturity, and to decision makers—whatever part of the system they work in—who perhaps fall for the promises of efficiency over those of safety. Though you have been focusing on Norway, these are broad issues.

I know some of your advocacy efforts in this space even caught the attention of Edward Snowden, the privacy activist known to most of our listeners, because Tekna was one of the early organizations—and you one of the early people—to put a focus on this. Can you talk a bit more about that, both in the Norwegian landscape and, stepping away from your role with Tekna, your personal views on what is happening in this space?

ELISABET HAUGSBØ: It was an interesting episode, something I never would have seen coming, even though I focus on risks, thinking ahead, and things like that. Never did I imagine Edward Snowden reposting something that Tekna wrote.

If I may just start off, I would like to remind everyone that privacy is a human right. We tend to forget that because we are so dependent on and so thrilled about new technologies, social media, and how we interact with each other. This is all great; don’t get me wrong. But privacy is a human right. When it comes to surveillance, we obviously understand that governments, the police, and those who keep us safe need tools to do their job; they need tools to keep us safe. Then the question is, where do we draw the line between “need” and “nice to have”?

When we started working with this particular theme, it was because Norway was looking at writing new legislation. Norway had not previously had legislation that gave our government a legal right to capture data like this. Before, they had to have a reason to do so, and then they could go before a judge and get approval in order to surveil a group of people or a single person, whatever they needed, but they always had to have probable cause.

They are now writing the legislation—I don’t know how it compares in detail to other countries—in such a way that all communication, over the internet or any other channel that is not a handwritten paper letter, can be gathered and investigated by the government as long as it crosses our borders. This sounds okay because, yes, it is crossing the border, and then it is foreign affairs.

No, because this is technology, and in Norway we don’t have a lot of servers that hold this day-to-day communication. I think the biggest server that we use for communication is in Sweden, so by default if you send an email to your neighbor it will cross the border over to Sweden and be sent back. That is just the way the infrastructure is made.

That is where Tekna started to intervene and say, “This is not the intention.” The intention was that foreign intelligence would investigate “foreign” affairs, as the name states, but this would actually surveil the entire Norwegian population that uses electronic communications.

That is the case we started with, and nobody cared. Nobody listened. It was like screaming into a black hole. We were like: “Is the mic on? Does anyone care? This is super-important.”

I do not know how, but suddenly I got this message from a buddy of mine, and he was like: “Elisabet, Edward Snowden is re-tweeting your posts on this new legislation,” where we had written down all the bad things that could happen if we do it this way. He was actually writing in Norwegian, and I was like, “No, this is fake.”

They were like, “Yes, but it is from his account, so I think it is the real deal.”

It took like five minutes and then most of the media houses in Norway picked it up because it was Snowden and therefore it had to be important. From then on, we were like: “Oh, we do get attention. It just needs to come from someone else.” From that we were able to join forces with the unions for journalists and lawyers. Several political parties started to show an interest because they finally understood the importance of what we were saying, that this was not just another legislation.

People did not understand how it could affect their day-to-day private lives, and the politicians did not understand it because they were like, “Well, it’s foreign affairs; it does not concern us.” They probably did not even bother reading or paying attention to what these little technology guys and girls were trying to tell society, until they actually listened, and then they were like: “Oh, but this isn’t good. This can actually be misused.”

This is the point: the government that we have today, the foreign affairs that we have today, the policing that we have today—these are not the problem. We do not know what they will look like ten or 15 years ahead, when we have gathered information for ten or 15 years. That is one thing that is attractive to our neighbors—and I am not talking about the Swedes; I am talking about the Russians or others—because we are sitting on a lot of information about our entire population, or at least the entire population that uses technology, which in Norway is everyone. Even grandparents in their 80s are using smartphones and everything the technology has to offer. Norwegians are very technologically advanced, at least on the user-consumption side, which we can come back to.

It was a lot of fun and a little bit scary to work with. We did not change the legislation, but we got a lot of awareness around it.

Then the question is, should we change legislation in a geopolitical crisis like the one we are in now? Norway started the work on this new legislation in 2017, before the war in Ukraine, but we could feel—

ANJA KASPERSEN: The world was unstable.

ELISABET HAUGSBØ: Yes, the world was unstable, and we could feel that the Russians were doing things they should not do, and we had these constant ransomware attacks coming out of Russia—or at least the attackers were saying they were coming out of Russia. I am not sure they were actually making all of that malware themselves, but they like to take credit for other people’s software too.

Sharing a border with a very powerful nation like Russia is scary. I hope everybody can look up Norway and see what it looks like. It is a small country, but not really—it is very long. We have very few people, which means we are highly dependent on people living in smaller cities, and highly dependent on people living along the border with Russia, in order to actually detect if something is happening and to know what the security situation is. That affects how the government thinks about security and legislation, and also its wish to have more control over what is going on and to defend us from foreign threats, which is of course the wish behind this new legislation. But the side effect might be that the entire Norwegian population is more or less surveilled, which is not good.

ANJA KASPERSEN: I want to bring you to a point that you and I discussed during one of our many conversations, where you said something that really resonated. You advocated for what you called “legislative prudence”: one should not create legislation, especially in the domain of digital technologies, in times of instability, but in times of peace, because if legislation is more of a gut reflex to protect against a threat, we all know those legislative measures cannot easily be recalled. Create rules in times of peace, not in times of instability.

ELISABET HAUGSBØ: Exactly. That is also because you want to be able to voice your opinion on why you should not do it. There will always be opinions on why you should do something—why you should have this new legislation, why you should have more open wording on how things should or should not be done—but in times of instability, if you voice your opinion, saying, “Maybe we should not have this legislation,” then you are met with, “Don’t you want us to be safe?”

Of course I want us to be safe, but is this safe? When you are in this gut-reaction mode of “We just have to do something and do it now,” you are not able to voice counterarguments. You cannot ask, “What if we don’t do this; what happens then?” because it is unthinkable—you are in a crisis, so you need to react.

I think that is the most important reason to do it during peace: if you have a crisis around you, or something that disturbs the pro-and-con dialogue you need when working on legislation, how do you know you got the correct level, the correct openness or strictness? You don’t. There is too much emotion.

ANJA KASPERSEN: Exactly. Too much emotion.

Let’s go back for a moment to cybernetics, because I think what you just spoke about—the emotional responses to risks, especially those that concern our safety and security—is interlinked with some of the philosophical underpinnings of cybernetics, a domain that came to the forefront in the 1930s and 1940s, which was also a very particular time of instability in the world.

Closely related to the more classical cyber-related risks are of course extended forms of cybernetics—autonomous systems and agents, too often perhaps thrown into the AI bucket. What are some of the new dynamics you are seeing, with autonomous systems and AI being treated almost as a panacea for any societal ill and sold with promises of efficiency?

And, if I may, given your background in cybernetics: one thinker anyone who shares an interest in cybernetics has read is of course Norbert Wiener, who founded much of cybernetics theory. As I was preparing for this interview, I thought, let me dig back into cybernetics because I wanted you to talk a little about AI, so I picked up his seminal 1950 book The Human Use of Human Beings. Even the title itself describes what one could say is happening, especially in the space of generative AI and large language models, and some of the concerns entailed in that.

In this book Wiener posited that the real danger with autonomous and intelligent systems is “that such machines, though helpless by themselves, may be used by a human being or a bloc of human beings to increase their control over the rest of the race or that political leaders”—which we just spoke about—“may attempt to control their populations by means not of the machines themselves but through political techniques as narrow and indifferent to human possibility as if they had in fact been conceived mechanically.”

In some ways this echoes many of the points you have raised so far—issues around security and how we think about security. You mentioned earlier the failsafe, or safe failing, of these systems. How do we think about safeguards, who is responsible, how will AI impact our societies, and are we resilient enough to fully grapple with this impact?

ELISABET HAUGSBØ: People tend to trust machines more than they should. They tend to trust whatever comes out of a computer—an answer, a calculation, a probability, or something like that—and to assume it is better than what a human being can do. I think that is the first step in the wrong direction, because then you look at whatever it is, an autonomous system, and you think, That is way better than a human could ever do. That might be true or it might not be true, but if you always assume it is better than what a human can do, then we are moving in the wrong direction, because it is collaboration that we should be looking for.

I have also been working with autonomous vessels. I have not written that much code in that field myself, but coming from it, it is very interesting because you have to think of all the edge cases. Edge cases are cases that might or might not happen, depending on which path you started down, so you are trying to think of all the different scenarios that could occur, and in a complex system that is an endless number of edge cases. An engineer writing code for autonomous vessels in the old engineering-cybernetics way actually has to program all of these edge cases, which means there will never be a system that covers all of them.
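To illustrate what hand-enumerated edge cases look like in classical rule-based control, here is a toy sketch—not real vessel code; every case and threshold is invented:

```python
# Toy sketch of classical rule-based handling in an autonomous-vessel
# controller: every situation must be enumerated by hand. All cases
# and thresholds are invented for illustration.

def avoidance_action(contact_type: str,
                     range_m: float,
                     closing_speed_ms: float) -> str:
    """Return a maneuver for a detected contact. Each branch is one
    hand-written edge case; anything not listed falls through."""
    if contact_type == "vessel" and range_m < 500:
        return "emergency stop"
    if contact_type == "vessel" and closing_speed_ms > 5:
        return "alter course to starboard"
    if contact_type == "buoy":
        return "minor course correction"
    if contact_type == "debris" and range_m < 100:
        return "reduce speed"
    # The unavoidable catch-all: every situation the engineers did
    # not think of ends up here.
    return "maintain course and alert operator"

print(avoidance_action("vessel", 400, 2))  # emergency stop
print(avoidance_action("kayak", 800, 1))   # falls to the catch-all
```

The catch-all branch is the point: in a complex environment the list of explicit cases can never be complete.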

With AI that might be a little better. I am not saying it will be better as an absolute truth, but it might help, because a human being is not good at thinking of the million billion things that could happen, while a system designed for that might do a better job. That is what I mean when I say you cannot automatically assume a system is better than a human; it is the collaboration that should be better. The human still needs to understand how the system works in order to secure it better, because that is where the evaluation of what is good and what is not comes in. In my opinion a machine cannot really do that, because now we are talking about proper ethics—what is the ethical thing to do here—which is a philosophical question, not a yes-or-no question of the kind machines are pretty good at.

That was a very simplified answer to a difficult question. For me it is about keeping the human in the loop—not necessarily as in, “Okay, we need to have a captain on this vessel forever”; that is not what I mean—but a human in the decision loop, either understanding how the decisions are made or overseeing how decisions are made and used. That is where we need to be, in my opinion, especially with systems that have direct effects on human lives.
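One minimal way to picture such a human in the decision loop—purely a sketch, with invented thresholds and field names—is a gate that only lets the system act on its own when both the stakes and the uncertainty are low:

```python
# Sketch of a human-in-the-loop gate: an automated decision is applied
# directly only when confidence is high and no person is directly
# affected; everything else goes to a human reviewer. All thresholds
# and field names are invented for illustration.

from dataclasses import dataclass

@dataclass
class Decision:
    action: str           # what the system proposes
    confidence: float     # the model's own confidence, 0..1
    affects_person: bool  # does this directly touch a human life?

def route(decision: Decision) -> str:
    if decision.affects_person:
        # Direct effect on a person: always reviewed by a human.
        return f"HUMAN REVIEW required: {decision.action}"
    if decision.confidence < 0.9:
        return f"HUMAN REVIEW (low confidence): {decision.action}"
    return f"AUTO-APPLY: {decision.action}"

print(route(Decision("deny benefit claim", 0.97, affects_person=True)))
print(route(Decision("reorder spare parts", 0.95, affects_person=False)))
```

Under a gate like this, the benefits decision she mentions next would always have landed on a caseworker's desk, with the authority and the obligation to question it.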

We have decision systems in place for deciding whether you should get benefit support or not. There is a famous example from the Netherlands, where 50,000 families lost their support because an artificial intelligence was used to make the decision, and the caseworkers trusted the system. They did not ask questions, because it was the system, and the system is a machine, “which is better at math than I am, so it is probably right.” You tend to trust something that is maybe not trustworthy.

Maybe that is what we should discuss: Is the system trustworthy, yes or no? How many times has it made the right decision, and on what kind of question? That is also interesting to go into, because what is the right answer to a particular question? I think it is a collaboration we need to enter into, and to do that, more people need to understand how these systems actually work. Otherwise, how would you raise questions; how would you challenge the answers you get from these systems?

I think that is a sad story as well. I don’t know how it is with the rest of the world, but in Norway we see a fast decline in people wanting to educate themselves within the STEM subjects. I don’t know why, if it is just a lack of coolness or if it is something else, but it is very worrying because if we do not have a population that understands technology how would we ever collaborate with these systems and ask them the right questions and criticize them when they need to be criticized? You cannot do that unless you have a basic understanding and competence within these subjects. That keeps me up at night. I have to admit that. I think knowledge is the way out.

ANJA KASPERSEN: The late Daniel Dennett went as far as to talk about the “epidemic of counterfeit humans,” which of course is a related phenomenon to our ability to create deepfakes and use generative AI technologies this way. What are your thoughts on this?

ELISABET HAUGSBØ: Many. I was actually on Norwegian television talking about deepfakes. They challenge our conception of what truth is, because we are so used to using our eyes to tell us whether something is true: if you can see it, then it is true, and if you cannot see it, then it is not true.

ANJA KASPERSEN: Or hear it.

ELISABET HAUGSBØ: Or hear it, yes. “But I heard it. I saw it.” Again, we tend to overestimate our own ability to figure out what is true and what is not, and with deepfakes you cannot use your eyes anymore.

I talked to a politician about that last week. He said, “There is no way of figuring out if something has been deepfaked or not.”

“Yes, there is. You just cannot use your eyes.” The eyes are useless because the fakes are so good, and then you come back to knowledge: Why should you trust something, or how can you trust something? Well, you need to take certain steps, and using your eyes is no longer one of those steps, because the technology has evolved—and our human knowledge needs to evolve with it.
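As one concrete example of such a step—a minimal sketch of a single technique, not a full provenance system; the file name and expected hash are placeholders—you can check that a file you received matches a fingerprint published through a channel you already trust:

```python
# Minimal sketch of one verification step that does not rely on your
# eyes: compare a file's SHA-256 hash against a value published via a
# channel you already trust (the publisher's website, a signed feed).
# The file name and expected hash below are placeholders.

import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0000...placeholder...0000"  # published by a trusted source
actual = sha256_of("statement_video.mp4")

print("match" if actual == expected else "MISMATCH - do not trust")
# A matching hash only shows the file is the one the source published;
# it says nothing about whether the source itself is truthful.
```

A matching fingerprint moves the question from “do my eyes believe this?” to “do I trust the channel that published it?”—which is exactly the shift in knowledge she is describing.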

Deepfakes are a concern. Obviously they can be used for good as well, but they can also be used for very, very bad things—not just pornographic imagery or videos, threats, or harassment and things like that, but also influencing political elections, of which we have a lot in 2024. Is it 60 different countries that have elections this year? I might be mistaken, but it is a pretty high number, and the concern is that deepfakes, or other ways of influencing how people vote or think based on something that is not true, will sway some of these elections. Then technology is really interfering with democracy, privacy, our day-to-day lives, and our freedom. That is not what I want us to use technology for. That is not the society I want.

ANJA KASPERSEN: As we wrap up, Elisabet, what would you advise those listening to this podcast? How do you navigate this difficult ethical landscape—a landscape that requires, as you said earlier, knowledge, or at least a deep ability to question how these technologies impact you? What would be your top advice to people on how to navigate this and not feel alienated by these new and emerging technologies in our lives?

ELISABET HAUGSBØ: I would say that no task is too great. Sometimes I feel like we are giving up before we have started because the tasks feel too big, the tech companies are too big, and we can’t fight it.

I think that is fundamentally wrong. Yes, the tech companies are big, and they are run on the monetization of our social activities, but they are our social activities, so we as the users actually have power. As long as we are just individuals, though, we cannot use this power for something good or to fight back. I think that might be my fighting spirit: as individuals we cannot do a lot, but we can use our power as a team, as a party, or as a union and fight back, fight for our rights. To do that we obviously need competence, but we also need to act, because with a lot of things we think: “Who am I? I am just one person. I am a small person. I don’t know anything.”

But, yes, you know things. Ask questions, learn things, voice your opinion, and do it together, and then remarkable things can happen.

ANJA KASPERSEN: That is also the value of having an association with other professionals—joining a trade union, whatever your technical profession is.

ELISABET HAUGSBØ: Exactly. You got my point, Anja.

ANJA KASPERSEN: Thank you so much, Elisabet, for taking this time to talk to us and to share your insights with all of our listeners.

ELISABET HAUGSBØ: Thank you for having me.

ANJA KASPERSEN: To our listeners as always, thank you for joining us for this insightful exploration. Stay connected for more thought-provoking discussions on ethics and international affairs, @CarnegieCouncil. I am Anja Kaspersen, and it has been an honor to host this dialogue. Thank you to the team at Carnegie Council for producing this podcast and to all of our listeners for the privilege of your time. Thank you.
