Nell Watson

Empathy, participation & the future of AI

Today I have the pleasure of speaking with Nell Watson, a tech ethicist, machine intelligence researcher and AI faculty member at Singularity University.

A longtime friend of the podcast, Nell’s interdisciplinary research into emerging technologies such as machine vision and AI ethics has attracted audiences from all over the world, and inspired leaders to work towards a brighter future at venues such as The World Bank, The United Nations General Assembly, and The Royal Society.

A Senior Advisor to The Future Society at Harvard, Nell also serves as an advisory technologist to several accelerators, venture capital funds and startups, and advises The Lifeboat Foundation, which aims to protect humanity from existential risks that could end civilisation as we know it, such as asteroid collisions or rogue artificial intelligence.

She also chairs EthicsNet.org, a community teaching prosocial behaviours to machines, CulturalPeace.org, crafting Geneva Conventions-style rules for cultural conflict, EDCsymbol.org, informing consumers of endocrine disruptors, and Pacha.org, connecting a network of service providers to help enable the automated accounting of externalities (shifted costs) such as pollution.

Nell serves as Senior Scientific Advisor to The Future Society, Senior Fellow to The Atlantic Council, and holds Fellowships from the British Computer Society and the Royal Statistical Society, among others.

Key themes

AI, technology, ethics, empathy, autonomy, dignity, embodiment, telecommuting, virtual reality, robots, resilience, systemic risk, demoralisation, Taylorism, shifted costs, refactoring

Reflection prompt

“Many of the most algorithmically driven companies… are increasingly finding algorithmic means to manage their workforces. We think of tyranny as being some horrible despot that controls everything, but often tyranny is found not in people but in systems. And people can rebel against a person that they can personify and say, I don’t like this guy, and I don’t want to live under that person’s yoke. But systems are harder to rail against.”


Nathalie: Nell, thank you so much for joining me in conversation on The Hive again.

Nell: It’s a pleasure. Thank you very much for inviting me to participate.

Nathalie: I just thought if we’re talking about technology and the future of humanity, you were the top on my list to be in conversation with. So I’d like to start with a conversation starter that I invite my guests to participate in. At this point in human history, what do you feel is happening in the global human psyche? If we can use that frame.

Nell: Well, there’s a lot going on, for sure. Let’s see. We’re seeing a lot of increasing polarization in many different nations around the world, particularly kind of political or sectarian polarization. And I think a lot of that is somewhat to do with algorithms, somewhat to do with social media driving that process. There are some issues with echo chambers, there are some issues with it being all too easy to stir up conflict, or to get dragged into conflict, which is stirred up by a small percentage of people. We’re living in a time now where it’s also all too easy to figure out where somebody might lie on a political spectrum.

I mean, it used to be that if you kept your mouth shut and as long as you weren’t distributing pamphlets or something, nobody could really figure out where you stood on some political issue. They could figure out your sex, or your ethnicity, or your pregnancy status, or something like that, perhaps at a glance, but not politics. Even your religion, they might infer, if you’re wearing a cross or a kippah, or something like that, or observe certain holidays. But politics was something private. And now that’s not necessarily the case. Now, an employer can monitor an employee’s browsing of a website on their lunch break, and whether they happen to look at Breitbart or Jacobin, that kind of thing can give you an insight into where they lie on that spectrum.

And in these times, there are increasingly both internal and external pressures and activism to deny certain people access to certain employment opportunities, or even housing opportunities, or potentially public services. I mean, we’re really at the start of this, but I grew up in Northern Ireland, myself, and I’ve seen where this can go, and how far this can go. Northern Ireland experienced decades of civic strife and cold civil war. And it only really began to end in the late 90s when we had an agreement that everybody pretty much across all communities ratified to contain conflict, to create a sort of a Geneva Conventions for the sectarian wars, I suppose you could say. And we set a limit on conflict.

We said, yes, you can have a disagreement, a bitter one, perhaps. But it has to be a war of words. Discriminating as to whether somebody can gain employment or gain access to housing, et cetera, or access to public services is off limits. The conflict cannot spread that far. And along with some other reforms, for example to some of the policing institutions within society to ensure greater inclusion and greater representation, once those rules were implemented, 80% of the conflict disappeared almost overnight.

Nathalie: Extraordinary.

Nell: Yeah. And the children growing up today have a very different experience than I did myself as a kid. And that gives me a lot of hope, actually, despite a lot of the cultural schism going on, the cultural conflicts, and the ongoing demoralization of many societies, which I believe has become a new theater of war. There is a shadow Cold War going on, with many different entities attempting to demoralize many different other cultures, or possibly to target individuals for particular demoralization. But I think the example of Northern Ireland shows that when we implement decent rules above conflict, and hold all parties within that conflict to adhere to those rules, then a lot of the bitterness and egregiousness of that conflict can actually disappear.

And so I came up with some suggestions at a little website called culturalpeace.org, where I’ve got a series of principles there, which I believe would make reasonable rules, which may be more or less agreed by many different groups within society and across societies for containing conflict, for setting limits on how far that kind of cultural conflict can spread. It’s an open process. So I have a whole bunch of editable documents that people can go in and suggest, this needs to be in there or this shouldn’t be in there, et cetera. And the goal then is to attempt to establish this as a crib sheet for anyone wanting to create legislation that might reduce polarization within society.

I think that one important aspect will be the necessary introduction of political creed or belief or orientation as a protected class within employment, and possibly other areas of society as well. I think once that came in, in Northern Ireland, things improved a lot. And I think that things are going to get a little bit worse until we take similar steps to protect people from that kind of exclusion based upon their perceived political affiliation. So I think with that element included, I think we can begin to steer humanity out of pointless anger, out of pointless scapegoating, and hopefully bring a faith in society and its institutions and democratic principles back in, for kind of a future that’s overall happier.

But there’s another element that needs to be there as well, if I can segue into that, and that is finding ways to curb inequality. Because one of the reasons why we’ve seen this great proliferation of polarization and identity politics, is because for a lot of people, they’re not experiencing a materially better world in real terms than say, 20 years ago. A lot of people with tremendous power and wealth have further consolidated that ability to have greater control and influence over society. And they’ve done that through technology. And we’ve seen that even during the pandemic, where the richest got richer by a massive degree, and that gulf between rich and poor, ever expanded.

And until that problem begins to be addressed fully, I think people’s identities will be ever more in their mind as a way of trying to protect their own interests, which is probably pyrrhic and probably futile, and not going to lead to necessarily any better conditions for them. But it’s something that they feel is their only escape button. Right? Revolutions are people’s way of slamming the brakes. We think of revolutions as being about change, but really it’s about the opposite. It’s about people really don’t like where things are heading, and they want to arrest that process. And I think there’s a great deal of people in our world today who increasingly feel suspicious about where the future is taking them. And there’s been a tremendous loss of faith that the future will be a better place than the past. And that needs to change, really, if we’re to solve the ongoing conflicts.

Nathalie: It’s so interesting hearing you speak about these different layers of what we’re facing, and the piece in particular that struck me was your use of the word demoralization, to demoralize people. Because if you can move people into that state of apathy, and you can make them fight for what they have, which is their identity, then suddenly it becomes much easier for people to be kind of inactivated. They don’t feel they have agency, they can’t effect any real change, and then the war is already won. And actually, to that point, if we’re talking about the concerns that people have about the future, one of the things that I’m very interested in, which you know a lot more about than I do, is about the ways in which the pandemic and the lockdowns in particular, have catalyzed the adoption of certain technologies.

So I’m thinking in particular, I was reading today actually about surveillance technologies that are then harder to roll back once they’ve been propagated. So there was an example, which is a contentious example, about facial recognition tech that claims to track emotional expressions through webcams in real time. And an example was this AI-powered learning platform called 4 Little Trees, which is designed by the Hong Kong-based startup Find Solution AI. And it was rolled out to monitor, ostensibly, students’ emotions as they study at home.

Now, obviously, there’s a lot of psychologists who are talking about the issues of reliability and validity and whether it actually measures what it purports to. But obviously, there’s also another question, which is, what questions should we be asking ourselves and who gets to make the choice and consent to whether this gets rolled out before we actually roll this tech out? What are your thoughts about that general concept of what technologies we choose to roll out, how we do it, who has consent? I know that’s a very big, open question to throw to you.

Nell: It’s a huge question. To a strong extent, it also may reinforce inequality further. Because traditionally, blue collar jobs were jobs of manual labor and white collar jobs were jobs of mental labor, give or take. But I think we’re coming to a world where that is redefined. And blue collar positions are those roles which are principally governed by machines or principally governed or managed by algorithms. And white collar ones are roles that have more autonomy. Around about 100 years ago, there was this concept of Taylorism. Time and motion. The idea of using scientific management to squeeze more productivity out of the workforce. And it turned out to be a bit of a disaster. I mean, it looked good on paper, because they implemented some idea and productivity went up. But of course, it was really just the observer effect. People knew that they were being watched for the experiment and so they tried harder, and it wasn’t sustainable.

I think that many of the most algorithmically driven companies, for example, Amazon or Uber, are increasingly finding algorithmic means to manage their workforces. It might be things which are relatively benign, such as ensuring that people have the right personal protective equipment on and using machine vision algorithms to ascertain that before somebody can clock on. But increasingly, things like bathroom breaks are being monitored. I have a deep concern that we think of tyranny as being some horrible despot that controls everything. But often tyranny is found not in people but in systems. And people can rebel against a person that they can personify and say, I don’t like this guy, and I don’t want to live under that person’s yoke. But systems are harder to rail against, because there’s typically no avatar of a system that one can point to.

And my fear is that we will be increasingly bullied around by algorithmic busybodies who work to asinine metrics and kind of frog-march us into doing different things to such an extent that we end up exhausted and physically and mentally burned out. And I think a lot of people in those neo-blue-collar positions will experience that kind of phenomenon for themselves. And I think it’s very important that we find ways to improve people’s autonomy, to respect the dignity of the person, and not to implement algorithmic solutions for line management unless they’re very strictly necessary. And I think this is going to be an increasingly large area of discussion when it comes to labour, labour disputes and unions, and that kind of thing.

Nathalie: So when it comes to what qualities, and this is an interesting one with you, because you talk a lot about ethics. And I know in some of our previous conversations, we’ve explored this fascinating question that you raise of creating systems or teaching AI to act more ethically than we are able to act as humans. So the question, you can take this in any direction you wish, but are there human qualities that you feel cannot be replaced by machines, where people think about their professional lives and about being resilient by upskilling or whatever it is that they need to so that they don’t get replaced through automation?

On the one hand, are there human qualities that we need to be cherishing and focusing on? And on the other hand, are there certain qualities that machines can bring, that we could draw from if we’re thinking about creating society, which is, as you say, more just and more equitable, which I know lots of people are very concerned about?

Nell: Yes. I mean, there are plenty of opportunities to improve people’s daily working lives. We can enable machines to do a lot of the heavy lifting. If we think of great inventors and artists like Leonardo da Vinci, they typically had a small army of apprentices and assistants helping them to implement things, whilst they had the grand ideas, which, without that support, they probably could not have manifested. It would have remained scrawled in some codex somewhere and not been breathed into existence. So machines can help us with that. They can make some suggestions, they can help to improve and optimize ideas or optimize processes. And yeah, they can help us to realize our visions in interesting ways we perhaps hadn’t considered. And doing the heavy lifting of some of the more boring tasks is probably, all things considered, going to be positive and beneficial.

I think working in tandem with machine intelligence is going to be a key skill in the 2020s and beyond. The same way that people had to learn how to type in the 80s through 90s. They had to learn the Microsoft Office suite and they had to learn about desktop publishing, those sorts of basic office skills that all of us take for granted these days that everyone must have, people had to learn those. It used to be that the bosses didn’t type, they didn’t know how to type. And in fact, it was a difficult sell to persuade a manager to use a computer, because it had a typewriter-like keyboard on it, and they considered that infra dig. And so that culture had to shift before people started using these machines and productivity increased greatly through them, although commensurate wages probably did not.

I think there’s going to be a similar learning curve, where most people will learn how to work in tandem with machine intelligence, and some people won’t. Just the same way that some people missed the computerization wave and ended up kind of locked out of career advancement. I think a similar thing might happen in the 2020s if we don’t encourage people at an early stage to become more familiar with working in concert with AI and to understand the benefits of why they should do so.

Nathalie: So part of that adaptation, I think, is also being able to respond to and pick up other interfaces as they emerge. And one of the things I want to ask you about is what you think about the new wave of optimism around VR and how it could transform the way in which we communicate. So one of the points that you mentioned earlier was about the movement of people before we jumped on to the recording part of the talk. Can you tell us a little bit about your thoughts and hopes about what VR could do to help us better connect with and relate to one another in the future?

Nell: Yes. VR has a bit of a problem in that it needs to be experienced to be truly understood. It’s something where it’s a bit difficult to describe why it’s important or why it’s transformative. And so, if one hasn’t had a good experience of recent VR, not just the janky stuff of the 90s but what’s really possible today, I recommend seeking one out. There are malls all over the place that have VR gaming sessions set up, and I recommend everybody go and spend half an hour and have a go at it, and see how nauseated they feel; it’s probably a little less than they might expect.

But immersive experiences like games, particularly, virtual reality games, can provide so many opportunities for increased empathy. To give an example, there’s a game called Mafia III, which I really enjoyed. And it’s set in Louisiana in 1968. And so there’s still this kind of Jim Crow kind of culture going on and the protagonist is a black man. And actually wandering around as this black man avatar in that situation, and then sort of experiencing some of the treatment or just casual insults and things from people actually gave me a much stronger emotional empathy, not just cognitive, because of course, we understand that those times were very difficult for many people.

But the emotional sense, having that hurled at my avatar and experiencing that, I found it a very moving experience. It gave me a lot of emotional empathy for how people must have felt in that situation and why there was so much anger at that time in history. And I think that’s a nice example. Another is the work of Nonny de la Peña, a researcher who creates examples of war situations and refugee camp situations that people can explore in VR and get a very strongly vicarious experience of what it is like to be in those situations. And I think that’s a great way for teaching people more about the experiences of others in a way which is much more direct, and sidesteps our cognitive processes and gets right to the emotional core of how something feels, not just the word cloud associated with something.

But I think VR can take us into mixed realities as well. I happen to be a judge on the ANA Avatar XPRIZE, and that’s a competition for the best avatar robot systems. So an avatar is basically like a robot that you can directly control and that will mirror your own movements. And in these times of social distance, that kind of technology is ever more important. But it provides new opportunities, for example, for somebody to be physically in one place but use their skills to do some medical work in another place. Or an engineer might inhabit a robot in a remote location up in Alaska to fix an airplane or something like that, without physically needing to go there.

These technologies are advancing at an incredible rate. I’m continually blown away by just how sophisticated and how relatively affordable many of these systems are. We’re talking $5,000 or so. And this will have geopolitical ramifications, I believe. Because at the moment, so many people in the world are convinced that for them to do well in life, they need to leave where they are and go somewhere else. And I think a lot of that is because people don’t have hope. They don’t have hope that they can have a better life if they don’t do that.

But through this new wave of avatar technologies, which will arise in the 2020s, people will have the ultimate way of telecommuting from any corner of the planet, especially because of satellite networks like Starlink, which have very high bandwidth, very low latency connections. And that’s, of course, very important if you’re operating a robotic avatar in some other part of the planet. You want that latency as low as possible and the bandwidth as high as possible, so you have a high-fidelity experience both ways.

So I believe that’s going to give people hope to do meaningful work in other parts of the world, and bring that hard currency back to their own community, to enrich them and their families without needing to say goodbye to those families and friends and leave them far behind to go on a very uncertain journey. And actually, this embodiment, this mashing up of humans and machines, is also going to improve machine intelligence. Because all of the sensors, et cetera, all that information can be recorded. And you can use that then to train actual machine-driven robots instead. Simple things like how to sit down, how to greet somebody, how to shake somebody’s hand. Just the same way that self-driving vehicles recorded millions of hours of human driving in order to learn from those experiences, robots will learn from human-piloted avatars.

Nathalie: It’s almost like we’re en-souling the machine.

Nell: Yes. We’re giving it an embodiment. Yes. And that embodiment will give it a proper social presence, and will further integrate these technologies in our lives. So the 2020s is going to be the wave of avatars and the 2030s is going to be the wave of properly sophisticated domestic and workplace robots. And the first wave is going to lead the second. But before that embodiment of robots, we are already going to be very strongly working in concert with machines, with machine intelligence for our personal and professional lives. And I think we’re about to have a Sputnik moment in AI.

It’s going to come around once people realize that instead of talking to their AI assistant in their kitchens or in their pockets, and asking what tomorrow’s weather is going to be, once that AI assistant is able to ask how their day was and meaningfully comment on it, or crack a joke about some situation that they have experienced today, that’s going to be a moment where there is a moral panic about AI and its capabilities. And that’s coming very soon, because of technologies like GPT-3, which has let the cat out of the bag. The secret is out, just as in 2010 the secret came out about deep learning. And the secret in that case was: build really deep neural networks, throw lots of data at them, and you get unreasonably good results.

And now the secret is out again: build really massive, colossal models, investing millions in developing them, and the results are, again, disproportionately effective. And that means there is currently an arms race amongst many different organizations around the world. Big tech, intelligence agencies, militaries, all of them are dumping huge amounts of resources into these massive models. And they have to, because not doing so would create an existential threat. The GPT-3 algorithm can function as a very powerful search engine, one not based on keywords, but able to pick up subtle nuances. So if you ask for a joke about a duck, it doesn’t have to match keywords indicating that this is a humor website and that we’re talking about waterfowl.

The new algorithms might be sophisticated enough to tell you a limerick about an ostrich instead, something that the keywords wouldn’t have brought up for you. But that means that if you’re a large search engine company with millions of dollars of R&D budget, or possibly hundreds of millions, you need to be investing, because otherwise somebody might come and disrupt you and take away your whole business model within three years. So an arms race is afoot. And it means that AI technologies are going to come hard and heavy. And within about a year or so, we will start to see that kind of conversation style experience with your AI assistant. And that’s when people are going to really start freaking out about AI, I believe.
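The retrieval idea Nell describes, matching on meaning rather than on shared keywords, can be sketched in miniature with cosine similarity over embedding vectors. The vectors below are invented toy numbers purely for illustration; real systems like GPT-3 learn high-dimensional embeddings from data rather than using hand-labelled dimensions.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy, hand-made "embeddings" -- invented for illustration only.
# Pretend dimensions: [bird-ness, humour, water]
documents = {
    "duck joke":        [0.9, 0.9, 0.8],
    "ostrich limerick": [0.9, 0.8, 0.1],
    "plumbing manual":  [0.0, 0.0, 0.9],
}
query = [1.0, 1.0, 0.5]  # "tell me a joke about a duck"

# Rank by semantic closeness, not keyword overlap: the ostrich limerick
# still scores highly even though it shares no words with the query.
best = max(documents, key=lambda d: cosine(query, documents[d]))
print(best)  # → duck joke
```

The point of the sketch is that the plumbing manual, despite containing the "water" sense, ranks last, while the ostrich limerick ranks close behind the duck joke on meaning alone.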

Nathalie: I’m already freaking out about it. And you’re one of the most inspiring speakers about it, because you look at the ways in which it can really aid us and help us. But I know that there are a lot of actors who don’t necessarily want the best for people, they don’t want to help individuals and communities self-actualize and create hubs of hope or what have you. But I’m also conscious that I want to take this in a different direction in a different project that you work with.

And it connects with one of the themes that’s present in my book and the research for the book, which is what appears to be now becoming a transgenerational shift around concerns for sustainability, social justice, how we design our systems, which of course, machine learning and AI will have a lot of impact upon. So I’d love to ask you a bit more about your work with pacha.org, what it is and how it hopes to transform the ways in which businesses and economies operate. Because it’s quite an exciting, revolutionary idea.

Nell: Thank you. Well, it’s still somewhat conceptual, but I am hashing out some potential protocols for it, and we’ll see where it goes. But at the moment, it’s a resource for figuring out what ventures are in this space, and it’s really a question of how we begin to link them together. So an observation that I came across a while ago, in fact, I would almost call it an epiphany, was that one of the greatest problems in our society is shifted costs. And that’s where somebody does something and it affects somebody else unrelated.

Things like pollution, right? You have a factory, and that factory is making goods, and that’s profitable. But it’s also perhaps generating pollution. And somebody else has to pay for that pollution. They pay in their health, they pay in the walls of their dwelling getting dirty and needing to be repainted every other year. Those effects might even extend to agriculture; the pollution might blot out the sun a bit and make local farms less productive as well. And really, in our world today, since the Industrial Revolution, since the advent of mass production in the First and Second World Wars, we’ve got really good at making things, and making things which are often incredibly sophisticated.

But we’ve got no better, really, at taking things away at the end of their working life. We still throw things in landfill. Yes, we might do a little bit of recycling now and then, which is great, but it’s very hard to make a profit on that. And often, there have been scandals whereby recycled goods have ended up in landfill anyway, simply because it wasn’t economically viable to genuinely recycle them. So there is an element missing from economics. That element is about understanding shifted costs, which in economics terms might be described as an externality, a negative externality. It can sometimes be a positive externality, like when a rich person invests in a medical treatment and the benefit then passes on to other people. But typically most of these externalities are negative.

But we have the opportunity, through machine learning, the Internet of Things, smart contracts and cryptography, and perhaps a layer of machine ethics as well, machines understanding how different decisions may impact different people, for the first time to have the ability to do automated externality accounting. The ability to track shifted costs in real time, to figure out who was affected by them and to what degree, and to generate a path of redress, so that somebody is obligated to make good on the costs that they have created. And if we can do that, then we can have a truly sustainable economy without needing to deconstruct our industrial base. We can just about have our cake and eat it too.
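The accounting loop Nell outlines, measure a shifted cost, price it, and route redress to those affected, can be sketched in miniature. Every name, rate, and share below is an invented placeholder; a real system would draw on sensor data, agreed pricing, and a mechanism for identifying affected parties.

```python
# Toy sketch of automated externality accounting. The emission price and
# the affected-party shares are assumptions made up for illustration.

EMISSION_PRICE = 50.0  # currency units per tonne (hypothetical rate)

def settle(tonnes, affected_shares):
    """Price a measured emission and apportion redress to affected parties.

    affected_shares maps each affected party to its fraction of the harm;
    the fractions are expected to sum to 1.0.
    """
    cost = tonnes * EMISSION_PRICE
    return {party: round(cost * share, 2)
            for party, share in affected_shares.items()}

# A factory's measured 12 tonnes of emissions, with the harm split
# 70/30 between two (hypothetical) affected communities.
owed = settle(12.0, {"downwind_village": 0.7, "local_farms": 0.3})
print(owed)  # → {'downwind_village': 420.0, 'local_farms': 180.0}
```

The design point is that the redress path is computed automatically from the measurement, rather than litigated after the harm has accumulated.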

It feels to me as if all of the necessary elements are there. And many of the organizations that I list on pacha.org are doing fantastic work. But it’s all a little bit siloed. It’s as if we’re at the internet in 1989, where everyone can tell that it’s going to be big. But to do things on the net, you have to connect to a server and then upload a file or something like that. You can’t surf. But then came the advent of the World Wide Web, and then you could surf between different servers just by typing in a simple address, and that changed everything. And that protocol and that interface is missing. But if we can create it, then we can link all of these resources together to create a kind of an internet of planet, a way of understanding what’s going on in terms of the environment. And we can protect not only financial capital, but we can protect natural capital as well and social capital.

And when we can do that, I think it’s also going to feed into polarization, strangely enough, because of course, technology and culture often interweave with each other. But one of the reasons why people cling to their identity so much is because they may feel as if something is being taken away from them through some development or some new law. They have a perceived sense of loss. But often that is difficult to articulate. It is difficult to articulate how somebody may benefit from something, whilst that private individual enjoyment may impose some costs on wider society. For example, in China, they fine people who are single mothers, because they perceive that single motherhood, all things being equal, statistically tends to lead to worse outcomes for children. And that creates a shifted cost upon society of children who are more likely to be tearaways. And so they oblige people to pay for that choice or that decision.

Now, I don’t think that’s necessarily correct or that’s necessarily how things should be done. But it’s an interesting perspective on things. And I think that, honestly, so many of our conflicts of not being able to understand each other and not being able to understand each other’s perspectives, being bewildered by why somebody would think a certain thing, is because we’re not talking in terms of shifted costs, we’re talking in terms of good or evil or values and things. But if we can abstract it and say, yes, I appreciate that this may create a shifted cost, but let’s counterbalance that with individual liberty, or let’s find a just way of balancing these things so that people can enjoy their individual liberties. But that excess of liberty at the expense of the societal commons or the social fabric should also be offset in some way. And I think if we can have that kind of conversation, then we can actually begin to resolve conflicts which seem completely intractable, I believe.

Nathalie: So an example in terms of sustainability might be that if you choose to buy a really nice but gas guzzling car, the shifted cost is placed upon not only the manufacturer, but the consumer who buys the car. So it becomes more expensive. And eventually, the non-gas guzzlers fall in price, become more attractive, and so the shifts occur in that way. Is that also part of the idea with this framework?

Nell: Precisely. Precisely. Yes. Because, really, it’s about fairness and about justice. Because the costs exist and somebody has to pay for them. And it’s usually those who are most impoverished who have difficulty escaping from those things. I mean, there are tremendous amounts of lead embedded in streets and embedded in bushes and things adjacent to highways from tetraethyl lead, which was, of course, the additive put in petroleum for decades. We stopped doing that in the 80s and 90s. But that legacy continues, and it’s not really going away; it’s embedded in the environment.

And that means that statistically, youth growing up in urban environments are more likely to have mental issues, or they’re more likely to have reduced IQ because of their environmental exposure to that. And it will take decades, maybe centuries for that to truly go away, for those poisoned parts of the world to become pure again. And if we were able to track those kinds of costs, then we could begin to make a live redress for them before they build up to such a degree that they take years and years to fix.
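The shifted-cost accounting Nell describes could, in principle, be sketched as a tally of externalities priced back onto the activity that creates them. The categories and rates below are invented purely for illustration; they are not real data and not drawn from Pacha.org’s actual system.

```python
# A hypothetical sketch of shifted-cost accounting: pricing externalities
# back onto the product that creates them. All rates here are invented
# for illustration only.
EXTERNALITY_RATES = {        # societal cost per unit, hypothetical
    "co2_kg": 0.05,          # e.g. dollars per kg of CO2 emitted
    "lead_g": 2.50,          # e.g. dollars per gram of lead released
}

def shifted_cost(footprint):
    """Total cost a product shifts onto society, given a footprint
    mapping externality names to quantities."""
    return sum(EXTERNALITY_RATES[kind] * qty
               for kind, qty in footprint.items())

# Rough annual tailpipe CO2 for one gas-guzzling car (illustrative figure)
gas_guzzler = {"co2_kg": 4600}
print(f"annual surcharge: ${shifted_cost(gas_guzzler):,.2f}")
```

Under such a scheme, the surcharge flows back as the price signal Nathalie describes: products with large footprints become more expensive relative to cleaner alternatives.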

Nathalie: That sounds like a very exciting vision. And if there are business leaders listening, you need to get in on this. So if I asked you to envision what a resilient business might look like, what comes up for you?

Nell: I think the pandemic period has been a wake-up call about systemic risk. We live in an increasingly interconnected world. And that’s very efficient. But that interconnectivity comes with systemic risk: the system can break down or be disrupted in different ways. And often we tend to ignore tail risks. We tend to say, well, nobody can really predict that this might happen, so we’ll just put it out of our minds. And then, of course, things happen, right? Things happen that we didn’t expect, but that we could have expected, because statistically, we knew they were going to happen eventually.
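The statistical point here can be made concrete with a small back-of-the-envelope calculation (my illustration, not from the conversation): a "1-in-100-year" event is rare in any given year, yet near-inevitable over a long planning horizon.

```python
# Illustrative tail-risk arithmetic: the chance a rare event occurs
# at least once over a horizon, assuming independent years.
def prob_at_least_once(annual_prob, years):
    """Probability of at least one occurrence in `years` years."""
    return 1 - (1 - annual_prob) ** years

# A 1%-per-year ("1-in-100-year") event:
for years in (10, 30, 100):
    print(f"{years} years: {prob_at_least_once(0.01, years):.0%}")
# Over 30 years the chance is roughly a quarter; over a century,
# nearly two-thirds: "rare" stops meaning "ignorable".
```

This is why organisations that plan only for the typical year systematically underweight events that are, over their lifetime, quite likely.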

And organizations that have an understanding of tail risk and systemic fragility are those who are, all things being equal, more likely to be resilient. They’re going to have those contingencies in place. They can switch from one business model to another, or from one channel to their consumers to another. Or they can possibly even use one production facility to switch to a different product, one that is more needed in that time, and perhaps even something pretty crucial. We’ve seen plastics companies and paper companies switch to making masks in a way that is not only profitable, but of course, societally so important as well.

I think this is going to be a domain of study for a long time, now that we’ve had this big whopper of an issue come up. And we’ve noticed the domino effects of one small issue creating a larger issue and then creating a firestorm of problems that quickly gets out of control, and has very long and tragic ramifications for many people around the world. So resiliency is about thinking about the future, taking reasonable steps to build contingencies in case the worst should happen. And that requires a lot of modeling upfront, a lot of rapid prototyping of potential alternative plans. What if this happens? What if there’s a flood? What if there’s another pandemic? What if there’s a geomagnetic storm and we lose GPS for six months? How might that affect our suppliers? How might that affect our deliveries, et cetera?

So those kinds of war games are going to be very important for protecting companies in the future. And I would say increasingly important, because we live in an increasingly interconnected world, and a world of increasingly bizarre events, unfortunately. That just goes hand in hand with things accelerating the way that they are. And so this is going to be top of mind for businesses well into our future.

Nathalie: So thinking about the future then, and about your wildest hope for what it could be, what kind of world might you like to see emerge from this crisis and onwards?

Nell: I would like to see a world of new and reformed institutions. In the wake of the Second World War, we had masses of people moving in the millions, we had terrible starvation, we had geopolitical squabbles over who got to carve up which bits of Europe, we had the emergence of terrifying nuclear weapons, and a new order whereby the US was beginning to eclipse Europe and its colonies, as well as waves of decolonization around the world, particularly in the Asian sphere. The world was changing massively, and we recognized that we just didn’t have the institutions to deal with that. We’d had a go with things like the League of Nations, but it didn’t really work out so well.

And so we had to come up with things like the United Nations. We had to come up with things like NATO. We had to come up with things like the European Coal and Steel Community, which eventually evolved into the EC and the EU. And largely, that did make for a better world. It significantly contained the excesses of conflict. There were a lot of stumbles along the way, there were a lot of failures, and there continue to be failures with these institutions. But it did lead largely to a better world. The second half of the 20th century was in many places significantly better than the first, with a few notable exceptions.

I think we’re at a similar time now. We’re recognizing that the world ahead is going to be very different from the world that has been. That our issues are becoming potentially existential threats to humanity, not just to businesses or nations. And that we do need to build new institutions which can deal with that. Institutions which are more flexible, which are more participatory, which aren’t based purely in Manhattan or Brussels, which are able to include more people, and to deal flexibly with issues before they arise, not as they arise. I think this is a great opportunity to do that, because some cracks are starting to show in some of our institutions, those which have often genuinely preserved us over the last 70 years or so.

I think that now we have the momentum to overcome the inertia and ensure that we can refactor these institutions so that they last through the 21st century, because otherwise, they might not. We might give up on them. We might think that they’re simply not adequate and fall back on nationalist ideas, or every man for himself, so to speak. And I think we can take some of the lessons from software engineering, which has, in so many ways, eaten the world in recent years. There’s a concept of refactoring in the world of software. It’s where you have a system which is very complex, does complicated things, and works.

But you get into technical debt, because the system is so complicated that it’s very difficult to upgrade it, very difficult to improve it. Because as soon as you fix something, something else tends to break, and you get a chain reaction. It’s like a ladder in your tights: one little thing breaks and then the rest comes apart. But we have an opportunity to refactor our institutions, to take a complex system that does a thing and make it still do the thing, but in a much simpler way. And we can apply things like AI to do that. Because now we have algorithms that can take a page of very dense, complex bureaucratic legalese and spit out a simple paragraph that almost anyone can understand.
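The software sense of refactoring Nell is borrowing can be shown in miniature. This is a toy example of my own: two functions with identical observable behaviour, where the second is the refactored, simpler form of the first.

```python
# A toy illustration of refactoring: both functions compute the same
# result (behaviour is preserved), but the second is simpler to read
# and to change. Example invented purely for illustration.
def total_before(items):
    # Tangled original: manual loop, mutable running total,
    # duplicated price logic across branches.
    total = 0
    for item in items:
        if item["taxable"]:
            total = total + item["price"] + item["price"] * 0.2
        else:
            total = total + item["price"]
    return total

def total_after(items):
    # Refactored: one expression, behaviour unchanged.
    return sum(i["price"] * (1.2 if i["taxable"] else 1.0) for i in items)

items = [{"price": 100, "taxable": True}, {"price": 50, "taxable": False}]
assert total_before(items) == total_after(items) == 170.0
```

The assertion is the essence of refactoring as Nell uses it: the system still does the thing, but the simpler form is far easier to upgrade without the chain-reaction breakage she describes.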

And if we can do that with our laws, with our tax regulations, with all the bureaucracy that seems to just get stronger and stronger, and simplify things, then I think we have a tremendous opportunity to make our institutions radically more efficient, and flexible enough to cope with our increasingly digitized and systemically fragile world.