Supercomputing Could Solve the World’s Problems, and Create Many More

By Georgia Frances King - 22 February 2019

Supercomputers have the potential to turn us into superhumans. Our potential and power increase in lockstep with the tools we have to serve us.

The World Economic Forum’s Global Future Council on the Future of Computing is aiming to shape the direction of that power. Our goal is to define a positive, inclusive, and human-centric future of supercomputing.

What is the future of supercomputing?

Supercomputing has the potential to become the underlying layer supporting solutions to many of the world’s most pressing challenges: global privacy and identity issues, stalemates in medical research, and sustainable supply-chain logistics, to name a few.

As Albert Einstein said, “We can’t solve problems by using the same kind of thinking we used when we created them.” Technology hasn’t created all of those problems – but it could provide some of the answers.

We have a great opportunity ahead of us: designing the supercomputing infrastructure for the next billion people. The first group to consistently achieve this level of computing will have an advantage over the rest of the world; they will unlock all sorts of ways of better predicting the future and analyzing the present. This advantage would likely translate from computing power to economic power.

This race is being run on two tracks: the high-powered computing processes themselves (such as quantum computing), and the physical infrastructure that powers them (such as chips and semiconductors). The convergence of these two roads is the path to supercomputing. As MIT professor Isaac Chuang said of the future of quantum computing: “It is no longer a physicist’s dream – it is an engineer’s nightmare.”

That’s the technical side of it. Once we have made this technological leap, what will we use these technologies for? It is one thing to achieve exascale – a billion billion calculations per second – and quite another to wield it.
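
To put that threshold in concrete terms, “exascale” is a fixed figure rather than a vague superlative – a billion billion written out:

```latex
% "A billion billion" calculations per second, written out:
% one billion is 10^9, so exascale means
1\,\text{exaFLOPS} = 10^{9} \times 10^{9}\ \text{FLOPS} = 10^{18}\ \text{FLOPS}
```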

How will the processing speed and power impact Big Data, domestic privacy issues, and national security concerns? How will it advance computational modeling, natural language processing, and encryption protocols?

Exascale computing promises exciting advances in many diverse fields: personalized medicine, genomics, carbon capture, astrophysics, market economics, biofuel, blockchain, and cryptography, to name just a few.

It will enable us to better predict the weather, solve more complex algorithmic problems (like the traveling salesman problem), explore the edges of our universe, and create a more energy-smart power grid.
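
The traveling salesman problem illustrates why that extra power matters: the only guaranteed-exact classical approach checks every possible route, and the number of routes grows factorially with the number of cities. A minimal Python sketch (with made-up cities and distances, purely for illustration) shows the brute-force search that exascale machines could push a little further:

```python
from itertools import permutations

# Hypothetical cities and pairwise distances, purely for illustration.
dist = {
    ("A", "B"): 2, ("A", "C"): 9, ("A", "D"): 10,
    ("B", "C"): 6, ("B", "D"): 4,
    ("C", "D"): 8,
}

def leg(a, b):
    # Distances are symmetric: look up the pair in either order.
    return dist.get((a, b)) or dist[(b, a)]

def tour_length(order):
    # A tour starts and ends at city A, visiting the others in `order`.
    stops = ("A",) + order + ("A",)
    return sum(leg(x, y) for x, y in zip(stops, stops[1:]))

# Brute force: try all (n-1)! orderings of the remaining cities.
best = min(permutations(("B", "C", "D")), key=tour_length)
print(best, tour_length(best))  # ('B', 'D', 'C') 23
```

With four cities there are only six round trips to check; at 20 cities there are more than 10^17, which is why raw speed alone cannot tame the problem – but it does move the frontier.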

But it also comes with potentially concerning repercussions. For every way a tool can be used to advance the common good, the same methodology can often be used for menace; a hammer can be used both to build a house and to cause harm.

High performance computing (HPC) will fuel AI. Image: Supercomputing 2018

The not-so-super side of supercomputing

Supercomputing will bring with it a great many positives – but it could also speed up the perpetuation of existing biases. Some of the areas that will have to be closely monitored include:

⦁ Predictive policing and algorithmic decision-making – in the incarceration system or in college admissions, for example.

⦁ Cryptocurrency mining – imagine how fast a true quantum computer could mine Bitcoin.

⦁ Privacy – Spectre and Meltdown, the hardware vulnerabilities that allow programs to steal data, are just the beginning.

⦁ Job losses – automation en masse could put whole categories of work at risk.

We also run the risk of exacerbating the current racial, gender, and socioeconomic inequalities that are reflected in the technologies we create. These issues will compound much faster under the auspices of supercomputing. How can we prevent ourselves from embedding bias into the foundational level of exascale computing?

Then there is the additional environmental impact of supercomputing. The amount of water and energy required to cool these mammoth machines might all but cancel out whatever sustainability gains they help compute. A 1,000-petaflop computer can consume as much energy as an average coal plant produces – enough to power San Francisco for a year. How can we weigh the technological benefits against the environmental costs?
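
The scale of that energy bill is easier to grasp as a back-of-the-envelope calculation. Assuming, purely for illustration, a steady 20-megawatt draw (the ballpark publicly discussed for exascale systems), a year of operation works out to:

```latex
% Illustrative figures only: assume an exascale system draws a steady
% 20 MW. Energy = power x time, and a year is roughly 8,760 hours:
20\ \text{MW} \times 8{,}760\ \text{h} \approx 175{,}000\ \text{MWh} = 175\ \text{GWh per year}
```

That is roughly the annual electricity use of more than ten thousand average US homes – before counting the water needed for cooling.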

These are just some of our known concerns. However, if we want to advance into the future, we need to reimagine the problems that could be both created and solved by supercomputing. What do we not know, and how can we open our minds to ask the right questions to bring us there? Some of those include:

⦁ For whom should we build, manage, regulate, and develop these new computational systems?

⦁ How will we power them in a sustainable, environmentally conscious, and cost-effective way?

⦁ How do we make this computational revolution inclusive and beneficial for all regions and citizens across the globe to create a more egalitarian society?

⦁ How do we avoid unintended consequences that could arise from the computational systems we are building?

⦁ How do we meet our needs without impinging on future generations and other communities?

⦁ How do we make sure that innovations in AI and supercomputing are aimed at productive ends and do not result in a new arms race?

⦁ And who organizes, who pays, and who benefits?

Telling a responsible story

When trying to answer these questions, we can draw on two predominant visions of humanity: utopia and dystopia. Both visions are equally important when considering any exponential technology.

Utopian visions give us a glistening best-case scenario to run towards; dystopian visions give us a murky worst-case scenario to run away from. Despite their different impetuses, both perspectives force movement in the same direction – the outcome is the same, but their velocity comes from a different energy source. Are we running towards the future we want to see, or are we running away from a future in which we are fearful to live?

But unlike traditional computing, the future is not binary; it oscillates between both states depending on context and personal perspective. We therefore need to learn how to hold both our utopian and dystopian ideals in each hand at the same time – kind of like how a qubit simultaneously holds the capacity to be both a 0 and a 1. It’s the superposition of the future.
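
The qubit comparison has a precise form. Until measured, a qubit is not a 0 or a 1 but a weighted combination of both – in the standard notation:

```latex
% A qubit's state is a weighted blend of both classical values at once;
% the squared magnitudes of the weights are probabilities and sum to one.
\lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
\qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
```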

Whatever puts the wind in your sails, both imaginings are only useful when they translate to real-time action. If we want to see a future where rural communities can have access to exascale processing power, what steps can we take to make that a reality? Or if we don’t want to see a future where access inequality has created a sub-class of untethered society, what protocols can we put in place to prevent a division of opportunity?

The 2018 iteration of the Global Future Council on the Future of Computing laid the groundwork that this year’s members will build on. While the first Council focused mainly on the technological realities of the future of computing, the second will look at the ways we can responsibly apply these technologies across all industries.

Our mandate is: “New computing innovations are the nervous system for the Fourth Industrial Revolution, powering technologies that transform businesses across all industry verticals, and with immense potential to greatly benefit society. With this great potential also come great risks – how can we ensure that computing technologies are adopted responsibly, with sustainability and the greater public good in mind?”

Exponential technologies create the potential for both exponential advances and peril. We must move forward with confidence, but we must also keep our eyes open to the unintended consequences of innovation.

The opportunities that exascale computing could bring to the world – to developing nations, to scientific advances, and to corporate interests – are immense. But so is our responsibility to wield them correctly.

As we move from the conclusions of the first Council to the goals of the second, we will build upon what has already been achieved to advance a human-centric vision of the future of computing. After all, it’s not only computers that will soon go exascale: humanity will, too.

Georgia Frances King, Ideas Editor, Quartz.

This post first appeared on the World Economic Forum's Agenda blog.

Image credit: National Nuclear Security Administration via Flickr (CC BY-ND 2.0)
