
SCIENCE AND SOCIETY: The future of science

An essential characteristic of science is its ability to predict the future: theories must generate testable predictions that can be confirmed or refuted.

To claim that an analysis is “scientific”, as with historical materialism in its original form, is to say that data and theory from the past are being used to make statements about the future.

It is exactly this same predictive capacity that drives the contemporary fascination with data, algorithms and AI.

Unfortunately, algorithms do best when asked to replicate the well-documented past. A stark example was the algorithm used to predict and award A-level grades to students who could not sit exams during the 2020 pandemic.

The algorithm succeeded in replicating the annual inequality between rich and poor schools, and deservedly caused an uproar.

Besides the ability to predict the future, science was important to leftists and revolutionaries in the 19th and 20th centuries because of its association with technology and progress.

At the time, the left imagined the ways in which technology would enable a better quality of life for everyone.

The Soviet Union is known for its spectacular investment in science and its major successes, as well as for the tragedy of Lysenkoism.

In contemporary Britain, the alignment of science with progress has largely been absorbed into the dreams of capitalists, leaving a void in the left’s imagined future.

The most extreme recent changes in our lives have been produced by internet services and technology companies. As our personal investment in the ways of life enabled by technological progress deepens, so does an irreconcilable antipathy towards the systems that currently provide it.

The challenge is to imagine a future that is both liberated and able to make full use of science and technology.

Rethinking science from the left requires understanding its relationship to progress, its role in shaping the future, and how it might fit into a better world.

In recent years, forward-looking research institutes have proliferated like mushrooms: the Future of Humanity Institute (2005) and the Global Priorities Institute (2018) at Oxford University; the Centre for the Study of Existential Risk (2012) and the Leverhulme Centre for the Future of Intelligence (2016) at the University of Cambridge; the Lifeboat Foundation (2009); the Global Catastrophic Risk Institute (2011); the Global Challenges Foundation (2012); the Future of Life Institute (2014); and the Center on Long-Term Risk (2016). Many of them are located in powerful institutions and backed by enormous wealth.

What binds many of them is their association not only with concerns for humanity, but also with the movement known as “Effective Altruism” (EA).

Over the past decade, EA has grown dramatically and gained significant influence, particularly in Britain, but also in the United States, thanks to incredibly wealthy donors.

Unlike many movements with which it could easily be compared, EA is not traditionally religious, but instead claims to be grounded in rationalism.

It explicitly claims to be based on science, data and moral philosophy. The movement originated around 2010 and focused on maximizing the good an individual can do in their lifetime.

Early organizations spun off from the movement focused on charitable giving and career planning.

Giving What We Can encourages people to donate 10% of their income to the charities with the highest impact, measured in quality-adjusted life years (QALYs) gained per dollar.
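
To make the metric concrete, here is a minimal sketch of the kind of ranking it implies; the charity names and figures below are entirely hypothetical, invented only for illustration:

```python
# Minimal sketch: ranking charities by QALYs gained per dollar.
# Charity names and all figures are hypothetical, for illustration only.

charities = {
    "Charity A (bed nets)":   {"cost_usd": 1_000_000, "qalys_gained": 250_000},
    "Charity B (deworming)":  {"cost_usd":   500_000, "qalys_gained":  40_000},
    "Charity C (local arts)": {"cost_usd":   200_000, "qalys_gained":     500},
}

def qalys_per_dollar(record):
    # Cost-effectiveness = total QALYs gained / total dollars spent.
    return record["qalys_gained"] / record["cost_usd"]

# Print the charities from most to least cost-effective.
for name, record in sorted(charities.items(),
                           key=lambda kv: qalys_per_dollar(kv[1]),
                           reverse=True):
    print(f"{name}: {qalys_per_dollar(record):.4f} QALYs per dollar")
```

On this logic, whatever tops the ranking absorbs the donations, regardless of what falls to the bottom.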

80,000 Hours began as a careers guide to help adherents work out what work they should do to maximize their usefulness, with recommendations such as becoming a hedge-fund manager and donating the money rather than becoming a doctor.

The movement has been criticized for its strong focus on individual rather than collective action, and for viewing the status quo as a fixed set of conditions to be optimized within.

Although initially motivated by a deeply critical approach to existing philanthropy, EA won over extremely wealthy backers from across the tech industry.

The movement has become increasingly forward-looking, focusing its attention on what it calls the “long term”: the long-range risks to humanity. Compared with policy-making that looks no further than a four-year electoral cycle, the long term seems like an admirable alternative.

What is surprising is just how far into the future long-termists are interested: thousands, tens of thousands, even hundreds of thousands of years.

What is even more surprising is that, on these timescales, they regard problems like hunger and global warming as short-term hiccups.

These research institutes are intensely focused on the risks posed specifically by future artificial intelligence: a hypothetical future in which computers become “smarter” than humans and bring about the end of humanity itself.

If this surprises you, you are not alone. When the absolute disaster of anthropogenic climate change threatens to produce misery, famine, war and the liquidation of ecosystems on an unimaginable scale, why would a moral philosophy movement choose to focus on computers going rogue?

The answer may lie in the identity of the donors. Many of EA’s major backers are tech billionaires: Peter Thiel (PayPal), Jed McCaleb (the Mt. Gox bitcoin exchange), Elon Musk (Tesla, Twitter), Dustin Moskovitz (Facebook), Vitalik Buterin (the Ethereum cryptocurrency) and Jaan Tallinn (Skype).

All of these men live lives dominated by the wealth they amassed through algorithmic capitalism.

Now they reinvest that money in their own obsessive concerns. This thinking is underwritten by moral philosophers, who have constructed an argument that it is morally essential.

The first element of this reasoning is that more good can be done by focusing on problems that currently receive little attention.

The other part says that although many lives may be lost or impoverished by climate change, there is a vast number of potential future people whose lives and happiness can be preserved, provided the risk in question is not “existential”, that is, provided it does not kill every last human.
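
The shape of this arithmetic is easy to reproduce. Below is a back-of-the-envelope sketch; every figure in it is an assumption, chosen only to show how the expected-value calculation swamps any non-extinction-level harm:

```python
# Back-of-the-envelope longtermist expected-value arithmetic.
# Every figure here is an assumption, chosen only to illustrate the argument.

potential_future_people = 1e16   # assumed number of people who could ever live
risk_reduction = 1e-5            # assumed cut in extinction probability from x-risk work
lives_saved_near_term = 1e7      # assumed lives saved by near-term work, e.g. climate mitigation

# Expected future lives secured by reducing existential risk:
expected_future_lives = potential_future_people * risk_reduction

print(f"near-term intervention: {lives_saved_near_term:.0e} lives")
print(f"existential-risk work:  {expected_future_lives:.0e} expected lives")
print(f"ratio: {expected_future_lives / lives_saved_near_term:,.0f}x")
# With these inputs the existential-risk figure comes out 10,000x larger,
# which is how any catastrophe short of extinction becomes a "hiccup".
```

Once the number of potential future people is set large enough, almost any assumed reduction in extinction risk outweighs present-day suffering.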

This view is obviously obnoxious, but it is easy to understand given the material concerns of the people funding it.

Effective altruism is a value system based on philanthropy, which is a capitalist response to the misery induced by capitalism itself.

Charitable redistribution is a band-aid; it does not solve the problem. It is dispiriting to assume that the best we can do is mitigate the effects within the current arbitrary constraints.

The tech billionaires are right that we should care about their use of algorithms, and we should demand control over them ourselves. They are wrong to believe that thinking more about them will save us.

If you’d like to join us to discuss science, the left, and the future, we are running a series of three online discussions with leading thinkers on science and society, hosted by the Marx Memorial Library.

The first is tonight at 7pm, register here: www.marx-memorial-library.org.uk/event/397.