Against the Anxiety of Replacement

Artificial intelligence stirs up anxiety about replacement, evoking scenarios in which humans are supplanted by machines. In reality, its contribution to economic growth and our overall well-being will be stronger the more complementary the new technologies are to human labour. It is up to us to steer their evolution in this direction. This is a task that presents unprecedented challenges but is far from impossible.

Everyone talks about artificial intelligence (AI), but few truly understand what it entails. Since the first issue of eco, we have wanted to dedicate a section to those who genuinely understand the subject: computer scientists, electronic engineers, and theoretical physicists capable of explaining in simple terms the object of so much attention. They have clarified that generative artificial intelligence, which can produce texts and images and solve complex problems, operates very differently from the human brain, to the point that one wonders whether the name “artificial intelligence” is a clever (and human) marketing invention to promote super-computers that navigate the web at unprecedented speed, drawing on the vast amount of information available there. The image you see was produced by ChatGPT-4, which we asked to represent itself, or more precisely, to provide an image of generative artificial intelligence. As you can see, it is not the image of a human brain. Yet you can glimpse a central core, hardware at the subcortical level, and several beams of light resembling axons, along which the electrical impulse would run towards a sort of cerebral cortex. The confusion, then, remains. Perhaps because, to understand the differences between human and simulated intelligence, we first need to better understand the mechanisms behind our own reasoning. As Tomaso Poggio reminded us in the June issue of eco, progress in recent years has come more from electronic engineering than from neuroscience.

What Impact Will It Have on All of Us?

When we talk about artificial intelligence, we are mainly interested in knowing its impact on our lives, social interactions, economy, labour, health, information, and the functioning of our democracies. These are questions that trouble many, generating “replacement” anxiety – scenarios in which humans are supplanted by machines. After all, the history of humanity is paved with technological pessimism. Many catastrophic predictions about the consequences of new technologies have proven unfounded when put to the test, and the end of work at the hands of machines has been declared hundreds of times. Yet millions of jobs continue to be created in economies worldwide, and the employment rate (the ratio of employed individuals to the working-age population) rose almost everywhere throughout the 20th century and into the early 21st. Even unemployment is at historic lows in many countries. However, the new frontiers of technological progress are redefining our way of working much more than in the past. Machines can now replace humans not only in repetitive routine tasks but also in intellectual tasks and professions. Tasks that were once exclusively human, such as writing, translating, designing, and producing videos, can now be performed by machines instead of people. There is a fear that instead of us guiding these developments and using them to enhance the quality of our work, algorithms will take over and make decisions for us in ways disadvantageous to humanity. There is a fear of creating super-intelligent entities with values misaligned with those of humans, like HAL 9000, the on-board computer in 2001: A Space Odyssey.

The Duty to Govern Change

To assess the validity of these widespread concerns, we cannot rely solely on computer scientists, electronic engineers or theoretical physicists, because evaluating the future impact of artificial intelligence requires expertise beyond these disciplinary horizons. We have therefore consulted social scientists who have long dedicated themselves to these topics, working closely with those at the forefront of AI research. As you will see, the texts they have written for us do not offer definitive theses but rather alternative scenarios. There are many “ifs” and “depends.” We know it would be much more reassuring not to have these hypothetical clauses, and we understand the disappointment of readers who will not find a definitive vision of the future of AI in these pages. But no one can provide certainties about the future development of technologies that are inherently dual in nature, meaning they can have a wide range of applications in both civilian and military fields. And yet the alternative scenarios, the “ifs,” the “depends,” in their own way give us an answer. They tell us that the future impact of artificial intelligence depends on us – on how we manage technological progress, direct it towards shared goals, minimize potential undesirable effects, and promptly address them. In other words, technological progress is not something we passively witness: we design it and direct it. Perhaps never before have governments had such an important role in directing research, reducing the concentration of economic power that comes from exclusive access to immense databases, curbing the abuse of dominant positions, and sanctioning perverse uses of artificial intelligence. The problem is how to do it. Should we anticipate the issue, as Europe seems to have wanted to do with the Artificial Intelligence Act we discussed in the second issue of this magazine, introducing restrictive rules even at the cost of stifling AI development? Or, aware of the speed of ongoing changes and the fact that inventions are by definition unpredictable, should we prepare to intervene after the fact to sanction any deviant behaviour? If we choose the latter path, we need to be equipped to intervene quickly, trying to make AI adoption a reversible process if something goes wrong. Then there is a second, equally intricate problem of jurisdiction. Who can regulate AI or sanction its undesirable applications? Those with the authority to intervene generally operate on a national or regional (Europe, United States) scale, but here we are dealing with actors operating on a global scale. Think, for example, of China’s export of facial recognition technologies to totalitarian regimes, which we document in this issue. What can be done to prevent AI from being used to stifle any democratic aspiration in dictatorial regimes? This is not just a theoretical risk: autocratic regimes have increased imports of these Chinese technologies during periods of intense internal opposition. Perhaps only global trade restrictions implemented multilaterally can prevent – or at least significantly reduce – such transactions.

AI’s Contribution to Economic Growth

Financial markets worldwide believe that artificial intelligence will be a powerful antidote to the growth slowdown induced by the demographic winter in advanced countries. If the working-age population decreases, the only way to maintain sustained growth rates is to increase labour productivity, reversing the current deceleration. The most optimistic scenarios imply additional economic growth due to artificial intelligence of about 1.5% per year. It may seem small, but it is actually huge. A country that grows at an average of 1% a year without AI would, with the additional boost provided by the new technologies, double its national income in less than 30 years instead of 72. One of the most important conditions for such a positive scenario to materialize is that AI be complementary to, rather than a substitute for, human labour. Otherwise, the labour displaced by the new technologies would become unproductive or shift towards lower-productivity segments. In other words, the goals of increasing growth and protecting labour are far from antithetical in managing artificial intelligence. Policies for large-scale AI training and measures to encourage uses of the new technologies that complement labour are two sides of the same coin. But making artificial intelligence complementary to human labour is not always easy. There are tasks where the new technologies clearly replace human labour. Think of movie dubbing: how can one compete with machines capable of replicating actors’ original voices, translating them promptly into all the world’s languages, and synchronizing them with lip movements? In other cases, workers themselves resist collaboration with artificial intelligence. The example of doctors is telling in this regard. AI adoption could greatly help in preventing terminal or chronic diseases, increasing the chances not only of a longer life but of a healthy old age, and reducing the burdens on our healthcare systems. Yet, as we show, many doctors refuse to use AI as a diagnostic support tool.
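For readers who want to check the doubling-time arithmetic behind those figures, here is a minimal sketch in Python (ours, not part of the original analysis). The exact formula gives roughly 70 years at 1% growth, which the rule of 72 rounds to the 72 cited above, and about 28 years once the 1.5-point AI boost is added.

```python
import math

def doubling_time(growth_rate: float) -> float:
    """Years for national income to double at a constant annual growth rate."""
    return math.log(2) / math.log(1 + growth_rate)

# Baseline of 1% a year vs. 1% plus the 1.5-point boost from the optimistic AI scenario.
print(f"At 1.0% growth: {doubling_time(0.010):.1f} years")  # ~69.7 (the rule of 72 gives 72)
print(f"At 2.5% growth: {doubling_time(0.025):.1f} years")  # ~28.1, i.e. less than 30
```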

Algorithms and Large Platforms

We do not want to argue that we should always and unconditionally trust algorithms. We are aware that delegating choices to them can lead to socially harmful outcomes. There is also a problem of accountability: if an algorithm finds it optimal – relative to the objective it is maximizing – to discriminate or collude, who will be responsible for its choices? The algorithms that the major platforms we all know – Netflix, Spotify, Booking, Amazon, eBay, Airbnb, Uber, Vinted, and all the others – use to personalize their offerings perform a useful function in directing our web searches towards what we desire. They allow us to quickly find what we are looking for, facilitating interactions between buyers and sellers, homeowners and renters, drivers and passengers. They also offer income opportunities to those who sell services occasionally or want to get rid of items they no longer need but that can still serve others. But these algorithms can also influence our choices, steering them towards goods and services whose prices rise significantly precisely because they are so widely recommended. We document how platforms’ exclusive access to the information each of us provides when booking a flight, choosing a restaurant, watching a movie, listening to music, informing ourselves, communicating with others, buying goods, and choosing among different payment methods can, beyond a certain point, make the costs of personalized offerings outweigh the benefits. We highlight the many cases in which platforms have induced users to provide information they did not want to share and that was not strictly necessary to deliver the service. As you can see, the concentration of information reduces competition, stifles innovative startups, and poses significant privacy protection issues.

The Need to Keep Investing in Human Intelligence

In the ongoing technological revolution, many businesses are at a disadvantage and risk falling further behind. Using artificial intelligence in production processes requires IT knowledge and organizational skills that small businesses, which dominate our country’s industrial structure, often lack. Instead of outdated “Made in Italy” policies that promote goods often produced everywhere except Italy, we should think about policies that foster knowledge transfer between businesses and widespread learning of the opportunities offered by artificial intelligence. Unlike machine learning, human learning is extraordinarily energy-efficient. As Marc Mézard reminds us, a three-year-old child can distinguish a cat from a dog, while generative intelligence needs to analyze hundreds of thousands of animal images to achieve the same. Deep learning requires high energy consumption by data centres, with the associated carbon emissions. True, AI processes information much faster than the human brain (around 10 milliseconds rather than 100 for simple tasks), but the potential of that mysterious tangle of neurons that is our brain remains largely unknown. The most serious mistake we could make right now is to think that artificial intelligence reduces the need to invest in human intelligence and neuroscience.

direttore@rivistaeco.com

Can We Control Artificial Intelligence?
5/2024

Managing Its Impact on Social Interactions, the Economy, Employment, and Our Democracies

There is widespread concern about how artificial intelligence might affect our lives—changing social interactions, the economy, jobs, health, information, and the functioning of our democracies. Leading scientists have called for a six-month pause on its development to better understand the path we are on. But technological progress isn’t something we should just passively witness. We have the power to shape and guide it. In this issue, we present ideas on how and where to take action to ensure artificial intelligence complements human work and improves well-being, this time without harming those with lower incomes.


