Pictet Group
Is AI running out of steam?
The AI field is booming, as decades of research finally come to fruition in the form of novel tools and models that are approaching, if not yet reaching, human-like intelligence.
From the string of achievements at DeepMind, such as game-playing systems and the predictive modelling of protein folding that had eluded scientists for decades, through to the human-like language produced by GPT-3, the latest product of the ‘deep learning’ revolution loosely inspired by the human brain, AI seems to be reaching its zenith.[1]
But is this really a golden age? The field has, after all, been over-hyped in the past. In the 1960s, Herbert Simon predicted that computers would be able to rival humans within 20 years. Marvin Minsky, a founding father of the field, predicted that the problem of creating artificial intelligence would be solved within a generation. Neither prediction has yet come true.
Economist Impact has analysed the history and pace of innovation in key technological sectors. Using data science tools, we developed a unique approach to measuring innovation activity, based on an analysis of the language used in scientific papers and patents.[2]
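To illustrate the general idea, here is a minimal sketch of one way such a language-based measure could work. It is not Economist Impact's actual model: the toy corpus, the naive tokenisation and the use of normalised Shannon entropy are all illustrative assumptions. The intuition is simply that a literature converging on fewer themes reuses a narrower vocabulary, so the entropy of its word distribution falls.

```python
from collections import Counter
import math

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of the token distribution."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def linguistic_diversity(abstracts):
    """Entropy of the pooled vocabulary, normalised by its maximum
    (log2 of vocabulary size) so scores are comparable across years."""
    tokens = [w for text in abstracts for w in text.lower().split()]
    vocab_size = len(set(tokens))
    if vocab_size < 2:
        return 0.0
    return shannon_entropy(tokens) / math.log2(vocab_size)

# Toy corpora: a varied year versus one dominated by a single theme.
papers_1985 = ["expert systems for symbolic planning",
               "neural networks learn internal representations"]
papers_2016 = ["deep learning for image recognition",
               "deep learning for speech recognition"]
print(linguistic_diversity(papers_1985))  # close to 1.0: varied vocabulary
print(linguistic_diversity(papers_2016))  # lower: vocabulary is concentrated
```

A real pipeline would add stop-word removal, stemming and year-by-year comparison across thousands of documents, but the measurement principle is the same.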
Winter turns to summer
Our analysis of AI indicates a surge in innovation in the early 1980s, falling significantly in the latter part of the decade. This tracks the second so-called “AI Winter”; the term refers to periods when the field struggled to deliver results, leading to slowdowns in funding and research. Yet these were not entirely barren times. Our model indicates that key concepts, like neural networks and deep learning, were attracting more attention even in fallow times and laid the foundation for the breakthroughs we are witnessing today. (There can, after all, be very long lags between the conception of AI ideas and their realisation in tools and technologies; the first ‘artificial neuron’ was developed in 1943.[3])
Narrowing down
Our research finds an overall narrowing of linguistic diversity in academic literature from around 2006, with the most recent years measured showing the lowest levels of the entire period. More positively, linguistic diversity in patents increased over the same span.
This aligns with other evidence suggesting the field is becoming focused on a subset of themes, potentially trading breadth, risk-taking and open-ended innovation for greater depth of research into commercial applications.
Juan Mateos-Garcia, director of data analytics at Nesta, worked on one study exploring thematic diversity in AI research. The study, based on a sample of 110,000 AI papers, used topic modelling to quantify the thematic composition of AI research and estimate research diversity. It found that thematic diversity has declined over the last decade, just as AI seems to be taking flight in the real world. “You would have expected the increasing scale of the AI market to favour diversity but our prior hypothesis [before conducting our research], that diversity was in fact stagnating, held—despite the expansion of the field”, says Mr Mateos-Garcia.
One reason, he posits, is that the private sector is the dominant actor in funding and conducting AI research, and firms are focusing on a smaller range of business-oriented applications. “Industry participants are more short-sighted in their focus on narrow technologies but in innovation you cannot be sure if something is going to be successful.”
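For readers curious how such a thematic-diversity measurement works in practice, below is a minimal sketch of the general technique, not the Nesta study's actual pipeline: the toy abstracts, the topic count and the entropy-based diversity score are all illustrative assumptions. It uses scikit-learn's latent Dirichlet allocation, a standard topic-modelling method.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy stand-in for a corpus of AI paper abstracts.
abstracts = [
    "convolutional networks for image classification",
    "deep reinforcement learning for game playing",
    "symbolic reasoning and knowledge representation",
    "transformer language models for text generation",
]

# Fit a small topic model over the term counts.
counts = CountVectorizer().fit_transform(abstracts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows: per-document topic mixtures

# Thematic diversity: Shannon entropy of the corpus-level topic shares.
# A field concentrated on a few topics scores low; an even spread scores high.
shares = doc_topics.mean(axis=0)
shares = shares / shares.sum()
diversity = -np.sum(shares * np.log2(shares))
print(f"thematic diversity: {diversity:.2f} bits (max {np.log2(len(shares)):.2f})")
```

Tracking such a score year by year, over a large corpus, is what allows researchers to say whether a field's thematic spread is widening or narrowing.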
Wither academia?
Mr Mateos-Garcia argues that the university sector’s focus is also narrowing, in part because of its growing collaboration with industry. This is concerning because academic institutions were the hothouse of AI breakthroughs in the past, from the landmark 1956 Dartmouth College summer school (credited with birthing the AI era) to the work programmes of the Stanford AI Project and the MIT AI Lab. Critical innovations driving AI today, such as neural networks and deep learning, have their roots in academia.
While commercialisation is clearly important to ensure continued investment in AI for solving real-world problems, there are trade-offs to the current trend towards a narrower focus.
Mr Mateos-Garcia’s concern is that AI innovation is not just narrowing, but that the field as a whole is failing to innovate in the areas that matter most. “We have seen amazing advances in advertising and social media and to an extent in scientific discovery, with AlphaFold being the most recent, but in many of the biggest social challenges—education, health and environment, and even Covid-19—did AI really help us? There’s very little evidence of impact.”
Other leading AI researchers have also expressed concern that the field is on a plateau, with exceptional capabilities in narrow tasks but little sign of approximating the flexibility and adaptability that typifies the human mind.[4]
Eastern winds
A final trend observable in our model is a decisive shift in the locales of AI innovation, from North America and Europe in the 1960s and 1970s to East Asia, especially Japan and China. China’s share of global AI research leapt from 4% in 1997 to 28% in 2017, the largest of any country.[5]
The trend in part reinforces the broader shift towards commercial and practical applications rather than foundational research. As of March 2019, the number of Chinese AI firms had reached 1,189, second only to the more than 2,000 firms in the US, and their focus is primarily on speech and image recognition, which are finding use cases in areas including retail, fintech, transport and entertainment. Leading Chinese companies find themselves at an advantage over Western firms because the power of AI rests, in large part, on the huge troves of data that allow for continued refinement and recalibration, which China’s 1.4 billion-strong population can provide. But this work largely falls into the category of so-called “weak AI”, focused on solving narrowly defined problems.[6]
AI, like the sciences of which it is a part, is in constant tension between basic innovation in fundamental concepts and models—or “paradigm”—and what science historian Thomas Kuhn called “normal science”, in which researchers tinker within a commonly agreed framework. The commercialisation of AI for solving real-world problems is in part a sign of success, given the field’s past winters. But as our model shows, winters can sometimes be a productive period of reflection and experimentation too.
The future will likely throw up surprises, as some predictions (both utopian and dystopian) fail to materialise and other unexpected applications and uses emerge. Already, the view that AI and automation will lead to mass unemployment is being challenged by a more nuanced picture, based on real-world deployment, in which AI augments human capabilities: taking on rote work so that people can develop skills and mastery of complex issues, while correcting errors and biases that could impair work performance.[7] The importance of human oversight will create new categories of expertise and skills around ethical and responsible AI that may, in time, lead to novel research agendas shaping the next generation of AI innovations.
Insights for investors
- According to the 2021 McKinsey AI survey, 27 per cent of businesses have adopted AI to optimise service operations and 22 per cent to enhance products; at the other end of the scale, 16 per cent use it to model risk and 14 per cent for fraud and debt analytics.
- AI is being used to crack quantum physics problems. In one recent example, it helped compress a problem that previously required 100,000 equations into just four, without sacrificing accuracy.
- Some 97 million new jobs involving AI will be created globally between 2022 and 2025, helping to transform nearly every industry, though employers will struggle to fill the positions, according to Forbes.
About Economist Impact
Economist Impact combines the rigour of a think-tank with the creativity of a media brand to engage a globally influential audience. With the power of The Economist Group behind it, Economist Impact partners with corporations, foundations, NGOs and governments across big themes including sustainability, health and the changing shape of globalisation to catalyse change and enable progress.
[2] Our model also indicates vibrancy from 2010 onwards (see Figure 2, below), including foundational concepts powering contemporary applications, like generative adversarial networks (GANs) and zero-shot learning. Both are key to the image and data classification tasks at which AI excels. A major step forward came in 2012 with AlexNet, a convolutional neural network architecture that outperformed rival models in image recognition; the paper describing it is considered one of the most influential in computer vision. Influential AI scientist Yann LeCun has called GANs the “most interesting idea [in the field of machine learning] in a decade”.
[3] https://towardsdatascience.com/mcculloch-pitts-model-5fdf65ac5dd1
[4] https://www.bbc.co.uk/news/technology-51064369
[5] https://hbr.org/2021/02/is-china-emerging-as-the-global-leader-in-ai
[6] https://hbr.org/2021/02/is-china-emerging-as-the-global-leader-in-ai
[7] https://www.raconteur.net/technology/artificial-intelligence/ai-human-augmentation/