The following extracts are from a cover story just published in Financier Worldwide Magazine.
- According to UBS, the AI industry was a $5bn market by revenue in 2015. By 2025, the AI software market is forecast to reach $126bn.
- McKinsey Global Institute reckons AI techniques could create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries in the coming years. This accounts for about 40% of the overall $9.5 trillion to $15.4 trillion annual impact potentially enabled by all analytical techniques.
- Accenture claims that by 2035 AI will double growth rates for 12 developed countries and increase labour productivity by as much as a third.
Automation evolution
AI itself has evolved dramatically, particularly over the last 10 years.
“Machine learning, a subset of AI, has been an area of research for over half a century but has only achieved transformational success with recent increases in processing power and memory and the availability of very large training data sets, sometimes by-products of the internet age,” explains Matt Hervey, a partner and head of artificial intelligence at Gowling WLG. “This has vastly improved computer vision and language processing, in turn enabling unprecedented automation of previously human-only tasks. High-profile examples include self-driving cars and medical diagnosis, but vision and language perception enable automation of a vast range of low profile, menial tasks across all sectors.
“The effects of such automation are unclear to experts and the public alike, so current attitudes to AI may not last,” he adds. “What is abundantly clear is that governments, regulators, lawmakers and companies around the world are conscious of both the economic potential of AI and the risks to society, including fake news, mass unemployment, loss of privacy, and challenges to human autonomy and dignity.”
More recently, the coronavirus pandemic has had a significant impact on adoption of AI, as companies responded to the challenges of worker productivity during the crisis. “COVID-19 has accelerated AI and its applications by decades,” says Clare Lewis, a partner at McGuireWoods. “With the unprecedented move online, from tele-medicine, e-learning and remote working, the demand for AI and machine learning has never been greater.” Indeed, AI can help remote workers stay focused on their most important duties by eliminating tedious tasks.
Sector-specific trends have also emerged during the COVID-19 crisis, with healthcare one obvious beneficiary. “The beauty of AI is that it can benefit all sectors that rely on data,” says Giles Pratt, a partner at Freshfields Bruckhaus Deringer LLP. “But the healthcare sector, and particularly pharmaceutical and biotech, may have the most to gain, as ‘failure rates’ in drug research and development (R&D) remain high and costly.
“AI is increasingly being used in drug development, analysing and learning from large data sets to identify suitable compounds and to predict efficacy and side effects of new treatments,” he continues. “Reducing the time and cost involved in R&D can make a tangible difference in this space – and the importance of efficient drug development has really been put in the spotlight during the COVID-19 pandemic.”
In manufacturing, AI can monitor and analyse equipment and issue alerts when servicing is needed. This predictive maintenance allows businesses to move away from fixed-schedule servicing, reducing downtime and overall maintenance costs.
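To make the idea concrete, here is a minimal sketch of how such condition-based alerting might look: a simple rolling-baseline check on machine vibration readings. The sensor, window and threshold values are purely illustrative assumptions, not any particular vendor's approach.

```python
# A minimal sketch of condition-based maintenance alerting, assuming hourly
# vibration readings from a single machine; the window and threshold values
# are illustrative only.
import numpy as np

def maintenance_alerts(readings, window=24, z_threshold=4.0):
    """Flag readings that deviate sharply from the recent baseline."""
    readings = np.asarray(readings, dtype=float)
    alerts = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        if std > 0 and abs(readings[i] - mean) / std > z_threshold:
            alerts.append(i)  # trigger an inspection rather than waiting for a fixed service date
    return alerts

# Example: a slowly varying 'healthy' signal followed by a sudden spike.
healthy = 1.0 + 0.02 * np.sin(np.arange(100) / 5.0)
faulty = np.append(healthy, 2.5)
print(maintenance_alerts(faulty))  # the spike at index 100 is flagged
```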
Across a range of industries, automation is being targeted to replace specific tasks within a role, particularly repetitive tasks considered ‘low-value’. “AI can make certain jobs more efficient and interesting,” says Ms Lewis. “In the legal realm, for example, lawyers use AI tools for mundane document review and due diligence tasks which previously needed to be reviewed manually. When workers are freed up from menial tasks, they can focus on increased client service and innovation.”
Cyber strength
Cyber security is another key area of AI application – indeed, it is the leading area according to the Consumer Technology Association, with 44% of all AI applications being used to detect and deter security intrusions. AI can provide an ‘always on’ solution to help protect businesses from malicious attacks. It can monitor systems to identify and fix vulnerabilities, allowing the IT team to concentrate on key risks.
“The scale and complexity of large organisations’ IT environments means the task of monitoring systems for irregularities is becoming increasingly difficult,” points out Mr Pratt. “As part of a multi-layered cyber security strategy, we see AI playing a significant role in detecting and responding to threats by first learning what is ‘normal’ for a specific IT environment, and then identifying anomalies. That makes AI an important line of defence against attack, and in managing the legal and regulatory risks associated with cyber security incidents,” he says.
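As a rough illustration of the ‘learn what is normal, then flag anomalies’ approach Mr Pratt describes, the sketch below trains scikit-learn’s IsolationForest on baseline session features and scores new activity. The feature columns and figures are assumptions chosen for the example, not a real monitoring schema.

```python
# Hedged sketch of anomaly detection for security monitoring: learn a baseline
# of 'normal' activity, then flag sessions that look isolated from it.
# Feature choices and numbers are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic baseline: bytes sent per session (KB), failed logins, off-hours flag.
baseline = np.column_stack([
    rng.normal(500, 50, 1000),
    rng.poisson(0.1, 1000),
    rng.integers(0, 2, 1000),
])

detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# Two new sessions: one ordinary, one resembling bulk data exfiltration.
new_sessions = np.array([[510.0, 0, 0], [9000.0, 12, 1]])
print(detector.predict(new_sessions))  # 1 = looks normal, -1 = flagged as anomalous
```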
Workforce worry
There is still much that we do not know about the implications of AI, but one thing is increasingly clear: it will be profoundly disruptive. Some already view the proliferation of AI and its potential future applications in a negative light.
In terms of productivity and employment, for example, robots have been depicted as taking jobs from workers. Concerns about AI making human labour obsolete are understandable. According to a recent study from MIT and Boston University, robots could replace as many as 2 million workers in manufacturing alone by 2025.
“Whenever you have a leap in efficiency, there are large strides made in terms of economic growth,” says Ms Lewis. “However, the dark side to AI is that some workers, such as truck drivers for example, will need to re-tool their skills very quickly and be able to relocate to find new jobs. Software developers around the world are developing software specifically geared to replace well-paid managers who perform repeatable tasks. The AI technology being developed is very exciting, but the collateral damage will have long-term repercussions in terms of poverty and inequality. Innovative solutions are needed for those who fall through the cracks.”
COVID-19 has exacerbated this issue. In the US, for example, around 40 million jobs were shed at the height of the pandemic, and according to the University of Chicago, around 42% of those losses will be permanent. With many companies in survival mode for the foreseeable future, the pandemic has provided further incentive to increase automation levels. AI, after all, does not need to socially distance.
On the other hand, there are suggestions that AI will actually have a net positive impact on jobs. According to PwC, for example, AI is projected to create as many jobs as it displaces in the UK over the next 20 years – in absolute terms, around 7 million existing jobs could go, with around 7.2 million created.
In the short- to medium-term, AI is more likely to automate certain tasks within a role, rather than the entire role itself. There will be a focus on AI for complex calculations, routine processes and pattern recognition, for example, which can boost profitability and free up employees. Ultimately, AI can exist symbiotically with humans. The technology does not operate in a vacuum; it requires humans to function properly and deliver the desired efficiency and productivity gains.
Implementation hurdles
To date, AI has typically been deployed in the form of industrial and collaborative robotics, as well as machine vision and machine learning. But it is continually evolving. For example, industry leaders expect significant growth in predictive systems which use AI to manage intelligent supply chains. Manufacturers also predict increased use of robotic process automation (RPA) in their operations.
At present, the most significant barrier to deployment of AI solutions is that many organisations lack clarity on how to implement them. There is also a lack of employees with the necessary digital skills to implement AI, or even to define what skills are needed.
To overcome such issues, companies must adopt a holistic approach. This may entail a workforce transformation strategy which considers what AI-specific jobs need to be created and how to provide relevant AI training to employees at every level.
“Some companies are proposing their own ethical frameworks to protect workers,” explains Mr Hervey. “Rolls-Royce recently launched its Aletheia Framework for AI. This requires the company to consider the impact of AI on its workers, such as to deploy AI ‘shown to improve the well-being of employees, such as improved safety, working conditions, job satisfaction’, to analyse ‘potential job role changes or potential human resource impacts and the opportunities for retraining’, to explore ‘upskilling opportunities’ and so on.”
Currently, the topic is a source of debate and speculation, with competing arguments on all sides. “The impact on labour markets remains to be seen,” says Mr Hervey. “Some futurists predict mass unemployment, some predict that new forms of work will be invented, while others predict that AI will be used to ‘augment’ rather than replace human employees.”
Thinking regulation
As with any disruptive technology, the dawning of the age of AI has sparked calls for greater regulation, and the speed of AI development and uptake is making that need ever clearer.
The European Commission is currently developing a regulatory framework that could affect any company looking to do business in the EU. It hopes to promote a human-centric approach, in which AI primarily serves people and increases their wellbeing.
In anticipation of regulatory developments, it is prudent for companies to pre-emptively introduce a vetting process for AI products and services, to reduce disruption and drive productivity.
Rewards
AI stands to play an increasingly significant role in the day-to-day operations of many businesses, helping them to create value by generating profit, reducing costs and improving customer experience.
Increased integration of AI into workstreams seems inevitable, enabling companies to eliminate tedious tasks and focus employees on more productive activities, boosting speed, efficiency and accuracy.
Overall, AI-enabled technologies have the potential to dramatically increase economic output.
In a 1996 lecture entitled “Big Bills Left on the Sidewalk,” the late American economist Mancur Olson made a powerful observation: an individual from a poor country – say, Haiti – who migrates to a richer country like the United States immediately becomes vastly more productive and earns a far higher wage than before. The individual has not changed overnight, so their skills or cultural attitudes cannot explain their improved situation. The answer must instead lie in their new country’s environment.
Olson therefore concluded that many (or most) economies are not socially efficient. A better institutional and social context, and higher stocks of assets from past investments, can make an enormous difference to individuals’ productivity, and hence to their living standards.
The challenge, as Olson pointed out, is that individuals cannot change the overall context in which they live and work, except by moving elsewhere. The improvements needed to raise an entire economy’s productivity require coordinated, collective action. Olson’s own well-known research on the logic of collective action explored why this is so difficult to achieve.
Unfortunately, Olson’s “big bills” insight about the need for coordination rarely features in the current productivity debate. Instead, the discussion – why output per worker hour has been virtually flatlining in many OECD countries since the mid-2000s, or which targeted policies might help to revitalise left-behind towns or regions – has focused on numerous potential contributory factors, rather than the need for coordinated action.
For example, policymakers typically undertake cost-benefit appraisals of potential infrastructure investments on a project-by-project basis. But the returns to any project will be affected by other decisions, both private and public. If a new railway line opens, will local bus timetables change to coordinate people’s journeys? Will developers build houses nearby, and will other government agencies open schools in the area? Without coordinated decision-making, investing in new projects where more of the other pieces are already in place will generally look like the better value-for-money option. Unfortunately, government agencies appraising projects are rarely tasked with conducting a holistic survey of the policy landscape.
Regional or local low-skills traps present a similar problem. If there are no high-paying jobs in a particular area, then individuals have no incentive to invest in their own education. And if the local pool of available skilled labour is small, employers have no incentive to open offices or factories there. The only option for people who want to move up is to move out.
Although the obstacles to increased productivity are nearly universal, the solutions will be specific to each place and reflect its asset legacy, industrial history, location, and local politics. There is no science – yet – regarding what kinds of decisions need to be taken at different levels of government, or how to coordinate choices across departmental silos and budgets. (That is why these issues are central to the agenda of the United Kingdom’s recently established Productivity Institute.)
Nobody would be surprised that the factors contributing to low or stagnant productivity include lack of investment in physical and intangible assets, skills shortages, inadequate infrastructure, poor management, and a weak macroeconomic environment. More surprising is the lack of attention paid so far to finding a recipe that addresses these problems in tandem. Economists and policymakers must begin to rectify this without delay.