The following extracts are taken from a cover story recently published in Financier Worldwide Magazine.
- According to UBS, the AI industry generated around $5bn in revenue in 2015. By 2025, the AI software market is forecast to reach $126bn.
- McKinsey Global Institute reckons AI techniques could create between $3.5 trillion and $5.8 trillion in value annually across nine business functions in 19 industries in the coming years. This accounts for about 40% of the overall $9.5 trillion to $15.4 trillion annual impact potentially enabled by all analytical techniques.
- Accenture claims that by 2035 AI will double growth rates for 12 developed countries and increase labour productivity by as much as a third.
Automation evolution
AI itself has evolved dramatically, particularly over the last 10 years.
“Machine learning, a subset of AI, has been an area of research for over half a century but has only achieved transformational success with recent increases in processing power and memory and the availability of very large training data sets, sometimes by-products of the internet age,” explains Matt Hervey, a partner and head of artificial intelligence at Gowling WLG. “This has vastly improved computer vision and language processing, in turn enabling unprecedented automation of previously human-only tasks. High-profile examples include self-driving cars and medical diagnosis, but vision and language perception enable automation of a vast range of low-profile, menial tasks across all sectors.
“The effects of such automation are unclear to experts and the public alike, so current attitudes to AI may not last,” he adds. “What is abundantly clear is that governments, regulators, lawmakers and companies around the world are conscious of both the economic potential of AI and the risks to society, including fake news, mass unemployment, loss of privacy, and challenges to human autonomy and dignity.”
More recently, the coronavirus pandemic has had a significant impact on the adoption of AI, as companies responded to the challenges of worker productivity during the crisis. “COVID-19 has accelerated AI and its applications by decades,” says Clare Lewis, a partner at McGuireWoods. “With the unprecedented move online, from tele-medicine, e-learning and remote working, the demand for AI and machine learning has never been greater.” Indeed, AI can help remote workers stay focused on their most important duties by eliminating tedious tasks.
Sector-specific trends have also emerged during the COVID-19 crisis, with healthcare one obvious beneficiary. “The beauty of AI is that it can benefit all sectors that rely on data,” says Giles Pratt, a partner at Freshfields Bruckhaus Deringer LLP. “But the healthcare sector, and particularly pharmaceutical and biotech, may have the most to gain, as ‘failure rates’ in drug research and development (R&D) remain high and costly.
“AI is increasingly being used in drug development, analysing and learning from large data sets to identify suitable compounds and to predict efficacy and side effects of new treatments,” he continues. “Reducing the time and cost involved in R&D can make a tangible difference in this space – and the importance of efficient drug development has really been put in the spotlight during the COVID-19 pandemic.”
In manufacturing, AI can monitor and analyse equipment and issue alerts when a service is actually needed. This predictive maintenance allows businesses to move away from fixed service schedules, reducing downtime and overall maintenance costs.
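As a rough illustration of how such predictive maintenance might work in practice, the sketch below trains an anomaly detector on simulated "healthy" vibration and temperature readings and flags outliers for servicing. The sensor values, thresholds and model choice (scikit-learn's IsolationForest) are illustrative assumptions, not drawn from any specific vendor system.

```python
# Minimal predictive-maintenance sketch: flag equipment readings that deviate
# from normal operating behaviour, so a service can be scheduled on demand.
# Sensor figures below are illustrative assumptions only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated history of healthy operation: vibration (mm/s) and temperature (degC)
normal_readings = np.column_stack([
    rng.normal(2.0, 0.3, 1000),   # vibration
    rng.normal(65.0, 2.5, 1000),  # temperature
])

# Learn what "normal" looks like from the historical sensor data
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_readings)

# New readings arriving from the production line
new_readings = np.array([
    [2.1, 66.0],   # typical behaviour
    [4.8, 81.0],   # pattern consistent with bearing wear / overheating
])

for reading, label in zip(new_readings, model.predict(new_readings)):
    status = "ALERT: schedule service" if label == -1 else "OK"
    print(f"vibration={reading[0]:.1f} mm/s, temp={reading[1]:.1f} degC -> {status}")
```

In this kind of set-up, the alert replaces a fixed servicing calendar: equipment is only taken offline when its readings drift away from its learned baseline.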
Cyber strength
Cyber security is another key area of AI application – indeed, it is the leading area according to the Consumer Technology Association, with 44% of all AI applications being used to detect and deter security intrusions. AI can provide an ‘always on’ solution to help protect businesses from malicious attacks. It can monitor systems to identify and fix vulnerabilities, allowing the IT team to concentrate on key risks.
“The scale and complexity of large organisations’ IT environments means the task of monitoring systems for irregularities is becoming increasingly difficult,” points out Mr Pratt. “As part of a multi-layered cyber security strategy, we see AI playing a significant role in detecting and responding to threats by first learning what is ‘normal’ for a specific IT environment, and then identifying anomalies. That makes AI an important line of defence against attack, and in managing the legal and regulatory risks associated with cyber security incidents,” he says.
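By way of illustration only, the sketch below captures the "learn what is normal, then flag deviations" idea described above with a deliberately simple statistical baseline over hourly failed-login counts; the figures and the three-standard-deviation threshold are assumptions, and production monitoring systems would use far richer features and models.

```python
# Minimal sketch of baseline-then-anomaly monitoring for an IT environment.
# Baseline = mean and standard deviation of failed-login counts per hour.
# All figures are illustrative assumptions.
import statistics

# Historical failed-login counts per hour for this environment (baseline period)
baseline = [3, 5, 4, 6, 2, 5, 4, 3, 6, 5, 4, 3]
mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)

def is_anomalous(count: int, threshold: float = 3.0) -> bool:
    """Flag an hour whose failed-login count sits far outside the baseline."""
    return abs(count - mean) > threshold * stdev

# Live readings: the final value resembles a brute-force attempt
for hour, count in enumerate([4, 5, 47]):
    if is_anomalous(count):
        print(f"hour {hour}: {count} failed logins -> raise security alert")
    else:
        print(f"hour {hour}: {count} failed logins -> within normal range")
```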
Workforce worry
There is, however, much that we do not know about the implications of AI. One thing is increasingly clear: AI will be profoundly disruptive. Some already view the proliferation of AI and its potential future applications in a negative light.
In terms of productivity and employment, for example, robots have been depicted as taking jobs from workers. Concerns about AI making human labour obsolete are understandable. According to a recent study from MIT and Boston University, robots could replace as many as 2 million workers in manufacturing alone by 2025.
“Whenever you have a leap in efficiency, there are large strides made in terms of economic growth,” says Ms Lewis. “However, the dark side to AI is that some workers, such as truck drivers for example, will need to re-tool their skills very quickly and be able to relocate to find new jobs. Software developers around the world are developing software specifically geared to replace well-paid managers who perform repeatable tasks. The AI technology being developed is very exciting, but the collateral damage will have long-term repercussions in terms of poverty and inequality. Innovative solutions are needed for those who fall through the cracks.”
COVID-19 has exacerbated this issue. In the US, for example, around 40 million jobs were shed at the height of the pandemic, and according to the University of Chicago, around 42% of those losses will be permanent. With many companies in survival mode for the foreseeable future, the pandemic has provided further incentive to increase automation levels. AI, after all, does not need to socially distance.
On the other hand, there are suggestions that AI will actually have a net positive impact on jobs. According to PwC, for example, AI is projected to create as many jobs as it displaces in the UK over the next 20 years – in absolute terms, around 7 million existing jobs could go, with around 7.2 million created.
In the short- to medium-term, AI is more likely to automate certain tasks within a role, rather than the entire role itself. There will be a focus on AI for complex calculations, routine processes and pattern recognition, for example, which can boost profitability and free up employees. Ultimately, AI can exist symbiotically with humans. The technology does not operate in a vacuum; it requires humans to function properly and deliver the desired efficiency and productivity gains.
Implementation hurdles
To date, AI has typically been deployed in the form of industrial and collaborative robotics, as well as machine vision and machine learning. But it is continually evolving. For example, industry leaders expect significant growth in predictive systems which use AI to manage intelligent supply chains. Manufacturers also predict increased use of robotic process automation (RPA) in their operations.
At present, the most significant barrier to deployment of AI solutions is that many organisations lack clarity on how to implement them. There is also a lack of employees with the necessary digital skills to implement AI, or even to define what skills are needed.
To overcome such issues, companies must adopt a holistic approach. This may entail a workforce transformation strategy which considers what AI-specific jobs need to be created and how to provide relevant AI training to employees at every level.
“Some companies are proposing their own ethical frameworks to protect workers,” explains Mr Hervey. “Rolls-Royce recently launched its Aletheia Framework for AI. This requires the company to consider the impact of AI on its workers, such as to deploy AI ‘shown to improve the well-being of employees, such as improved safety, working conditions, job satisfaction’, to analyse ‘potential job role changes or potential human resource impacts and the opportunities for retraining’, to explore ‘upskilling opportunities’ and so on.”
Currently, the topic is a source of debate and speculation, with competing arguments on all sides. “The impact on labour markets remains to be seen,” says Mr Hervey. “Some futurists predict mass unemployment, some predict that new forms of work will be invented, while others predict that AI will be used to ‘augment’ rather than replace human employees.”
Thinking regulation
As with any disruptive technology, the dawning of the age of AI has sparked calls for greater regulation, and the speed of AI development and uptake is making that need increasingly clear.
The European Commission is currently developing a regulatory framework that could have an impact on any company looking to do business in the EU. It hopes to promote a human-centric approach, where AI primarily serves people and increases their wellbeing.
In anticipation of regulatory developments, it is prudent for companies to pre-emptively introduce a vetting process for AI products and services, to reduce disruption and drive productivity.
Rewards
AI stands to play an increasingly significant role in the day-to-day operations of many businesses, helping them to create value by generating profit, reducing costs and improving customer experience.
Increased integration of AI into workstreams seems inevitable, enabling companies to eliminate tedious tasks and focus employees on more productive activities, boosting speed, efficiency and accuracy.
Overall, AI-enabled technologies have the potential to dramatically increase economic output.