July 6, 2017
The “Fourth Industrial Revolution,” as it has been referred to by the World Economic Forum, is upon us – central to which is the development and adoption of automated technologies. Though these technologies are broadly designed to make human tasks easier, their advent poses numerous challenges for policymakers.
In spite of the enormous impact that automation will have on our way of life, it remains something of a nascent field in terms of policy and global public consciousness. The UK is no exception, although we are now starting to see more activity in this area.
To thrive, the UK must not only create an environment which acts as an ‘incubator’ in which automation technologies can develop, it must also be equipped to capture their value. However, as we examine the basics of automation, its impact and the policy framework being developed to support it, we are led to ask whether the Government is doing enough, fast enough, to prepare.
Broadly speaking, there are two elements of automation from a business perspective: industrial automation and information automation.
Industrial automation involves the use of control systems, computers or robots to handle different processes and machinery for an industry, ultimately replacing a human being. It is the automation that companies would implement in their factories; for example, Australian company Fastbrick Robotics has developed a robot, the Hadrian X, which can lay 1,000 standard bricks in one hour – a task that would take two human bricklayers the better part of a day or longer to complete. The idea is to use technology to perform tasks that are repetitive, dangerous or otherwise unsuitable for humans – this is industrial automation, the automation of labour.
Machine learning is a type of artificial intelligence (AI) in which a computer programme automatically improves and adapts as it is exposed to new data. Machine learning excels at solving complex, data-rich business problems where traditional approaches, such as human judgment and conventional software engineering, increasingly fail. This is the automation of information – it is intelligent automation. Today, it allows retailers to target customers with very specific advertising and it can crunch petabytes of patient data to improve healthcare. The scope of what is possible in the future could be staggering.
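For readers unfamiliar with the mechanics, the core idea – a programme whose behaviour improves as it is exposed to more data – can be illustrated with a toy sketch. The data and task below are entirely hypothetical and chosen only for simplicity: a straight line is fitted to past observations and then used to predict an unseen case.

```python
# Minimal sketch of "learning from data": fit a straight line to
# hypothetical past observations, then predict an unseen value.
# All numbers are illustrative, not drawn from any real dataset.

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var            # slope learned from the data
    b = mean_y - a * mean_x  # intercept learned from the data
    return a, b

# Hypothetical "training data": advertising spend (x) vs. sales (y).
spend = [1.0, 2.0, 3.0, 4.0]
sales = [2.1, 3.9, 6.0, 8.1]

a, b = fit_line(spend, sales)
prediction = a * 5.0 + b  # the model's estimate for an unseen spend of 5.0
```

The more observations such a model sees, the better its fitted parameters – and hence its predictions – become; commercial systems apply the same principle at vastly larger scale and with far richer models.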
When policymakers discuss the implications of industrial automation policy, one question recurs: are robots taking our jobs? In a recent report, the World Economic Forum predicted that robotic automation will result in the net loss of more than 5 million jobs across 15 developed nations by 2020 – a relatively conservative estimate, although it includes a quarter of Britain’s jobs. It is hard to contest that all industrialised countries have seen the number of manufacturing jobs decline for decades. Part of this is due to globalisation; part can be attributed to automation. Doomsayers argue that industrial automation will accelerate this trend. Perversely though, amidst all this technological progress, productivity is slowing rather than accelerating. Whilst this seems counter-intuitive, the explanation is simple: productivity-enhancing machines will not enhance productivity on their own – their value lies in being used effectively by workers. Proponents argue that robotics is sparking a labour transition in which low-skilled manufacturing labour is replaced by high-skilled labour – the staff who develop, maintain and implement these machines. The onus will be on government to provide an environment in which those in low-skilled jobs can retrain for highly skilled ones – a big ask, financially and politically.
Turning to artificial intelligence and machine learning, the technology industry is excited and scared in equal measure about the possibilities. Those in favour point to the fact that it is already changing our lives for the better. We are making quantum leaps in everyday technologies. Speech recognition works far better than it used to, and we increasingly interact with our computers simply by talking to them, whether through Amazon’s Alexa, Apple’s Siri, Microsoft’s Cortana, or the many voice-responsive features of Google. Then there are the advances in image recognition that extend far beyond ‘cool’ social apps. Medical start-ups claim they will soon be able to use computers to read X-rays, MRIs and CT scans more rapidly and accurately than radiologists, to diagnose cancer earlier and less invasively, and to accelerate the search for life-saving pharmaceuticals.
Many are worried. Philosopher Nick Bostrom describes the super-intelligence of the future as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” Bostrom also describes it as “quite possibly the most important and most daunting challenge humanity has ever faced.” We may at some stage reach a point where computers design computers, and the process of human discovery accelerates exponentially. This could help us in all manner of scientific discovery – it could drive forward space exploration, we could all have AI personal assistants, healthcare diagnostics could be drastically improved – you name it, it could have an impact. But it could also be used for autonomous weaponry; it could irreversibly erode our privacy and affect employment levels. This level of super-intelligence may be a few decades away, but an ethical and regulatory framework for the technology needs to be developed now.
What is widely recognised as the biggest difference between this industrial revolution and its predecessors is the rapid pace at which it is taking hold. And, never before have we experienced change of this scale on both the demand and supply sides of the equation. Exciting as this is, it also leaves governments very little time to get the policy settings right. Having previously received sharp criticism from the House of Commons Science and Technology Committee for a perceived lack of AI strategy, the May Government must at the very least be credited for recognising the importance of AI, and featuring it prominently in recent policy announcements.
The Government has placed a strong emphasis on research and development. Delivering on a commitment made in its Industrial Strategy, Chancellor Philip Hammond announced in the May Budget a £4.7 billion increase in R&D funding by 2020, targeted towards investments in AI, smart energy technology, robotics and 5th generation (5G) networks. AI cannot happen without data, and so these measures are critical. It is also important to note that this is the largest R&D investment since the late 1970s and includes social impact research, in which academics and think tanks are collaborating with the tech industry to determine, for example, how driverless cars may benefit low-mobility citizens. Encouragingly, the Government also intends to introduce an autonomous vehicles Bill in the new Parliament, the hallmark of which will be expanding compulsory vehicle insurance to driverless cars. Testing these technologies also tends to take place outside of London, which offers the Government a prime opportunity to build on its Tech North programme and engage the regions, taking steps towards creating an economy that “works for everyone”.
While such initiatives and funding commitments suggest that everything is on track, a number of measures require further attention. Automation’s reliance on data brings with it the need for appropriate data governance. Notably, the Royal Society and the British Academy have assessed that the current framework for governing automation and the management and use of data cannot keep pace with technological advances. They have also called for the establishment of a new independent body to oversee the use of data. The Industrial Strategy has not been without its detractors either. Former Business, Energy and Industrial Strategy Select Committee Chair Iain Wright was one of several MPs to criticise the agenda upon its release, judging it too incremental and a “business as usual” approach. An example of this may be found in the £17.3 million put towards funding AI research in UK universities. While academics will debate the sum, it is worth adding that this money will be delivered through Engineering and Physical Sciences Research Council grants rather than direct investment, which casts further doubt over the efficacy of the commitment.
Policymakers’ flat-footed response to the rise of the “gig economy” has many unions and other organised groups worried that automated, disruptive technologies will displace human capital. For example, General Secretary of Unite, Len McCluskey, has characterised automation as “imperilling the skills and jobs that are the backbone of today’s economy, which needs to be planned for now, not tomorrow.”
The Government is alert to this tension, and its recent policy announcements have broadly indicated how it will address it – skills. Non-legislative commitments were made to technical education in the Queen’s Speech, and the Spring Budget saw £300 million announced along with pilot programmes to study learning over the course of one’s career. While education funding remains intensely debated by both sides of politics, the need for greater science, technology, engineering and maths (STEM) skills is well recognised. Indeed, a lack of digital skills already costs the economy up to £63 billion a year, according to the House of Commons Science and Technology Committee. Digital Minister Matt Hancock also recently stated that coding has become “a blue collar job”. To this end, pledges were made in the Digital Strategy to create “prestigious institutes of technology” that will deliver high-quality technical education. Even a casual observer may remark, however, that this seems an unrealistic vision for a £170 million investment. By way of comparison, countries like Germany have long had dedicated higher technical training schools, which for years have helped put Germany at the top of global STEM bachelor degree enrolment numbers.
There are numerous related questions the Government has yet to answer, ranging from tackling the enormous ethical considerations, to preventing cyberattacks, to reforming taxation and intellectual property rights. It is quite possible that many of these will be addressed in the forthcoming Digital Charter, designed to provide a framework for digital conduct, as well as in the Government’s review into artificial intelligence, which is expected after the summer.
Yet, industry and the public are after something more – a broader vision for how Britain will harness this technology. Without it, insecurities about the impact of artificial intelligence will linger. The UK runs the risk of both lagging behind the rest of the world in the development and adoption of automated technology, and lacking the policy settings to make good use of it. Unfortunately, we are likely to see Brexit dominate the policy agenda for the foreseeable future and for AI policy to be pushed down the priority list.
Though the era of automation has arrived, it will be some time before we are able to assess whether the UK’s current policy direction is the right one, especially since what we have been presented with so far offers a promising but incomplete version of the final product.
Business therefore has a critical role to play. It must continue to work actively with the Government and the wider community to instil confidence in an automation-driven economy, and to press for a less cautious policy agenda to support it.