On March 8-9, 2018, a bespoke group of approximately 200 leading entrepreneurs, investors and advisors focused on deploying and commercializing cutting-edge technologies gathered from across the globe in Monte Carlo for the 11th annual CleanEquity® Monaco Conference. Complementing the plenary sessions and emerging company presentations, the conference introduced a new feature — Covington CleanEquity Conversations — intended to capture and memorialise the unique thought leadership opportunity presented by the gathering in Monaco. On the first day, conference participants separated into three breakout groups for Chatham House Rule discussions, curated by partners from the international law firm Covington & Burling LLP, of three critical issues confronting cleantech deployment and commercialisation:
- AI and IoT – Benefits, Risks, and the Role of Regulation
- Sustainability – What goals should businesses prioritise and what are the right metrics?
- Will market driven innovation alone save us from climate change?
On the second day, the Covington team reported key takeaways from the three breakout group discussions during the conference's final plenary session. Covington and the CleanEquity organizer, specialist investment bank Innovator Capital, are pleased to share brief summaries of the thought leadership developed by conference participants on each of the three topics.
________________________________________________________
AI and IoT – Benefits, Risks, and the Role of Regulation
- Rapid evolution and proliferation of artificial intelligence and the Internet of Things holds tremendous promise for dramatic, transformational efficiency gains in nearly every industry.
- At the same time, these technologies present risks of massive employment disruption, loss of privacy and the yielding of human free will to decisions made by algorithms and machines.
In a session led by Covington’s corporate partner Simon Amies, conference participants examined these propositions and then considered two questions: Where should regulation step in? Can regulation manage the risks without diminishing the benefits?
The Benefits
There was universal agreement that the evolution and proliferation of AI and the Internet of Things have the potential to bring transformational efficiency gains across virtually all sectors of industry. AI has already transformed business models in the technology sector through the deployment of sophisticated algorithms to process vast quantities of data, and machine learning and automation are already being utilized on a large scale in other areas of industry, revolutionizing processes and delivering significant efficiency gains.
A number of the presenting companies noted that artificial intelligence already plays a pivotal role in their businesses, with some utilizing the technology at the heart of their business model — one company uses its machine learning system to manage and optimize grid operations — and others using AI as a tool to enhance research and refine product development. One participant flagged the fundamental change to supply chain dynamics and manufacturing processes with the emergence of the smart factory in the Industry 4.0 model, leading to increased efficiency, reduced costs and maximization of resources. Mass-customisation of lower-cost goods manufactured to order in close proximity to the market brings reduced shipping costs and lead times.
The Risks
But as frequently happens with the adoption of disruptive technology, new and often unforeseen risks and challenges emerge.
One participant noted concerns surrounding access to, control of and ownership of personal data in the field of healthcare, given the focus on developing personalized and precision medicines. Another flagged how personal data could be used by employers to make hiring decisions, or by insurers to price auto or life policies, without explicit consent from the individuals concerned.
One participant pointed to the safety and security concerns of having automated intelligent systems replace humans at the controls of cars and other machines and equipment. In the case of the autonomous vehicle, who is responsible in the event of an accident where the system makes a deliberate decision that turns out to be the wrong one, causing a fatality? Another participant identified the risk of malicious attackers disrupting or seizing control of systems run by AI and IoT, whether on an industrial scale or at a micro level, seeking to take advantage of a single individual.
The threat of AI and IoT to jobs was also highlighted. Many jobs that have kept the workforce occupied for generations could become redundant almost overnight as businesses adopt technologies that bring gains in efficiency and productivity while reducing labour costs. The labour market is predicted to undergo change on a scale not seen since the Industrial Revolution, with consequent effects on wealth inequality and potentially on global stability. While governments and policy makers are likely to take steps to protect jobs, there will be increasing demand for skilled technicians capable of supporting digital capabilities.
The Role of Regulation
The discussion then focused on the two key questions: (a) where should regulation step in, and (b) can regulation manage the risks without diminishing the benefits?
The first observation was that, against the backdrop of recent high-profile data breaches and the imminent deadline for implementation of the EU’s General Data Protection Regulation, regulation is appropriate and has an important role in managing the risks presented by AI and IoT. Data privacy legislation has continually evolved since the emergence of the internet, adapting and reacting to the challenges associated with mass collection, use and storage of personal data to ensure privacy, security and transparency. Privacy laws already apply to AI systems that process personal data, which means new systems need to be designed to adhere to these standards where applicable.
One participant commented that managing these risks should not be left to law-makers alone. There is also a role for participants in the market, particularly large corporations, to ensure responsible and fair practices are followed through the adoption of codes of best practice reflecting key ethical principles. It was noted that Microsoft had established six ethical principles to guide the development and use of artificial intelligence: AI systems should be fair, reliable and safe, private and secure, inclusive, transparent, and accountable.[1]
In discussing the risks of autonomous vehicles, one participant noted that current product liability laws would apply, meaning that claims may exist where loss is caused by a vehicle that is found to be defective or unsafe. It is likely that these laws will evolve to clarify where responsibility lies, and manufacturers and insurers will look to law-makers to set standards for how the autonomous systems that control driverless vehicles should operate in specific situations, rather than making these decisions themselves.
It was noted that the adoption of standards and regulations for AI and IoT would need to be consistent and coordinated at a global level. International policy-makers such as the Organisation for Economic Co-operation and Development will need to develop standards that are accepted universally. With an increasingly fierce arms race developing among developed nations to become the economic leader in AI and IoT, this will be challenging.
The final point tackled by the group was the need for employment laws to evolve to recognize the changes in employment practices that are likely to flow from the move to automated systems. Current employment laws are based around the model of employers employing workers at specific worksites, whereas people are increasingly engaged through remote, part-time or project-based work. As jobs are displaced through the adoption of AI and IoT, new skilled roles will be created to develop, monitor and manage the new systems. Governments will have an important role in ensuring that education curricula adapt so that students acquire the skills required to support digital capabilities.
[1] Microsoft, The Future Computed: Artificial Intelligence and Its Role in Society (2018), https://blogs.microsoft.com/uploads/2018/02/The-Future-Computed_2.8.18.pdf