Put simply and generally, AI refers to the ability of a computer system to complete increasingly complex tasks or solve increasingly complex problems in a manner similar to intelligent human behaviour. Examples range from IBM’s Watson system, which won a game of Jeopardy! against two former champions in 2011, to emerging technologies fuelling the development of driverless cars.
AI is expected to have a profound impact on society, whereby intelligent systems will be able to make independent decisions that will have a direct effect on human lives. As a result, some countries are considering whether intelligent systems should be considered “electronic persons” at law, with all the rights and responsibilities that come with personhood. Among the questions related to AI with which the legal profession is starting to grapple: Should we create an independent regulatory body to govern AI systems? Are our existing industry-specific regulatory regimes good enough? Do we need new or more regulation to prevent harm and assign fault?
While we are at least a few steps away from mass AI integration in society, there is an immediate ethical, legal, economic and political discussion that must accompany AI innovation. Legal and ethical questions concerning AI systems are broad and deep, engaging issues related to liability for harm, appropriate use of data for training these systems and IP protections, among many others.
Governments around the world are mobilizing along these lines. The Japanese government announced in 2015 a “New Robot Strategy,” which has strengthened collaboration in this area between industry, the government and academia.
Late last year, the United Kingdom created a parliamentary group — the All Party Parliamentary Group on Artificial Intelligence — mandated to explore the impact and implications of artificial intelligence, including machine learning. Also late last year, under the Obama administration, the White House released the reports “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.” The reports consider the challenge for policymakers in updating, strengthening and adapting policies to respond to the economic effects of AI.
In February 2017, the European Parliament approved a report of its Legal Affairs Committee calling for the review of draft legislation to clarify liability issues, especially for driverless cars. It also called for consideration of creating a specific legal status for robots, in order to establish who is liable if they cause damage.
Most recently, the Canadian federal government announced substantial investments in a Pan-Canadian Artificial Intelligence Strategy. These investments seek to bolster Canada’s technical expertise and to attract and retain sophisticated talent.
Lawyers can play a valuable role in shaping and informing discussion about the regulatory regime needed to ensure responsible innovation.
Ajay Agrawal, Founder of the Creative Destruction Lab and Peter Munk Professor of Entrepreneurship at the University of Toronto’s Rotman School of Management, says Canada has a leadership advantage in three areas — research, supporting the AI startup ecosystem and policy development. The issue of policy development is notable for at least two reasons. First, one of the factors affecting mass adoption of AI creations, especially in highly regulated industries, will be the regulatory environment. According to Agrawal, jurisdictions with greater regulatory maturity will be better placed to attract all aspects of a particular industry. For instance, an advanced regulatory environment for driverless cars is more likely to attract other components of the industry (for example, innovations such as tolling or parking).
Second, policy leadership plays to our technical strength in AI. We are home to AI pioneers who continue to push the boundaries of AI evolution. We can lead by leveraging our technical strengths to inform serious and thoughtful policy debate about issues in AI that are likely to impact people in Canada and around the world.
Having recently spoken with several Canadian AI innovators and entrepreneurs, I have identified two schools of thought on the issue of regulating AI. The first is based on the premise that regulation is bad for innovation. Entrepreneurs who share this view don’t want the field of AI to be defined too soon, and certainly not by non-technical people. Among their concerns are the beliefs that bad policy creates bad technology, that regulation kills innovation and that regulation is premature because we don’t yet have a clear idea of what it is we would be regulating.
The other school of thought seeks to protect against potentially harmful creations that could poison the well for other AI entrepreneurs. Subscribers to this view believe that Canada should act now to promote existing standards and guidelines — or, where necessary, create new standards — to ensure a basic respect for the general principle of do no harm. Policy clarity should coalesce in particular around data collection and use for AI training.
Canada, home to sophisticated academic research, technical expertise and entrepreneurial talent, can and should lead in policy thought on AI. Our startups, established companies and universities all need to talk to each other and be involved in the pressing debate about the nature and scope of societal issues resulting from AI.
As lawyers, we need to invest in understanding the technology to be able to effectively contribute to these ethical and legal discussions with all key stakeholders. The law is often criticized for trailing technology by decades. Given the pace of AI innovation and its potential implications, we can’t afford to do that here.

This post first appeared as a Speaker’s Corner feature in the June 5, 2017 edition of The Law Times.