Governments can have a pretty dismal track record when it comes to predicting the next big thing. Tax dollars spent on visionary projects are often, it seems, tax dollars thrown away. But, this past spring, Ottawa might have made its best bet yet with the $125 million it has set aside over the next five years for a Pan-Canadian Artificial Intelligence Strategy.
That money will go to three academic centres: the Montreal Institute for Learning Algorithms (MILA), the Alberta Machine Intelligence Institute (AMII) in Edmonton, and the new Vector Institute for Artificial Intelligence, based in Toronto. In return, the three organizations are to hire more scientists, do more research, train more students and – the important bit – nourish a growing ecosystem that will provide Canadian jobs, products and services based on artificial intelligence, or AI.
By itself, that money is not a huge sum. But consider that the Quebec government has allocated $100 million to its AI community in Montreal; Ontario has set aside $50 million for Vector; and, in September 2016, the Canada First Research Excellence Fund gave $93.6 million to a trio of universities – Université de Montréal, Polytechnique Montréal and HEC Montréal – for cutting-edge research in an area of AI called deep learning.
That’s not all. A host of companies – from local start-ups to established tech giants like Google and Microsoft – are pouring millions of dollars into Canada for AI research. Just this past September, Facebook announced it was setting up an AI research lab in Montreal to be led by McGill University computer science professor Joëlle Pineau. The giants are setting up branch offices, hiring Canadian-trained experts, plowing time and cash into applications of what is widely seen as an advance in science and engineering that will be as transformative as the internet. And the start-ups are aiming to be the next tech giants, but this time based in Canada.
Altogether, it’s been quite a year. Eric Schmidt, chairman of Google’s parent company Alphabet, recently tweeted that Canada is smart to “quadruple down” on AI, referring to the push from governments, universities, large companies and start-ups.
Why Canada, you ask, and why now? To answer those questions, one needs a definition and a bit of history. First, the definition: despite everything you might have read or seen, the robots are not coming. Do not expect the civilized and well-mannered “positronic robots” from the fiction of Isaac Asimov, the aggressive and violent Cybermen of Doctor Who, or the prototypical Tik-Tok of L. Frank Baum’s Oz series. AI is “very different from what people see in science fiction, that’s for sure,” says Yoshua Bengio, director of MILA and a professor of computer science at U de Montréal. “It’s not so much robots but computers becoming smarter, thanks to progress and research in what’s called machine learning and especially deep learning.”
If you want an example of AI, Dr. Bengio says, take out your cellphone. The speech recognition system – iPhone’s Siri, for instance – depends on machine learning systems developed in Montreal that allow the computer program to turn the sound of your voice into words. “We take that for granted because we can all do it,” Dr. Bengio says, but it’s not easy to build a computer program that can mimic that basic human ability.
The phone systems, of course, don’t really understand meaning, but once sounds are converted to words, the underlying program can undertake actions based on them – things like finding yesterday’s baseball scores or searching the web for a cookie recipe. It seems as if the phone has understood but it’s an illusion. The goal for AI, Dr. Bengio says, is to have a range of programs that mimic a human level of understanding in various spheres. “We’re not there yet,” he says, although in some areas the research is getting close. After some made-in-Canada advances, translation software, for instance, is now about half as good as a human translator and getting better. And software to scan medical images can now approach human accuracy in determining whether the image shows cancer or not.
One next step, Dr. Bengio says, will be more flexible and powerful voice-controlled programs that interact with their users in ways that approach human-level understanding, programs that could act as everything from personal secretaries to companions for shut-ins. Banks, for instance, are now experimenting with “robo-advisers” – AI programs that offer, among other things, financial advice. Other on-the-horizon examples abound, from driverless cars to computerized legal analysis to new genomic therapies.
Now for the bit of history: AI research in this country began, for all intents and purposes, a little more than three decades ago and it was, for a long time, marginal. The Canadian Institute for Advanced Research, or CIFAR, set up a research group in 1983, dubbed Artificial Intelligence, Robotics and Society, built around Geoffrey Hinton, a computer scientist now regarded as one of the rock stars of AI. (Now a professor emeritus at the University of Toronto, Dr. Hinton works part-time at Google’s Toronto office and is chief scientific adviser for the Vector Institute.) But while these pioneering researchers made some progress in areas such as machine vision, the field as a whole was in an “AI winter,” according to CIFAR president and CEO Alan Bernstein, and the researchers were viewed as being on the fringe of the computer science world.
Despite that, people such as Drs. Hinton and Bengio, and computer science professor Rich Sutton of the University of Alberta, painstakingly laid the theoretical groundwork for today’s burgeoning AI research. Their persistence and scientific acumen were rewarded when the speed and power of computers caught up with them, allowing the computing-intensive theory to start moving out of the academy and into the marketplace. “That was a difficult time and it would have been easier to just stop and follow the fashion of the day,” Dr. Bengio recalls. “But sometimes people can be stubborn and here it paid off.”
Dr. Bernstein, whose institute will coordinate the federal AI spending, notes that $125 million might seem like a lot of money for one field of scientific endeavour. But, for the physicists and chemists whose noses might be a bit out of joint, he says that AI will have applications in other scientific fields. He recalls a meeting of materials scientists who were looking at using AI to winnow down a multitude of candidate molecules for new solar panels. And a Canadian biotech start-up, Toronto’s Deep Genomics, is using AI to untangle the complicated genetic basis of some illnesses.
The problem Canada now faces is to translate its theoretical and scientific lead into jobs, jobs, jobs. At AMII in Edmonton, “we have spent 15 years training students, but we haven’t had places for them to go in Canada,” says Cameron Schuler, the institute’s executive director. But demand for people trained in AI is “growing exponentially,” he says, with top salaries on offer in places like southern California. So the goal is for the three institutes to push what he calls “broad AI” – people who will use applications of AI, those who will develop new applications and those who will continue to advance the science – and to keep all of it in Canada. “That’s actually happening now,” he says.
Doina Precup, associate dean of research in the faculty of science at McGill and a senior fellow at CIFAR, agrees. “My PhD graduates, even three years ago, if they would have said, ‘I want to do research, where should I apply?’ I would not have said apply in Canada. … Now, all of a sudden, all the big players are here and our students are really excited. They see that they don’t have to leave the city.”
Dr. Precup, like her colleague Dr. Pineau, recently joined one of those big players – she now splits her time between McGill and Google as head of the company’s new DeepMind office in Montreal. She admits that she spends less time at the university as a result, but she still advises graduate students and teaches advanced computer science courses. She finds the research at Google exciting – “things move quicker there,” she says – and the connection with the company opens up many internship possibilities.
The two approaches to AI developed by Canada’s pioneers are called deep learning – the main focus of the Montreal and Toronto groups – and reinforcement learning, whose heartland is in Alberta. In a nutshell, the deep learning approach works by training a computer program on vast amounts of data so that it can pull out patterns and recognize them when it sees them again. In contrast, reinforcement learning uses a carrot-and-stick approach, in which the computer program is rewarded electronically when it does something right and punished for mistakes.
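For readers curious what the carrot and stick look like in practice, the sketch below shows reinforcement learning at its simplest: a tabular Q-learning agent on a toy five-state corridor, where reaching the rightmost state earns a reward and every other step costs a little. The environment, parameters and names are all illustrative, not taken from any of the labs mentioned here; production systems work on vastly larger problems with learned function approximators.

```python
import random

N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

# Q-table: estimated long-term value of taking each action in each state.
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Apply an action; return (next_state, reward, done)."""
    nxt = max(0, min(N_STATES - 1, state + action))
    if nxt == N_STATES - 1:
        return nxt, 1.0, True      # the "carrot" for reaching the goal
    return nxt, -0.01, False       # the "stick" for every wasted step

random.seed(0)
for episode in range(200):
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit the best-known action, sometimes explore.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy points right in every non-goal state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)
```

The agent is never told the rules; it discovers the "go right" policy purely from the rewards and penalties it accumulates, which is the essence of the approach pioneered in Alberta.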
“I don’t like the term artificial intelligence,” says AMII scientific director and U of A professor Osmar Zaïane. “Intelligence is intelligence.” But whether it’s computed intelligence or the sort that resides in your head, the term is “actually quite difficult to define,” he says.
Vector Institute research director and U of T professor Richard Zemel says a formal definition of artificial intelligence might be the “scientific and engineering study of getting computers to perform tasks that require some characteristic features of intelligence.” But that doesn’t move us much further ahead, he says with a chuckle. “What are characteristic functions of intelligence? That’s the debate.”
The main difficulty is getting software programs to learn in ways analogous to how humans learn. “Reinforcement learning” sounds very much like how we perceive our own teaching and learning, guided by a word of praise here or a shake of the head there, or by other forms of positive reinforcement. But we also learn in some cases by analyzing heaps of data and abstracting patterns. What’s more, the human form of “deep learning” requires much less information than a computer does to reach a result. A human infant learns what a cat is from a handful of examples; a computer might need millions before it can reliably identify a picture of a cat. A goal of current research, Dr. Zemel says, is “few-shot learning” – cutting down the amount of data needed to teach a computer to do a job.
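The core idea behind few-shot learning can be illustrated with a toy example: classify new points from only three labelled examples per class by comparing them to each class's average, or "prototype." Everything below – the labels, the 2-D points, the helper names – is invented for illustration; real few-shot systems also learn the feature space in which this comparison happens, which this sketch omits.

```python
# A "3-shot" toy classifier: three labelled examples per class, then
# new points are assigned to whichever class prototype (mean) is nearest.

def prototype(points):
    """Mean of a handful of example points - the class 'prototype'."""
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def classify(point, prototypes):
    """Assign the label whose prototype is nearest (squared distance)."""
    def dist2(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
    return min(prototypes, key=lambda label: dist2(point, prototypes[label]))

# Only three examples per class - far from the millions a big network needs.
support = {
    "cat": [(1.0, 1.2), (0.8, 1.0), (1.1, 0.9)],
    "dog": [(4.0, 3.8), (4.2, 4.1), (3.9, 4.0)],
}
prototypes = {label: prototype(pts) for label, pts in support.items()}

print(classify((1.0, 1.1), prototypes))  # near the cat cluster -> "cat"
print(classify((4.1, 4.0), prototypes))  # near the dog cluster -> "dog"
```

With a good feature space, a handful of examples per class really can suffice – closing the gap with the infant who needs to see only a few cats.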
One thing that’s clear is that today’s AI programs, however impressive, lack something we humans take for granted: flexibility. Recently, a program based on reinforcement learning – AlphaGo, designed by Google’s DeepMind subsidiary – repeatedly beat the best players of the ancient Chinese board game Go, and in the process actually invented new strategies that astonished the masters. But the program, despite its brilliance, can’t play chess. “Humans can play chess, play Go, fry an egg, drive a car, mow the lawn … we are very versatile,” U of A’s Dr. Zaïane says. On the other hand, when enough of our devices are interconnected – the “internet of things” – they might someday be able to work together to mimic human flexibility, he says.
But, for the moment at least, the robots aren’t coming. What is coming is a tsunami of new products and services that, by one estimate, will be worth $100 billion in five years. And, like real tidal waves, the AI tsunami has the potential to leave destruction and dislocation in its wake. Ottawa’s plan is to give the three AI institutes the support they need to surf the wave scientifically and technically, Dr. Bernstein says, and they’ll do that by hiring more top-level people and turning them loose on important problems, while at the same time collaborating with industry and training the youngsters that the growing commercial sector will need. There’s certainly interest, Dr. Zemel notes – an undergraduate machine-learning class he taught when he came to U of T 15 years ago had 25 students. “Last year it had 520,” he says.
The three AI institutes will get CIFAR funding to support new academic chairs, and there’s enough money available to hire between 40 and 50 senior people, Dr. Bernstein says. “One of the bottlenecks for universities is hiring enough professors in the area, so the CIFAR chairs will be very helpful,” says McGill’s Dr. Precup. Some will study classical AI, but others might be software engineers or computational biologists. If all goes well, the three centres combined will train up to 300 postgrads a year, as well as many who stop short of an advanced degree but will still work in the field.
Another important aspect, Dr. Bernstein says, is to understand the kind of disruption AI will bring – on employment, privacy and our relationships with our devices – and finding ways to navigate it. This will require expertise from the humanities and social sciences, from ethicists to legal scholars, he says.
“AI will bring value to all the consumers of all these products,” Montreal’s Dr. Bengio says, but “I believe there will very probably be major social disruption,” due mainly to changes in the job market as AI applications render some jobs obsolete. That sort of upheaval is likely to happen much more rapidly than it has in the past, and governments will need extra sources of revenue to handle it, by improving social security systems, for instance, or beefing up training programs. “The only way I can see that this can work is if some of the wealth created by AI is generated here,” Dr. Bengio says. “Canada needs to be a leader not just scientifically but also industrially.”
So that’s why Canada and why now. But is Canada’s grand plan for AI likely to bring home the scientific and industrial bacon? Dr. Precup is cautiously optimistic. “It’s early days. There is a lot of excitement, and a lot of really neat stuff that’s been done in terms of the applications and the theory, but there are many open questions that remain,” she says. “I think we’re going to have a lot of fun for quite a while.”
The Forum on the Socially Responsible Development of AI, held in Montreal, ended with the release on November 3 of the Montreal Declaration for a Responsible Development of Artificial Intelligence. The declaration, according to organizers, aims to spark public debate and encourage a “progressive and inclusive orientation to the development of AI” around seven key values: wellness, autonomy, justice, privacy, knowledge, democracy and accountability.
“We want this declaration to spark a broad dialogue between the public, the experts and government decision-makers,” says Guy Breton, rector of Université de Montréal. AI, he says, “will progressively affect all sectors of society and we must have guidelines, starting now, that will frame its development so that it adheres to our human values and brings true social progress.”