
How artificial intelligence is about to disrupt higher education

Education in the 21st century must be built on the premise of human-machine entanglement.

By OLLIVIER DYENS | APR 30 2014
“For the past century or so, the most reliable path to wealth has been the ability to process information. That’s what got you into the most prestigious universities and graduate programs, which led to a career at a prestigious firm. Yet, the reason that information processing has been so highly valued is precisely because humans are so bad at it. So it shouldn’t be surprising that computers are taking over what we have come to regard as high-level human tasks.”
— Greg Satell, "Why the Future of Technology is All Too Human," Forbes

When Google announced its recent acquisitions of Boston Dynamics, DeepMind and Nest, many in Silicon Valley became uncomfortable if not outright nervous. What's behind Google's new interest? Why is the search giant buying one artificial intelligence (AI) and robotics-based company after another? And while pundits all over the world offered their analysis of the situation, no one in higher education even batted an eyelid. AI? Not particularly interesting, except to a few in computer science. We have much bigger fish to fry right now: MOOCs, online learning, learning outcomes, budget cuts, government policies and so on.

We should have paid attention. As Wired announced: Google is trying to make our brains irrelevant.

The 20th century saw machines take over most of our physical endeavours. The 21st will see machines take over most of the cognitive ones. We do not lift, pull, push or bend things anymore. We oversee machines doing so. Soon, the same process will apply to our ability to analyze, understand and create. Maybe even empathize. We are entering what Erik Brynjolfsson and Andrew McAfee call The Second Machine Age.

Surgery will soon be conducted mostly by machines. In the near future, diagnostics of all kinds will be produced by computers. In many law offices, software has already replaced lawyers. Tax accountants are also threatened. In the next few years, translators will be unemployed except in rare and very specialized cases. Most financial trading is already done by machines. Even fundamental human skills, such as counselling and psychology, will use AI software to complement, and oftentimes even guide, human training, feeling and judgment.

Andrew McAfee wrote:

Human parole boards do much worse than simple formulas at determining which prisoners should be let back on the streets. Highly trained pathologists don’t do as good a job as image analysis software at diagnosing breast cancer. Purchasing professionals do worse than a straightforward algorithm predicting which suppliers will perform well. America’s top legal scholars were outperformed by a data-driven decision rule at predicting a year’s worth of Supreme Court case votes.

But this is only the beginning. Merge AI with Big Data and you have the making of a perfect storm.

While Big Data will be able to predict almost any behaviour, and detect trends that we thought were the exclusive domain of free will, AI will create storylines out of this data. For example, most universities today struggle with mental health issues and with retention and graduation rates. Use Big Data, crunch the numbers in specialized AI software, and soon the narrative of why and how mental health issues appear, of why some students persist and others do not, will become clear, predictable and operational. Our messiest and most complex behaviour will soon become quantifiable because the amount of available data directly linked to behavioural patterns is simply enormous. In fact, People's Analytics is already used in many large-scale companies and it can quite reliably predict what type of person will end up moving up the hierarchy, something that was once left to intuition. A company called Cataphora already produces software that can recognize sentiments in an email or "subtle changes in style" and sniff out unusual patterns that are key to white-collar crimes.

The arts are not immune. Software exists that can now reliably predict the potential success of a song, or of a screenplay. Have you heard of David Cope? He invented algorithms that have composed classical music so rich that experts are often fooled into thinking the piece was not only composed by a human being, but by a great human composer.

Fiddling while Rome burns?

In the meantime, we debate over how much education should cost; over whether the development of skills is more efficiently done in person or through a network; over how current and future demographic trends will impact enrolment. And while dangerous and threatening clouds are forming on the horizon, we often create programs and curricula as if the world around us had barely changed. The classroom, the lab, the credit-based curriculum and the number of contact hours are still fundamental features of our higher education system.

Is this enough? Not in the slightest. We must ask ourselves whether our education system, our methods and structures are appropriate to deal with highly performing AI and Big Data. Are we giving students the tools needed to understand, monitor and guide machines, to tap into AI and Big Data’s amazing and frightening potential? Are we training students so that they will not be passive bystanders but active drivers of positive changes? Do we know how to ensure that students will have the skills needed to understand the exponentially complex world we live in, a world where most decisions will be made by machines? Rest assured, most of those decisions will be to our benefit. AI, using Big Data, will have been programmed to ensure the improvement of the human race. But if we’re not careful, we will not be privy to those decisions.

The last 50 years have been both disruptive and transformative. Empires, both political and financial, have fallen; technology has kept inching closer to biology; IT-based communication has gone beyond ubiquitous and is now probably fundamentally affecting the gene pool. But one even bigger revolution has taken place since the Second World War, one that has profoundly transformed humanity: we now experience the world almost uniquely through the "mind" of the computer. Every day, our experience of reality is mediated by machine language, by algorithms, by software and by the ability of networks to fire information closer and closer to the speed of light. We now cognitively and physically experience the world through machines. Every day, our sensual experience becomes a machine one.

We already have visual access to both the cosmos and the strange universe of the microcosm. Soon, we will see, hear, touch and taste worlds even more foreign to our senses, dimensions our brains and our biology are not meant to access. The computer-mediated experience of the world will open up fantastic new realms of the physical, of the cognitive and of the sensual. The microcosm and the macrocosm will be experienced physically and that experience will uncover extraordinary new structures of understanding, overwhelming new sensations and astonishing unique perceptions. And that experience will be total: for humans the world will soon be embodied almost exclusively through machines.

Now imagine for a moment the convergence of AI, Big Data and computer-mediated experiences. Suddenly, our ability to pull back into a human-mediated experience becomes very remote. Through AI and Big Data, reality becomes so multifaceted that only a computer-mediated, filtered and reconstructed experience of the world can make sense of it. We quickly become spectators in and of our own world.

Many people will disregard these warnings by pointing to the failed promises of AI over the last 50 years. But that was before Big Data. That was before Deep Blue, Watson, highly performing networks, Nuanced AI, Deep Learning AI, frighteningly lifelike robots, computers that learn from experience. That was before the self-driving Google car, which seemed impossible only a few years ago. And while many people have heard of this fantastic self-driving automobile, few realize that most cars today are quietly becoming independent of their drivers. Look under the hood, as we used to say: your car brakes for you, steers for you, monitors you and keeps you awake; your car sees traffic problems way ahead of you, warns you and reacts for you if you don't; your car parks for you, knows where you're going… But yes, go ahead and believe in the fiction of humans controlling machines, that driving today is possible without machine language and a computer-mediated experience of the world.

There is urgency involved

While this may sound very dramatic and dystopian, I am neither pessimistic nor fundamentally worried. We will be changed. We will lose many things but also gain so many more. We have co-evolved with technology for hundreds of thousands of years and perhaps even more. Our fate has been linked to that of technology for as long as we have been present on this earth.

I am not pessimistic, but I feel a deep sense of urgency. We absolutely need to rethink our education system, to understand these fundamental shifts and to provide our students with cognitive skills, abilities and methods that will allow them to seed this world with humanity.

But this will not be achieved using current methods.

A good friend of mine holds a very important position at GE Medical Systems. A scientist and computer programmer, he is the lucky father of a very cute eight-year-old. Recently he asked me: Why should my daughter learn multiplications? What's the point?

The immediate reaction would be: to train the brain to think in a certain way, to develop a cognitive process that will one day be useful for much more than simple multiplications. But the real question is: are multiplications the best way to train the human brain to deal with a machine-dominated and machine-mediated world? Is using a cognitive process that hasn't changed in hundreds of years, a cognitive process that was meant to deal with a much simpler, much more human-mediated world, the best way to develop a brain and an emotional system capable of living, breathing and swimming in a "computer-complexified" reality?

Garry Kasparov, in the New York Review of Books, talks about the strange and fascinating relationship between AI and chess. While the game has changed dramatically since Kasparov lost a match against Deep Blue in 1997, it has not disappeared. In fact, thanks to computer software, a different chess culture has developed. For example, very young players can now develop grandmaster-level skills independently, something that was unthinkable only a few decades ago. There are even tournaments now where humans and machines work together to move the game forward into mysterious new realms.

He wrote:

The surprise came at the conclusion of the event. The winner was revealed to be not a grandmaster with a state-of-the-art PC but a pair of amateur American chess players using three computers at the same time. Their skill at manipulating and “coaching” their computers to look very deeply into positions effectively counteracted the superior chess understanding of their grandmaster opponents and the greater computational power of other participants. Weak human + machine + better process was superior to a strong computer alone and, more remarkably, superior to a strong human + machine + inferior process.

Our next move

My proposal is to think of chess as an analogy for education. The development of critical and creative skills, of communication skills, of ethics and citizenship should proceed not with or alongside computers and software, but within and through non-organic intelligence. We "use" computers in the classroom as we use other teaching tools. But AI, Big Data and the Internet of Things are not tools. They are environments ripe with intelligent structures, dynamics and patterns. Our education system must become a process that not only taps into this environment, but grows through and within it. Education in the 21st century must be built upon the premise of human-machine entanglement.

If we do not seriously reframe our education system, if we do not develop highly specialized cognitive skills among our students, if we do not pause and think about what is specific and unique to the human cognitive system (the ability to strategize, to make metaphors and analogies, to merge concepts into ideas, to recombine, to use senses and emotions along with reasoning to model the world, to produce representations that feel like reality), if we do not ensure that our education system taps into these skills and enhances them through machine intelligence, then at best we will be like today’s and tomorrow’s drivers — thinking of driving as something we control. We will, at best, create a generation of men and women barely engaged cognitively, a generation for whom the world becomes less and less manageable, a world where complexity, challenges and drama are dealt with by machines. And at worst, we will create a generation with irrelevant cognitive skills, a generation that will question what it means to be human when even the act of thinking about such a question will be outperformed by a machine.

Dr. Dyens is deputy provost (student life and learning) at McGill University. His field of research is the impact of technology on society.

COMMENTS

  1. PJB / April 30, 2014 at 2:26 pm

    This progression is not progress at all. I’m convinced this is the wrong direction to move. I’ve only seen a sharp decline in critical thinking skills, and accurate weighing of the evidence, with the increased use of infotech. For a small example, I’m going to have to do a simple mental calculation to send this response. My children are not learning to do this in school today, because of dependence on tech to think for us. They’ll need to ask a computational device to come to the simple answer.

  2. beth / April 30, 2014 at 3:24 pm

    I don’t like the tone of this article at all. We as free citizens have a right to say no, we do not want to live in a world run by machines. We should not just be forced to accept that this is how things are going to be. If children are not taught that 1 + 1 equals 2, then they will be like those living in Orwell’s 1984, forced to accept whatever the machine tells them is true.

  3. Michael Monagan / April 30, 2014 at 4:58 pm

    I agreed with this, up to the point when the author could not answer “Why should my daughter learn multiplications?” and decided that multiplication is not important. That made me upset.

    We’ve had calculators since 1975 and yet we still teach how to multiply. Why? We have learned that to understand what you are doing (when you multiply) you do need to do it yourself (for a while) before you use a calculator.

    Incidentally, to submit this Email I had to answer a security question. Guess what it was?

    It was to multiply 1 x 5 = ….

    That speaks volumes.

  4. Andrew Park / May 1, 2014 at 10:23 am

    What a depressing and disempowering vision of the future. There are so many things wrong with this that I hardly know where to begin.

    Maybe I should start with the standard response I have whenever someone talks (with breathless anticipation) of the coming melding of human and machine and the virtues of artificial intelligence (AI):

    “How about some real intelligence?”.

    When I see students who have been raised as digital natives, I do not see a generation that is more intelligent, enlightened, or better informed than their pre-digital peers. In many ways, they appear less informed about the things that matter.

    How can this be when they have a world of information at their fingertips? I suggest that it is because the internet is not really an environment “ripe with intelligent structures, dynamics and patterns”, but a chaotic no-man’s land where unordered “info-tainment” and other distractions sap our shrinking attention span and seek to occupy the entirety of our shrinking bandwidth.

  5. Joel C / May 1, 2014 at 9:49 pm

    The comments thus far make me wonder if anyone actually read the article.
