The Machine Age

In 2012, computer scientists working with Google conducted a novel experiment in artificial intelligence. They created a program to review 10 million random YouTube videos and look for recurring patterns. There was no human coaching or guidance, just an algorithm that scanned the pixels in every scene of every video.

The algorithm was not entirely new. Mathematicians had previously used similar techniques to organize large data sets. What was new was that the program mimicked the way humans identify and learn visual patterns. And the computer that ran it was powerful enough to teach itself about those patterns fairly quickly, without the benefit of human intervention.

After three days, the program had identified the recurring patterns it had learned to treat as important. Among the many it found, the most prevalent was the face of a cat.

In case you weren’t aware, funny cat videos dominate YouTube. They are everywhere.

The popular press reported on the Google research with journalistic giggles. Computer scientists and specialists in artificial intelligence, however, took it more seriously. No one taught the computer to identify the face of a cat or to associate that pattern with something important. The program did it all by itself.

The algorithm used in the research is part of a field of programming called “deep learning.” Whether or not you have heard the term, assume it will become common parlance very soon. It has nothing to do with teaching young people more effectively and everything to do with the way computers learn.

The origins of computer code lie in machine language, which is simple and brutish. A program could execute a task, no matter how complex, as long as the human programmer knew what she wanted the computer to do and had the time to set out meticulous instructions.
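As a caricature of that older style (my own toy example, not anything from the Google research), every condition must be spelled out by hand. Here a hypothetical “is this a cat video?” check is written the brutish way, with the programmer enumerating the rules herself:

```python
# Explicit, hand-written rules: the human lists every case in advance.
def looks_like_cat_video(title):
    """Crude keyword check; the program knows only what it was told."""
    keywords = ("cat", "kitten", "meow")  # the human must supply these
    title = title.lower()
    return any(word in title for word in keywords)

print(looks_like_cat_video("Funny KITTEN compilation"))  # True
print(looks_like_cat_video("Tax law seminar"))           # False
```

The program never improves: a video titled “adorable tabby” slips past it, because no human thought to list that word.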

The deep-learning technique is different. Deep learning, in its newest form, duplicates the way human beings observe and learn from the environment: recognizing patterns, sorting those patterns into groups, and then associating new patterns with the learned experience.
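The three-step loop described above can be sketched in miniature. What follows is a deliberately tiny illustration using simple clustering, not Google's actual system (which used a large neural network), and all the data points are invented: the program groups unlabeled observations on its own, then files a new observation under the group it most resembles.

```python
# A toy sketch of the learning loop: observe, group, associate.
def centroid(points):
    """Average position of a group of 2-D points."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def nearest(point, centroids):
    """Index of the learned group whose center is closest."""
    dists = [(point[0] - cx) ** 2 + (point[1] - cy) ** 2
             for cx, cy in centroids]
    return dists.index(min(dists))

def kmeans(points, k=2, iters=10):
    """Discover k groups in unlabeled data, with no human coaching."""
    cents = points[:k]  # naive initialization for brevity
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                      # sort points into groups
            groups[nearest(p, cents)].append(p)
        cents = [centroid(g) if g else c      # re-learn each group's center
                 for g, c in zip(groups, cents)]
    return cents

# Step 1: "observe" unlabeled examples (two loose clusters, by design).
observations = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),
                (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]

# Step 2: the program sorts the observations into groups itself.
cents = kmeans(observations)

# Step 3: associate a new pattern with the learned experience.
print(nearest((4.8, 5.3), cents))  # lands in the group learned near (5, 5)
```

No one told the program where the groups were; it inferred them from the data, which is the essence of what made the cat result remarkable.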

Until recently, the limiting factors for deep learning were computational speed and capacity. But those limits are quickly receding as we ride the exponential curve of Moore’s Law, with processing power doubling roughly every two years, seemingly without end.

It is only a matter of years before everyday computers, infused with deep-learning algorithms, will handle tasks like writing, creative problem solving and even artistic creation more efficiently than human beings.

That sounds profound, and it is. It also is both exhilarating and worrisome. As Jeremy Howard, the computer scientist and deep-learning expert, warns, we have faced transformative technology before, such as during the industrial revolution. We have not, however, faced transformative technology that moves so quickly that it exceeds the human ability to catch up, compensate or cope.

You may assume that lawyers do not need to worry about this deep-learning revolution. The legal profession rests atop the service economy. Computers may encroach on journalism, medical diagnosis, accountancy and auditing, but lawyers, the thinking goes, bring reasoning, skill and common sense that computers cannot duplicate. We bring judgment.

But at the current pace of computational change, that is a naïve assumption.

Our value as lawyers is rooted in the same cognitive tools that underlie all intellectual professions: pattern perception and classification. Those are the same skills that deep learning exploits. No matter how jealously we protect our social status as lawyers, we are limited by the computational speed of our own brains and the heuristic bias of our experience. Computers have no such limitations.

This is not a Luddite’s rant. The social benefits of deep learning are profound, empowering computers to solve creative problems that a human could never attempt. But with exceptional social benefit comes exceptional social change.

So the question we should ask ourselves is not how to cling to our old, analog profession but how to benefit and flourish in a world where computers are better thinkers, problem solvers and creators.

The Google experiment from 2012 is old news. The world has already transformed since then.