Years before OpenAI’s ChatGPT startled the world in 2022, Erik Brynjolfsson was studying the questions that have since made AI a red-hot topic: Will the technology spark mass unemployment? Which companies will thrive or wither as AI advances? Will rapidly improving AI be good or bad for the economy?
Brynjolfsson is well equipped to answer such questions. He’s a professor at Stanford University, director of its Digital Economy Lab, and an economist who moved his office to a different building “to be in the middle of all the computer scientists.”
Unlike many others, Brynjolfsson sees AI potentially bringing a bright future for human workers. The key to developing AI is “augmenting humans rather than mimicking them,” he says. From an economist’s perspective, “augmentation creates new capabilities and new products and services, ultimately generating far more value than merely human-like AI.”
In an interview with Fortune, Brynjolfsson explains how some business leaders are falling behind in adopting AI, why he thinks it will be “an elixir” for the economy, and why bosses should get employees using AI more.
This interview was edited and condensed for clarity.
Fortune: Millions of employees are worried about losing their jobs as AI and other technologies advance. What does your research say?
Colleagues and I looked at how AI and machine learning would affect occupations. We analyzed what types of tasks would be automated—taken over by the technology—and what types of tasks would still be performed by humans augmented by technology.
There was no occupation where these technologies just ran the table and could do all the different tasks. In each case there were some tasks—like reading medical images, which radiologists do—that technology can do much better. But radiologists actually do about 27 distinct tasks, according to our taxonomy. And there are some tasks—consulting with other doctors, explaining outcomes to patients—where AI is really not the best tool or cannot be used at all. So roles will change a lot, and managers and entrepreneurs will be creative about how they do that. I certainly expect a lot of disruption and change, but not mass unemployment.
What’s an example of AI augmenting rather than replacing a worker?
In a study that colleagues and I conducted, a company with a call center did a phased roll-out of a large language model—generative AI—that gave suggestions to some of the workers [as they responded to callers], but not to others. So we got a kind of controlled experiment. The people who had access to the technology were dramatically more productive. It was about 14% on average, but the least experienced workers were about 35% more productive within just a couple of months, a big, big change. They were going up the learning curve so much faster with this tool. We looked at millions of transcripts and found that customer sentiment and customer satisfaction dramatically improved. There were a lot more happy words and a lot fewer angry words in those discussions. The employees seemed happy. They were less likely to quit—much less turnover. It was really a win for every group. This is an example of a system that was used to augment workers, not replace them. It was still a human talking to the customer, not a machine talking to the customer.
What does this mean for the U.S. economy? Labor productivity has increased only about 1.5% a year since 2005. Can AI improve that?
I see a coming productivity boom—about a doubling of productivity growth in the coming decade as a result of these technologies. It’s not as simple as just buying the software or hardware. General-purpose technologies like electricity, the steam engine, and early computers often take a decade or more to translate into significant productivity gains because of all the changes you have to make in business processes and reskilling. This time it’s definitely happening faster.
Doubling productivity would be really significant.
For a lot of the problems we have—the federal budget, health care, these big, nagging problems—productivity is like an elixir that makes all those problems much more solvable.
Some business leaders are ordering their employees not to use ChatGPT for data security reasons. Are they making a mistake?
You need to embrace this technology and not resist it. Some of my fellow teachers and professors tell their students not to use it. There are some potential security issues that need to be addressed, but I don’t think they’re insurmountable. A number of companies—Cohere, Abacus, others—will train your models locally. OpenAI [creator of ChatGPT] will sign NDAs [non-disclosure agreements]. Certainly people have to be careful about proprietary data not getting into the wrong hands, just as you would when you’re using cloud services or anything else. But I think that’s mostly a bit of a distraction. This is a transformative technology. Every company should be working really hard to figure out how its employees can use it more, not less. Put in security, but don’t miss the forest for the trees.
You teach a course called The AI Awakening. When you taught it last spring, what were the students interested in?
It was way oversubscribed, so we chose the 70 best students. More than half of them were looking to start companies in this space. It’s not a random sample. It’s Stanford.
This story was originally featured on Fortune.com