Our glorious future — more robots, more human flourishing
Peter Thiel’s interview question — what important truth do very few people agree with you on? — is famous for a reason. It lends itself to discussion; important and very few are quite subjective.
If you answer it honestly, you admit you’re going against the consensus on an important issue. Doing that takes some courage. And if it’s in the context of an actual interview, it also risks alienating your interviewer in multiple ways. For example, he may think your important truth is unimportant, thereby throwing your judgment into question. Or he may be committed to the opposite of your position, and unable to overcome any bias he may carry against those on your side of the issue. Or, of all the horrors, you might not be able to back up your claim — or even change your mind on it when challenged — thereby showing any number of undesirable characteristics. (Not that changing your mind is undesirable; I would like to think that’s what I do when presented with compelling arguments or perspectives on a topic.)
I don’t know what the best answer I could come up with for the question is, but I do have a topic in mind that is important and that seems to have a high percentage of people I try to follow on one side of it. We’re going to explore some arguments concerning automation, the increasing use of robots and artificial intelligence, and the effect of technology on jobs, societal wealth, and human happiness.
I’ll conflate these concepts throughout (automation/robots/AI/technology) for ease of reading, but I think of them as the same thing for purposes of my argument.
Crudely put, I’m pro-technology because it will lead to more human flourishing. Setting up the topic as a binary choice — i.e., whether the effects of automation/AI are generally good or bad for us — is probably unfair, but I’m going to do it anyway because that’s typically how I see automation discussed.
None of what follows is meant to minimize the negative effects of automation on any particular person. So if you or a loved one lost a job to automation, don’t think that any of this diminishes that experience. In fact, I fully expect most of us (myself included) to experience some personal loss due to improvements in AI, but all of these improvements will create a wealthier world with more opportunities for more people, and hopefully much more human happiness.
II.
Many very intelligent people disagree with me on automation, which is one thing that captivates me. But I’m also intrigued by how convinced they seem to be that increased automation is going to be awful for us. Even better, at least for purposes of a good discussion, many of them think that some of us are out of our minds for looking forward to our glorious future filled with AI.
Consider what Andrew Yang recently said on Sam Harris’s podcast. They were discussing universal basic income, and this automation issue was something of a focus of that discussion. The utter disdain they conveyed for the argument I’m going to make (about not fearing the robots) was astonishing. A sample from Yang:
“I find this line of reasoning just so lazy, and ridiculous, and frustrating, where otherwise educated people will actually cite the Industrial Revolution, and say ‘but look, 120 years ago we went through something similar, and things were like…’ that’s actually the argument.”
And Harris seems to share this view, saying on the same episode that a few heuristics — like appeals to the Industrial Revolution and to Luddites — are doing an inordinate amount of work in the minds of those of us who don’t fear automation. At one point, Harris said the following about their anti-AI view (emphasis mine):
“It should be obvious, I mean, you would think it would be obvious, but it isn’t… AI is not analogous to an internal combustion engine; the way in which an engine replaces human labor is not at all the way in which true AI will replace human labor… there is a category shift in what is being accomplished with this new technology… It is nowhere written into the book of nature that there must always be things that we value and are willing to pay for that humans will be able to do best. And certainly it is not written anywhere that there always will be an equal number of things that we will be willing to pay for that humans do as well as any other technology, you know, when compared to any point in the past. And so, the idea that this is a stable situation, when we are envisioning a time when we will be able to build machines that are better than us at, in the ultimate case, everything we do, right? And then it’s just, what is left for humans, will be guided merely by our preference to be in the presence of humans, even if they are doing this job worse than machines can do it. It’s a very short list of things I think you will insist be done by a human… I am, like you, mystified by the skepticism here.“
That wasn’t the first time Harris conveyed his fear of the effects of automation. From a 2015 blog post by Harris:
“There is no law of economics that guarantees that human beings will find jobs in the presence of every possible technological advance. Once we built the perfect labor-saving device, the cost of manufacturing new devices would approach the cost of raw materials. Absent a willingness to immediately put this new capital at the service of all humanity, a few of us would enjoy unimaginable wealth, and the rest would be free to starve.”
Sam Harris is hardly the only brilliant person worried about automation. Scott Alexander has suggested on multiple occasions that AI will eventually be better than humans at everything, and lamented that we’ll all be unemployed at that point. From The Death of Wages is Sin:
“But there’s also this reductio ad absurdum where we can manufacture androids exactly as smart as humans in every way for $1. In this world, it seems obvious that all companies would buy androids (who work for free) and fire all their human workers, meaning an end to human employment.”
And from Technological Unemployment: Much More Than You Wanted to Know:
“Technology seems poised to disrupt lots of new industries very soon, and could replace humans entirely sometime within the next hundred years. (???)”
(The question marks at the end denote no confidence level expressed about this ultimate conclusion to his post, unlike the others, for which he expressed between 60% and 100% confidence.)
Even some economists seem to be on the side of the Luddites. Robin Hanson, who has also discussed this issue on Harris’s podcast, has written a book that considers the topic. I should probably go back to that podcast or read him directly, but this is a blog, so I’ll risk being unfair by citing a secondary source. As Business Insider put it (emphasis mine):
“Robin Hanson predicts in “The Age of Em” that we’ll develop cheap technology for emulating brains on computers in the next 100 years.
He expects emulations, or ems, to be like human brains but able to run 1,000 times faster and be copied. He predicts they will quickly put every human out of work and create a radical new civilization, living by the billions or trillions in a few megacities.”
There are certainly countless other examples along these lines. They all share the concern that significant amounts of human labor will no longer be valued in a world where AI comes to do things that people do today. And where much human labor is no longer valued, what will the world look like? What will people do?
III.
We shouldn’t worry about societal collapse over better technology. The fact that Harris and company are worried about it just goes to show that even brilliant people can make mistakes — even when they’re given what should be the antidote to this fear.
What is the antidote? Probably not analogies; Harris conveyed several arguments to Yang — as they were discussing this issue — that I thought should have calmed any concerns about automation. But they didn’t do the trick, probably because they were based on analogy, without an explanation aimed at Harris’s main point (that AI is different). That let Harris and Yang fight the validity of the analogies rather than the argument itself.
Harris relayed to Yang the contents of an email that “a very successful entrepreneur and VC” sent him in preparation for the podcast. As Harris explained, the email laid out the “central concern,” the reasons to doubt the anti-robot case (he jumps back and forth on this issue in the excerpt below, as he’s speaking for both the anti-robot view that he holds and the pro-robot venture capitalist):
“This is the kind of the first objection that you just have to figure out how to ram through if you’re going to get people to take UBI seriously. And so it’s this notion… that it really is different this time… We have obviously lived in a world for at least 150 years or so where we have noticed this effect of breakthroughs in technology where something comes online and it destroys jobs, we find new efficiencies in some labor process, and people can’t envision what the replacement jobs will be. And so there’s kind of this Luddite delusion. And what we’re saying, what you’re saying certainly, is that this time is different…. you could have gotten into a time machine and stood with the Luddites and shared their delusion and not seen what jobs would come in the wake of all the jobs that were being destroyed. There is this conviction that there will always be things for people to do. There will be jobs as long as there is anything in this world that people want.”
Harris continued relaying the pro-robot VC’s views, which should provide additional reasons not to fear the robots. One was that people are and will always be assets, and another was:
“that if we weren’t destroying jobs through breakthroughs in technology, that would be synonymous with the lack of material progress… this is always the process that has to be hoped for… it is this creative destruction picture of finding new efficiencies.”
I completely agree with the unnamed VC who tried to set Harris straight on automation/AI being a net positive. Another economist I used to read shares my perspective on the issue. Don Boudreaux has spent years responding to letters from readers concerned about automation, AI, and the like. The ways he’s tried to make his point have varied.
Boudreaux has heard people worry that AI is becoming too human-like, to which he responds:
“Consider this: However impressive artificial intelligence might be, and however close science gets to creating machines that are as intelligent as human beings, each and every human being is an instance of real, authentic intelligence. With the birth of each baby, the world’s stock of real intelligence rises. Should we worry that this daily rise in real intelligence will heave most human beings into the ranks of the impoverished unemployed? It hasn’t yet. Why, therefore, worry that the rise of artificial intelligence will create a problem that the dramatic rise in real intelligence has yet to create? Note that during Mr. Hawking’s own 76 years of life the amount of real intelligence on earth increased by 280 percent, and yet the number of jobs – for workers of all skill levels, including the lowest – impressively increased during those years, as did average real wages.
We humans have from our time in caves found ingenious non-human means of doing work once done by humans. Thus far, such innovation has elevated – enormously – both our material and non-material standards of living. I see absolutely no reason to worry that the labor-saving innovations of today, even though they be called “artificial intelligence,” will lead to any less-happy outcome.”
And in response to concerns about technological unemployment creating a permanent decrease in jobs, Boudreaux says (emphasis mine):
“In presenting this example [about population increases not leading to lost jobs] I don’t mean to imply that no significant differences separate humans from robots. Differences there certainly are, some of which might indeed justify your anxiety. But too much of today’s fear of robots and innovation (and, also, of trade and immigration) – including your fear – rests on the historically and economically incorrect presumption that the number of productive tasks that we humans can perform gainfully for each other is limited. I, in contrast, believe that this number is practically unlimited.”
The unlimited-tasks argument was within the list of arguments that Harris recounted from his VC friend, excerpted above. Matt Ridley says something very similar:
“There are infinite new ways we can think of fulfilling each other’s needs and desires in exchange for reward. Look at the way modernity’s spectacular productivity has allowed the revival of crafts or the resurgence in live performance.
And in the unlikely even[t] that this end point were ever reached, so what? A world in which machines do literally everything we can ever think of needing done (“Take me to Mars, Hal, and on the way rewrite Shakespeare as rap”) is a world in which we can spend our entire time consuming the products of those machines’ work. After all, the purpose of all work is consumption, as Adam Smith nearly said. The Tim Worstall puts it this way: “There will continue to be jobs for humans as long as there are unsatisfied human wants and desires. Once all of those are satisfied then jobs don’t matter, do they?””
I think the idea of limitless desires is the more intuitive argument for why automation, and the cheap production that it delivers, is beneficial. That is, we’ll always want more and better goods and services, and because technology makes it possible to satisfy those demands cheaper, it benefits humanity. But I want to bring up a less intuitive argument for it.
IV.
I hadn’t seen anyone mention comparative advantage in these debates until I started writing this post, but it’s the stronger argument for not fearing technological progress. In short, comparative advantage says that if we care about having more wealth rather than less, then we should produce what we’re relatively better at, and exchange with others to obtain what they’re relatively better at producing.
That part is obvious enough that most people don’t fight it. But comparative advantage also says that it doesn’t matter whether you’re worse than your trading partners at everything, or better than your trading partners at everything — in either case, and those in between, you and your trading partners create more wealth by specializing and trading. When discussing an example about preparing for a dinner party with a roommate, and how best to allocate the chores of cooking and cleaning, EconLib put it like this:
“But what if your roommate is a veritable Martha Stewart, able to cook and clean faster and better than you? How can you earn your keep toward this joint dinner? The answer is to look not at her absolute advantage, but at your opportunity costs. If her ability to cook is much greater than yours but her ability to clean is only a little better than yours, then you will both be better off if she cooks while you clean. That is, if you are the less expensive cleaner, you should clean. Even though she has an absolute advantage at everything, you still each have different comparative advantages.
The moral is this: To find people’s comparative advantages, do not compare their absolute advantages. Compare their opportunity costs.
The magic of comparative advantage is that everyone has a comparative advantage at producing something. The upshot is quite extraordinary: Everyone stands to gain from trade. Even those who are disadvantaged at every task still have something valuable to offer.”
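The roommate logic is easy to check with arithmetic. Here is a minimal sketch in Python using made-up hours (the figures are mine, not from the EconLib piece): Martha needs one hour for either chore, while you need three hours to cook and two to clean, so she holds the absolute advantage in both.

```python
# Hypothetical hours each person needs per chore (invented for
# illustration; these figures are not from the EconLib example).
hours = {
    "Martha": {"cook": 1.0, "clean": 1.0},
    "You":    {"cook": 3.0, "clean": 2.0},
}

def opportunity_cost(person, task, other_task):
    """Units of other_task given up per unit of task performed."""
    return hours[person][task] / hours[person][other_task]

# Martha gives up 1 cleaning per meal cooked; you give up 1.5.
assert opportunity_cost("Martha", "cook", "clean") == 1.0
assert opportunity_cost("You", "cook", "clean") == 1.5

# Assign each chore to whoever is the cheaper producer by
# opportunity cost, not by raw hours.
cook = min(hours, key=lambda p: opportunity_cost(p, "cook", "clean"))
clean = min(hours, key=lambda p: opportunity_cost(p, "clean", "cook"))
print(f"{cook} cooks; {clean} cleans")  # Martha cooks; You cleans
```

Even though Martha is faster at both chores, assigning by opportunity cost still hands you the cleaning — which is exactly the EconLib moral: compare opportunity costs, not absolute advantages.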
Notice that this failure to distinguish between an absolute advantage and a relative advantage is present in all of the anti-AI arguments from above. Harris, for example, puts it quite explicitly in his argument (emphasis mine):
“It is nowhere written into the book of nature that there must always be things that we value and are willing to pay for that humans will be able to do best.”
Comparative advantage tells us that it’s completely irrelevant whether humans are better than machines at any tasks. We’ll still have more wealth by doing what we’re least bad at doing, relative to the robots.
Giving my own example in the AI context would be fine, but this has turned into a monster post of block quotes, so let’s just continue in that spirit. A random site I found from searching “comparative advantage with robots” returned something from a University of Michigan econ course that gets at what I’m thinking:
“there will be tasks that robots are relatively worse at, even if they can outcompete humans at most tasks. So long as there is some finite supply of silicon and metal and servers, then there will be more and less productive uses of computing power, and so there will be better and worse things that humans can do. Sure, computer vision might get really good, but are we willing to spend our computing power to make better toys relative to, say, more complex tasks in medicine? And if there is this allocation of computing power, then on the other side there must be an allocation of human power to do other jobs.”
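That allocation point can also be made concrete with numbers. In the sketch below (all productivity figures invented for illustration), the robot out-produces the human at both tasks, yet shifting work according to opportunity cost leaves the pair with the same number of haircuts and strictly more widgets.

```python
# Invented productivity figures (units per hour). The robot beats the
# human at both tasks, i.e., holds the absolute advantage in everything.
output = {
    "Robot": {"widgets": 10.0, "haircuts": 4.0},
    "Human": {"widgets": 1.0,  "haircuts": 2.0},
}
HOURS = 8.0  # each works an eight-hour day

# Baseline: no trade, each splits its day evenly between both tasks.
baseline = {
    task: sum(output[agent][task] * HOURS / 2 for agent in output)
    for task in ("widgets", "haircuts")
}
# baseline == {"widgets": 44.0, "haircuts": 24.0}

# With trade: the human's opportunity cost of a haircut (0.5 widgets)
# is below the robot's (2.5 widgets), so the human specializes in
# haircuts. The robot covers the remaining haircuts to hold the total
# at the baseline 24, then spends the rest of its day on widgets.
human_haircuts = output["Human"]["haircuts"] * HOURS                         # 16.0
robot_hours_on_hair = (24.0 - human_haircuts) / output["Robot"]["haircuts"]  # 2.0
robot_widgets = output["Robot"]["widgets"] * (HOURS - robot_hours_on_hair)   # 60.0

# Same 24 haircuts, 16 extra widgets: the pair is richer with trade,
# even though the robot is better at everything.
print(baseline["widgets"], "->", robot_widgets)  # 44.0 -> 60.0
```

The human ends up doing the task the robot is relatively least dominant at, and total output rises — the robot’s finite hours (the quote’s “finite supply of silicon”) are what make the human’s labor worth allocating at all.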
We can go on like this forever, but the fault lines should be clear by now. It’s Harris, Alexander, Hanson, and others saying this time is different; this time, technology will finally drive us from productive employment; AI will be better than us at everything, versus Matt Ridley, random economists, and me saying this is the Luddite fallacy; technology will make us richer and allow us to acquire new desires and new ways to fulfill them; and trade will always be possible thanks to comparative advantage.
To bring this back to some of the objections above, when Harris says “There is no law of economics that guarantees that human beings will find jobs in the presence of every possible technological advance,” I think he’s wrong. Comparative advantage looks like that law.
But suppose Harris et al are right, and that robots are going to make all of us unemployed. Wouldn’t humans simply ignore the robots’ economy and trade with each other? It’s kind of hard to imagine a future that I don’t think is possible (a robot-dominated/human-free economy), but in that world, I don’t see how humans would lose the assets that we control, or our ability to labor for what we want. Are the robots going to trick us into trading our assets away for nothing, or is the fear that we’ll be at war with Skynet, like in the Terminator films? (I’m not going down that path now.)
V.
We can come up with example after example of technologies throughout history not putting us all out of work and making us fantastically wealthy, and the pro-robot side should (rightly) grow more and more persuaded with each example. But the neo-Luddites think they’ve said something important by saying that automation and AI are different from earlier technologies, so they fight every example thrown their way. I wonder whether there are any instances of the anti-robot side accurately identifying comparative advantage, yet still fearing that robots will destroy most human employment.
I had started this post with the intent to say that my pro-AI view was the important truth that (relatively) few people agreed with me on. But I actually think it’s comparative advantage that is more often ignored. People seem to be getting tripped up by the very plausible story that humans someday could be worse at everything than robots, as if that’s not the exact same story that economists have been telling us about comparative advantage for two hundred years.