I’d be surprised if Andreessen’s highly educated audience actually believes the lump of labor fallacy, but he goes ahead and dismantles it anyway, introducing—as if it were new to his readers—the concept of productivity growth. He argues that when technology makes companies more productive, they pass the savings on to their customers in the form of lower prices, which leaves people with more money to buy more things, which increases demand, which increases production, in a beautiful self-sustaining virtuous cycle of growth. Better still, because technology makes workers more productive, their employers pay them more, so they have even more to spend, so growth gets double-juiced.
There are many things wrong with this argument. When companies become more productive, they don’t pass savings on to customers unless they’re forced to by competition or regulation. Competition and regulation are weak in many places and many industries, especially where companies are growing larger and more dominant—think big-box stores in towns where local stores are shutting down. (And it’s not like Andreessen is unaware of this. His “It’s time to build” post rails against “forces that hold back market-based competition” such as oligopolies and regulatory capture.)
Moreover, large companies are more likely than smaller ones both to have the technical resources to implement AI and to see a meaningful benefit from doing so—AI, after all, is most useful when there are large amounts of data for it to crunch. So AI may even reduce competition, and enrich the owners of the companies that use it without reducing prices for their customers.
Then, while technology may make companies more productive, it only sometimes makes individual workers more productive (so-called marginal productivity). Other times, it just allows companies to automate part of the work and employ fewer people. Daron Acemoglu and Simon Johnson’s book Power and Progress, a long but invaluable guide to understanding exactly how technology has historically affected jobs, calls this “so-so automation.”
For example, take supermarket self-checkout kiosks. These don’t make the remaining checkout staff more productive, nor do they help the supermarket get more shoppers or sell more goods. They merely allow it to let go of some staff. Plenty of technological advances can improve marginal productivity, but—the book argues—whether they do depends on how companies choose to implement them. Some uses improve workers’ capabilities; others, like so-so automation, only improve the overall bottom line. And a company often chooses the former only if its workers, or the law, force it to. (Hear Acemoglu talk about this with me on our podcast Have a Nice Future.)
The real concern about AI and jobs, which Andreessen entirely ignores, is that while a lot of people will lose work quickly, new kinds of jobs—in new industries and markets created by AI—will take longer to emerge, and for many workers, reskilling will be hard or out of reach. And this, too, has happened with every major technological upheaval to date.
When the Rich Get Richer
Another thing Andreessen would like you to believe is that AI won’t lead to “crippling inequality.” Once again, this is something of a straw man: inequality doesn’t have to be crippling to be worse than it is today. Oddly, Andreessen kinda shoots down his own argument here. He says that technology doesn’t lead to inequality because the inventor of a technology has an incentive to make it accessible to as many people as possible. As the “classic example,” he cites Elon Musk’s scheme for turning Teslas from a luxury marque into a mass-market car, which, he notes, made Musk “the richest man in the world.”
Yet even as Musk became the richest man in the world by taking Tesla to the masses, and as many other technologies went mainstream, the past 30 years have seen a slow but steady rise in income inequality in the US. Somehow, this doesn’t seem like an argument against technology fomenting inequality.
The Good Stuff
We now come to the sensible things in Andreessen’s opus. Andreessen is correct when he dismisses the notion that a superintelligent AI will destroy humanity. He identifies this as just the latest iteration of a long-lived cultural meme about human creations run amok (Prometheus, the golem, Frankenstein), and he points out that the idea that AI could even decide to kill us all is a “category error”—it assumes AI has a mind of its own. Rather, he says, AI “is math—code—computers, built by people, owned by people, used by people, controlled by people.”
This is absolutely true, a welcome antidote to the apocalyptic warnings of the likes of Eliezer Yudkowsky—and entirely at odds with Andreessen’s aforementioned claim that giving everyone an “AI coach” will make the world automatically better. As I’ve already said: If people build, own, use, and control AI, they will do with it exactly what they want to do, and that could include frying the planet to a crisp.