My dear friend John Mulhall asks “Are you sympathetic to Tyler Cowen’s optimism about AI and technology in general?”
I am a first-principles kind of person, so let’s find a common starting point for how to think about technological change. One way I think of new technologies is as a new type of trade: I can now choose between exchanging money for a typewriter or for a digital word processor. On some dimensions I favor one, and on others the other. When a new technology passes the market test and is adopted by users, there must be some type of “gains from trade” occurring: the trade would not happen unless both parties perceived some real benefit along some dimension. Those gains from trade ripple out through the economy, but they do not do so equally. While writers and computer engineers might see gains, those gains show up only slowly and indirectly in regions that lack those industries.
In general, the aggregate social benefit from a new type of trade requires individuals to adjust their behavior to realize the benefit. For example, electric drills change construction, making workers and firms more productive and allowing them to be hired to do more for less total expense. Those are two gains: one to the firms using drills and the other to the people hiring electric-drill-users. The changes to those two groups constitute the basic sum of benefits, as in the toy accounting below.
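To make that sum concrete, here is a toy accounting in Python. Every number in it is hypothetical, invented purely for illustration; the point is only that the firm’s gain and the customer’s gain add up to the resource savings.

```python
# Toy accounting of the two gains from electric drills.
# All numbers are hypothetical, chosen only for illustration.

cost_per_job_hand = 40.0   # builder's cost per job with hand tools
cost_per_job_drill = 25.0  # builder's cost per job with an electric drill
price_before = 50.0        # price charged before drills spread
price_after = 40.0         # competition passes part of the savings along

# Gain to the firm using drills: its margin per job improves.
builder_gain = (price_after - cost_per_job_drill) - (price_before - cost_per_job_hand)

# Gain to the people hiring electric-drill-users: they pay less for the same job.
customer_gain = price_before - price_after

print(builder_gain, customer_gain, builder_gain + customer_gain)
# 5.0 10.0 15.0 -- the total equals the resource savings (40 - 25)
```

However the price settles, the two gains always sum to the fifteen dollars of resources the drill saves.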
People and communities have to adapt to realize those benefits. Sometimes they choose not to. In that case they generally do not receive the positive spillovers from the technology to the same degree as others, though they still benefit indirectly, even when many levels removed. Many Amish communities famously use no electric-powered equipment when building. Nonetheless, they do purchase high-quality tools made by precision machining and electrification. So even their own agrarian-first production benefits from trade with a “high technology” society. Drills, screws, and the use of electricity to power the one and drive the other allowed America to accommodate all sorts of changes quickly. Suburbanization was made possible and cheap by precision machining!
A lot of dimensions of society are affected by even simple technologies; how much more so by more general ones. LLM-based AI tools built through reinforcement learning are a very general technology, and thus how much harder it must be to predict their net effects.
What are the spillovers and likely effects of AI? Many writers get bogged down in metaphor-making for AI. It’s “a transformative technology”: a machine gun, a replacement for people, a complement to people, a therapist, a girlfriend, a test maker, an essay grader, a medicine finder, a coder, a nuclear physicist. It’s like electricity, the printing press, the internet on steroids, a bicycle for the mind, a parrot of intelligence, intelligence itself, a new species! Arguing over analogies does not go anywhere. At least for me it hasn’t.
So I call it quits and go back to rehashing the two standard effects of new technology (in the broad sense). First, firms that cannot adapt to the higher-productivity way of doing things go out of business; second, new products are created as a result of the original innovation. In the first case, much of economic growth is caused by driving low-productivity organizations into the graveyard, and we all benefit from that, even if it sounds bad.
Here’s the basic story of why. When demand is fulfilled by a more productive firm, resources are used more efficiently. When resources are used more efficiently, savings are freed up. The savings caused by an increase in productivity do not go to waste, and they are not hoarded by the capitalist dragon Smaug. They are used elsewhere.
And sometimes, as a result of productivity and innovation, better-quality products can be crafted. Few of us can easily predict a priori whether something will lead to new and improved products, and we can predict even less what those products will be. But at its most basic, this is what economic progress is: better use of resources and innovative uses of resources. And it’s a good thing.
Now, I always do wonder what the effect of a technology on society will likely be. I am given to speculation like that. But asking about a technology’s effect on society in general is a curious and overbroad question. Society is not a monolith. There are different age groups, classes, subcultures, ideological groupings, and social networks, all constantly adapting to their circumstances together. As a whole, society is adaptive because it is made of many parts. In the uptake of a new technology there is both a diffusion process and an adaptation process. Different groups find different uses. Different groups put in different safeguards, based upon what they see as their responsibility. If you notice this feature of our society, that disruptions are temporary and that negative externalities elicit coordinated responses, then a better equilibrium than the status quo ex ante can be expected. Better, however, does not mean costless. On some dimension, for someone, somewhere, value is lost. If I were handwriting this, I could be outside in the warm summer air listening to the chirps of cicada-eating birds and the bestial groans of bird-eating cicadas, but instead I am inside. That is a cost of typing rather than handwriting, but it’s still a net gain.
This is my prior model of technology, and it allows me to be generally more optimistic than most others in our milieu. I believe in our collective ability to adjust to technological change, even if there is no formula to predict from the armchair exactly what the adjustment will ultimately look like. It’s a lived solution.
I do worry about blocking the adaptive process too much. Strong regulation from the top can prevent a synthesis of the old and the new. A desire for total control over a technology and its diffusion in order to do damage control often does more damage than control. Perhaps biblical translation in the 16th century is an example. Shut down society’s adaptive reflexes and you create a debt for a much more painful transition later. Which was more painful: China’s modernization or America’s? Could one have happened without the other, and would we want to go back?
In 1900, could Henry Adams have predicted the results of electricity? Or the effects of the window AC, the LP record, or the PC? Are not the social impacts of electricity found in the new organization and technology it allowed?
Or, if the printing press is more your style: did Erasmus know how science and politics would change as a result? Could he have drafted the perfect policies for the monarchies? How long did it take society to adapt to full literacy? We hardly know the answers to these questions even in retrospect. As W.H. Auden wrote, “Foresight as hindsight makes no sense.”
It doesn’t seem to me that we have a strong ability to forecast distributional or productivity changes from technology a priori. Too many things change in the process: the firms, the gains from the trade, and the new products created all depend upon human choices, ingenuity, and iteration. We are living through an evolutionary process without an inevitable endpoint.
However, this agnosticism abdicates too much responsibility and truly is too optimistic about human nature. So allow me to walk it back a bit with some guidelines.
If I want to characterize a view about some technology, the first thing to do is to learn what it is and how it works. Investigate what current users are paying for and how they are using it. If you can figure out what the average user, or better, what different clusters of users are paying for and how they are using it, you will get a factual understanding of what the technology is, rather than a merely theoretical one. Then you can just look at the current upside and downside uses. Then, insofar as you wish to advise and craft good policy, notice the particular destructive uses and look for ways to curb them without also destroying all positive value. Gesticulating at the bad is not a reason to slaughter them all and let God sort it out. We can almost always do better than blanket bans. We want to afford the different parts of society the chance to maximize the upsides and minimize the downsides of a technology. Consider too what happens as costs decline. Claims about distributional effects between rich and poor frequently do not stay true for long as costs come down.
NVIDIA CEO Jensen Huang said in a Stanford Graduate School of Business talk that we need the organizations that already regulate their fields to update their regulations to include AI as it relates to their mandates, not a single body that oversees it all. I share the spirit of the suggestion, though I will quibble: I believe in a need for standards of model robustness and model security. And eventually, the biggest problem with AI will not be misuse, which I think can be managed, but misalignment, which I worry cannot.
(I myself do in fact regulate AI and technology as the head of a school. It is part of our adaptive process as schools. I would prefer to make those calls within my community, though, and not have those decisions made for us.)
I think AI is an exceptional case, but if we just want to talk about automation technology in general, I strongly recommend the essay by economist Matt Clancy, “When the Robots Take Your Job,” which covers the economic challenges of automation. He explains the assumptions and implications of a few academic models that relate capital and labor to automatable tasks. In these models, new technology does not destroy wages; rather, it increases them so long as there is enough capital to employ the human labor that remains necessary. One big caveat is that if wages in some industries are driven to zero and the number of non-automatable tasks does not increase, there could be an economy-wide wage collapse. It is a simple model, but like many of these models, it is a great intuition pump; a toy version is sketched below.
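As an illustration, here is a minimal sketch in Python of a Zeira-style task model, in the spirit of (but not identical to) the models Clancy surveys. All parameter values are hypothetical. Output is Cobb-Douglas over a continuum of tasks; a fraction beta of tasks is automated and performed by capital, the rest by labor.

```python
# Minimal Zeira-style task model: a share `beta` of tasks is done by
# capital, the remaining (1 - beta) by labor. An illustrative sketch,
# not the exact model from Clancy's essay; K and L are hypothetical.

def wage(K: float, L: float, beta: float) -> float:
    """Marginal product of labor when a share `beta` of tasks is automated.

    With Y = (K/beta)**beta * (L/(1-beta))**(1-beta), the derivative
    dY/dL simplifies to ((1 - beta) * K / (beta * L)) ** beta.
    """
    return ((1 - beta) * K / (beta * L)) ** beta

K, L = 100.0, 1.0  # hypothetical capital stock and labor force
for beta in (0.2, 0.5, 0.8, 0.95, 0.999):
    print(f"automated share {beta:5.3f} -> wage {wage(K, L, beta):7.2f}")
```

Run it and the wage rises from about 3.3 at beta = 0.2 to about 13.1 at beta = 0.8, then falls toward zero as beta approaches 1. With abundant capital, automation raises the marginal product of the labor doing the remaining tasks; but if nearly everything is automated and no new human tasks appear, the toy model reproduces the economy-wide wage collapse in the caveat above.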
There are other, greater worries one could and should have about General AI, mainly the misuse of AI tools to create dangerous synthetic pathogens, but the economic worries above are a good starting point for our discussion. See Matt Clancy’s behemoth paper, “Returns to Science in the Presence of Technological Risk,” for a more rigorous take on the health and income risks, and the benefits, of science.
But the greatest worry of all is AI alignment: whether we can eventually make technologies that act as capable autonomous agents without completely disempowering or destroying humanity.
Nuclear weapons, biological pathogens, and autonomous AI systems are probably not good technologies. But in general, technology is good for human health and the cultivation of civilization.