Economic Life is About Choices, Not Just Tasks

I haven’t read Robin Hanson’s The Age of Em, but when I first heard about the book, it struck me as odd that Hanson thinks an endless supply of almost-free (slave) labor, in the form of emulated human intelligences stored on computers, would increase GDP by massive amounts. Bryan Caplan picks up on this issue in his review, but doesn’t express it in exactly the way I would (Caplan in quotes, Hanson in bold):

Robin’s arguments for his single craziest claim – global GDP will double every “month, week, day, or even faster” – are astoundingly weak.  Yes, Argument #1 has superficial appeal:

Special three-dimensional (3D) printers have been created that can print about one-half of their components in about 3 days of constant use (Jones et al. 2011). If the other half could be made just as fast, a 3D printer could self-replicate in a week. If the other half of the parts for a 3D printer took ten times longer to make, then a 3D printer could self-replicate in 5 weeks.

Together, these estimates suggest that today’s manufacturing technology is capable of self-replicating on a scale of a few weeks to a few months.
In the real world, however, there are literally hundreds of bottlenecks that radically retard this kind of growth.  Politically, something as simple as zoning could do the trick.  Robin will naturally appeal to selection – the em economy will launch in whatever country has the most em-positive regulatory environment.  But the most favorable political environments on earth still have plenty of regulatory hurdles – especially for technologies that pose a threat to reigning powers.  And politics aside, we should expect bottlenecks for key natural resources, location, and so on.  As an engineer, I’m sure Robin’s heard of Murphy’s Law.  Furthermore, if ems are bad at any crucial task, biological humans have to take up the slack, in their usual sluggish meat-space way.
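Before getting to the bottleneck objection, it’s worth seeing just how aggressive the quoted numbers are. Here is a minimal sketch of the arithmetic; the slowdown loop and the yearly compounding are my own framing, not from either author:

```python
# Quick check of the self-replication arithmetic quoted above.
# Assumption (mine, not from the text): total replication time is the
# time for the fast half plus the time for the slow half.

fast_half_days = 3          # half the components, per Jones et al. 2011
for slowdown in (1, 10):    # other half equally fast, or 10x slower
    total_days = fast_half_days + fast_half_days * slowdown
    print(f"slowdown {slowdown:2d}x -> ~{total_days} days "
          f"(~{total_days / 7:.1f} weeks) per replication")

# Hanson's headline claim: GDP doubling "every month, week, day, or faster".
# Even the slowest version, compounded over a year, implies:
print(f"monthly doubling -> {2**12:,}x GDP in one year")
```

The two loop iterations reproduce the “a week” and “5 weeks” figures from the quote; the last line shows why Caplan calls the claim crazy — a merely monthly doubling means a roughly four-thousand-fold economy within a year.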


Caplan’s last sentence waves at what I think is the real issue for artificial intelligence transforming economic life, but it doesn’t quite articulate it. Caplan is a libertarian, so he tends to view behavior by market actors and political behavior as fundamentally different things. But my contention would be that, thanks to automation, economic behavior is politics: not just in the sense of having to deal with the slings and arrows of regulation and law, but in the sense that the formation of coalitions and alliances, the generation of consensus and conditional agreement, the use of persuasion and the availability of threats, are what it is all about. That is, economics is choice, and so once automation takes care of most of the doing of things, it is left to people to make the choices, which is what they do in post-industrial work: generate the conditions under which choices can be made.

Hanson is, of course, in his Man from Mars blog Overcoming Bias, obsessed with all of these processes of alliance formation, signaling, and small-group politics that characterize human behavior. And I would guess that his book is interested in them, too, in how emulated intelligences would work and “live” together as well as individually. But while individual tasks might be aided by more artificial intelligence, economic behavior almost definitionally cannot be.

If economic behavior is about choice, as the first page of economics textbooks tells us, it requires individuals or corporate entities who are capable of choice. And if “ems” (the emulated intelligences that Hanson writes about) don’t have rights, they are not capable of choice, only of the execution of others’ choices.

This is semantic, but it’s not just semantic. If Pharaoh orders his slaves to build a pyramid, does the GDP of Ancient Egypt go up? Maybe from some knock-on effects (because the postcard painters on the River Nile can sell more postcards), but in and of itself? It seems to me that it doesn’t, until someone is capable of making choices in response to the slaves’ toil.


Can artificial intelligences who are deprived of rights make choices? They can execute their masters’ desires more or less closely, but they are still limited by those desires. And insofar as those desires conflict, it seems at least as plausible that artificial intelligence, if relatively easy to create and duplicate, will be used more for defense and counterdefense, for endless cycles of increasingly sophisticated parries and blocks, for coevolutionary arms races of hosts and parasites, virtual predator and prey, as for the cooperation and agreement that characterize economic growth.

The alternative is to allocate rights and self-determination to the artificial intelligences. And we all know where that leads.





Hanson responds to Caplan as follows, which doesn’t convince me:

One could have similarly argued that fundamental growth bottlenecks must prevent the previous observed huge jumps in growth rates, such as from foraging to farming, or farming to industry. And plausibly related obstacles did prevent those eras from starting as soon as they might have. But eventually obstacles were overcome. No doubt our current economy tolerates many delays that would have to be cut to enable much faster growth, and the em economy won’t start as early as it might because of regulatory and other delays. My book is mainly about what happens once those obstacles are overcome. Does Bryan really think such obstacles could never be overcome? Even when doing so might quickly allow a city or nation to dominate the world? His “near-zero prior” seems to come not from any fundamental analysis but from his strong reliance on intuition; I suspect he would have similarly assigned a very low prior to manned flight in 1850, or to space flight in 1900.

I’m a strong believer in the inevitability of greater-than-human artificial intelligence, and have been since I read Marvin Minsky’s The Society of Mind when I was a kid. Hell, there are ways in which the internet has generated a super-intelligence already, simply by connecting existing human knowledge in a way that makes it accessible to search and iterative mapping and analysis. It’s not that we won’t have computers that can do things we can’t (we already do) and that perhaps will think greater or richer thoughts than are available to us. It’s that turning that capacity into an expanded or richer world isn’t simply a challenge of technology or even regulation or law.

For some reason, an image came to me just as I woke up this morning: a graph showing marginal returns to effort or work as an inverse function, with the top labeled “Friendship,” something like this:



It’s a somewhat embarrassingly sentimental and even vacuous image, but it does capture something I believe: that even as technology increases the returns to a small amount of effort (thus the inverse graph), it still requires cooperation, persuasion, and choice in order to be counted as increasing our individual or collective wealth.

8 thoughts on “Economic Life is About Choices, Not Just Tasks”

  1. I don’t know if it counts as necroposting on a blog rather than a forum, but I thought I should respond to this. Hanson by default DOES expect ems to have rights, with secret enslaved ems existing only in jurisdictions where regulations like minimum wages officially prohibit ems making subsistence wages (and those jurisdictions would probably be outcompeted by ones with unregulated em labor). If we imagine a future without Hanson’s biological “retirees” living off their accumulated capital while ems do all the work, it’s unclear who ems would be owned by (if not other ems).

    But even if we imagined a future with super-advanced tool AI instead of emulations, would that be economically richer? It would have greater capacity and expend greater energy (the criteria by which J Storrs Hall & Joseph Tainter evaluate societies/economies). And, relevant to Hanson’s logic, that capacity would presumably include the military capacity to defeat societies opposed to their technology.

    Returning to the hypothetical of a society entirely consisting of ems, Caplan might deem them equivalent to machines someone left running, since he puts more weight on metaphysics (also believing in things like free will and objective morality because they’re “common sense”). As far as Hanson is concerned they would act like people (except much faster), so he evaluates them the same as people.

