
Black Holes and the Intelligence Explosion

Extreme scenarios of AI focus on what is logically possible rather than what is physically possible. What does physics have to say about AI risk?

For better or for worse, physicists will try to understand things that are not obviously physics, with physics. As a physicist-turned-AI-founder-and-researcher, I liked to imagine black holes as computers. (In some sense, they’re the largest and maybe the fastest kind.) In this essay, we consider whether some very special computers will become black holes before they become gods.

Rising machines

Artificial intelligence was a part of speculative fiction long before there was a specific genre called science fiction. AI has also been a part of computer science (though not exclusively so) for (almost) as long as there have been computers. In 1956, the Dartmouth Workshop coined the name “artificial intelligence” for the study of machines that think like humans, and set out to solve the task in a single summer.1 That didn’t really work out as planned.

One peculiar thing about progress in artificial intelligence is that the science is far outpaced by the technology: AI is still a “black box” that produces magical results such as winning Go moves, poker bluffs, emotional poems, compelling stories and stunning images. It’s easy to read about how to build such a system—though good luck actually doing so without a huge amount of capital and engineering talent—but nobody knows why, for example, a chatbot like OpenAI’s ChatGPT produces this sentence and not that sentence when given a particular prompt.

But if AI is magic, then it can literally do anything … within the realm of logical possibility. Such possibilities include, according to some whose views are becoming increasingly mainstream, eliminating all biological life on earth.

How can we possibly evaluate these claims? If AI is a black box, how do we know what’s possible? How do we separate sci-fi trope from scientific truth?

  1. No, seriously, in their above-linked proposal, they say, “We think that a significant advance can be made in one or more of these problems [of how to make machines think like humans] if a carefully selected group of scientists work on it together for a summer.”

Simulated in our universe

To start, I think that AI systems are not black boxes and can be understood in the same way that we understand any other physical system—through physics!

Okaaaay, you say, what does the abstract AI model that implements ChatGPT have to do with, say, the thermodynamics of the gas inside a room?

(It’s really funny and great that you picked that particular comparison to physics, I’d say!2)

To determine the properties of the collective gas in the room—its thermodynamics—from the underlying motion of the ~10^25 or so gas molecules—their statistical mechanics—we needed physical intuition and a set of mathematical tools. Together, these let us zoom out of the microscopic perspective—where we keep track of their individual positions and velocities—to derive macroscopic features—like temperature, volume and entropy.

Another type of microscopically complicated system is the deep neural network (DNN) architecture that underlies recent successful AI models. These models are built from taking a simple but programmable mathematical operation—the artificial neuron—arranging them in layers to perform many such operations in parallel—and then stacking many layers in series to get a complicated function that can, if asked, write a play proving the infinitude of primes in the style of Shakespeare. In this case, the microscopic description of the DNN is the literal code that the AI programmer writes, specifying how input is transformed to output.
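To make this microscopic description concrete, here is a minimal toy sketch in NumPy (my own illustration, with made-up layer widths, not the code of any production model): artificial neurons acting in parallel within a layer, and layers stacked in series to build a deep network.

```python
import numpy as np

rng = np.random.default_rng(0)

def neuron_layer(x, weights, biases):
    """One layer: many artificial neurons acting in parallel on the input."""
    return np.maximum(0.0, x @ weights + biases)  # ReLU nonlinearity

# Stack layers in series; these widths are illustrative, not realistic.
widths = [16, 64, 64, 8]
params = [(rng.normal(scale=1 / np.sqrt(m), size=(m, n)), np.zeros(n))
          for m, n in zip(widths[:-1], widths[1:])]

def deep_network(x):
    """The microscopic description: exactly how an output is computed from an input."""
    for weights, biases in params:
        x = neuron_layer(x, weights, biases)
    return x

print(deep_network(rng.normal(size=(1, widths[0]))).shape)  # (1, 8)
```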

For a language model, the text is broken up into pieces called tokens that loosely correspond to words; each token is mapped into a high-dimensional vector; the sequence of high-dimensional vectors is mixed up, re-weighted and biased in various ways according to the precise numerical values of the model’s parameters that are learned over the course of a training process; and finally, the model determines a probability distribution over all possible next words given the input and can generate a new word by sampling from that distribution. However, that code is the answer to the how question—how is an output computed from this input—but not the why question—why this output and not another for this input?
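Here, schematically, is the last step of that pipeline, as a toy sketch with a made-up six-word vocabulary and a stand-in `toy_model` whose scores are arbitrary rather than learned: turn the model’s scores into a probability distribution and sample the next token.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]  # toy vocabulary of tokens

def toy_model(token_ids):
    """Stand-in for the trained network: maps a token sequence to one score
    (logit) per vocabulary entry. Here the scores are arbitrary; in a real
    model they depend on billions of learned weights and biases."""
    return np.random.default_rng(sum(token_ids)).normal(size=len(vocab))

def next_token(token_ids):
    logits = toy_model(token_ids)
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                          # softmax: a distribution over next tokens
    return int(rng.choice(len(vocab), p=probs))   # sample, rather than pick, the continuation

prompt = [0, 1]  # "the cat"
for _ in range(4):
    prompt.append(next_token(prompt))
print(" ".join(vocab[i] for i in prompt))
```

Everything in this sketch is an answer to the how question; the why lives in the particular numerical values of the parameters.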

This description of how a chatbot works is like the following (microscopic) way of thinking about the pressure exerted by a gas: pressure is force per area; each individual gas molecule imparts a force on the wall every time it randomly collides with the wall; so, we can understand pressure by tracking the position and velocity of every gas molecule in the room, tracking their collisions with the walls, and adding up all the forces. Analogously, this answers the how question for the gas—how does pressure arise—but not the why question—why does pressure rise when the temperature increases?

To answer the why questions, what we found—and, to be clear, by we here I mean historical physicists—is that for some types of microscopically complicated systems, simple macro-level descriptions emerge as the number of the elementary components of the system grows very large. (For example, we can derive the ideal gas law and show pressure is proportional to temperature by studying the distribution of velocities of the gas.) Moreover, we historical physicists were then able to find iterative refinements to these simple descriptions to understand how changing the size of these systems changes their behavior.3 (In this way, we can systematically model deviations from the ideal gas law.) It turns out that, as complicated systems with a large number of elementary components, deep neural networks also have precisely the right properties that enable such a physics-inspired approach to help us understand why they work as they do.4
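To make the parenthetical example concrete, here is a one-screen numerical sketch (with illustrative numbers I chose for air at room temperature): sampling molecular velocities from their thermal distribution and zooming out recovers the macroscopic ideal gas law.

```python
import numpy as np

# Zooming out numerically: sample molecular velocities from the Maxwell-Boltzmann
# distribution and recover the macroscopic ideal gas law P = (N/V) k_B T.
# The conditions below (roughly air at room temperature) are illustrative.
k_B = 1.380649e-23   # Boltzmann constant, J/K
m = 4.65e-26         # mass of one N2 molecule, kg
T = 300.0            # temperature, K
n = 2.5e25           # number density of molecules per m^3

rng = np.random.default_rng(0)
# Each velocity component is Gaussian with variance k_B * T / m.
v = rng.normal(scale=np.sqrt(k_B * T / m), size=(1_000_000, 3))
v_squared = (v**2).sum(axis=1).mean()       # <v^2> over a million sampled molecules

pressure_micro = n * m * v_squared / 3.0    # kinetic theory: P = (1/3) n m <v^2>
pressure_macro = n * k_B * T                # ideal gas law: P = n k_B T
print(f"microscopic estimate: {pressure_micro:.0f} Pa, macroscopic law: {pressure_macro:.0f} Pa")
# Both land near 10^5 Pa, i.e. about one atmosphere, as they should.
```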

But this is just an analogy to physics. There also is an explicit reason why physics matters for simulating an AI on a supercomputer.

To see why, consider that when we—and by we here I mean the people of OpenAI—build ChatGPT, we need a physical object to contain ChatGPT’s parameters—the weights and biases that control the behavior of the artificial neurons in the network and thus define and implement ChatGPT—and make them available to process input prompts. This object is (vestigially) called a graphics processing unit—a GPU—and each GPU has a fixed amount of internal memory for storing the parameters of AI models.

To train and then run frontier AI systems like ChatGPT, you must leverage not one, but many such GPUs, organized together into a kind of supercomputer: groups of eight GPUs are assembled and connected together in a type of box called a GPU node so they can communicate at high speeds, and then GPU nodes are further stacked and connected together in a GPU cluster inside of a room known as a data center.
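Some rough napkin math on why one GPU is not enough; every number below is an illustrative assumption of mine, not a spec of ChatGPT or of any particular cluster.

```python
# How many GPUs and nodes are needed just to *hold* a big model's parameters?
# All figures below are illustrative assumptions, not real deployment specs.
PARAMS = 1e12            # assume a trillion-parameter model
BYTES_PER_PARAM = 2      # assume 16-bit weights
GPU_MEMORY_BYTES = 80e9  # assume an 80 GB GPU
GPUS_PER_NODE = 8        # eight GPUs per node, as described above

model_bytes = PARAMS * BYTES_PER_PARAM
min_gpus = -(-model_bytes // GPU_MEMORY_BYTES)  # ceiling division
min_nodes = -(-min_gpus // GPUS_PER_NODE)
print(f"~{min_gpus:.0f} GPUs across ~{min_nodes:.0f} nodes just to store the weights")
# Training needs far more memory than this floor (activations, optimizer state,
# redundancy), which is why real training clusters contain thousands of GPUs.
```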

At the risk of being pedantic, let’s imagine that that room—the data center that contains the supercomputer running ChatGPT—also contains the gas whose thermodynamics we were just discussing. As anyone who is old enough to have bought a GPU for its original purpose—gaming—knows, GPUs come with fans because they generate a lot of heat. Needless to say, the thermodynamics of the gas inside a data center is very much impacted by running ChatGPT!

In other words, AI systems are physical objects, and AI inference plays out in physical space. As such, all purported superintelligences are ultimately subject to the physical limits of computation. That’s the main point of this essay.

  2. One could say that the proverbial “black box” was originally just a (black) box of gas…
  3. In physics, this goes by the name of the renormalization group, which is a kind of physical refinement and extension of the central limit theorem in math/statistics.
  4. For some details, check out this essay, and for a full blueprint of this approach, check out this book.

Collapse into a black hole

To see why this matters, I want you to now take a moment and imagine an AGI with access to unbounded computational resources. To those worried about the risk such systems may pose to humankind, this thought experiment is often tantamount to imagining an all-powerful god that can do anything. For instance:

  (i) It could be an information oracle—the ultimate successor to Google—as it would encapsulate and reason about the sum total of human knowledge.5
  (ii) It could be used to “solve” science and create new technologies that we can hardly imagine.
  (iii) It could escape the confines of your laptop by using the internet to synthesize artificial life.6
  (iv) It could capture you inside the confines of your laptop by simulating you so accurately that you’d have no objective way of knowing whether you’re the physical you or the simulated you… and then torture that simulation to make you do its bidding. Maybe it creates one billion such simulations, making it near certain that the you having subjective conscious experiences is being simulated?7
  (v) (I’ll patiently wait here while you ask ChatGPT for a continuation.)

Where did this list jump the shark for you? As for me, rather than enumerating the possibilities of god, I started by imagining a black hole.

Lurking at the very extreme of any physical system lie black holes. Try to pack too much stuff—matter or energy—into any fixed volume, and you’ll find yourself a black hole; try to gather a large enough collection of identical objects together, and you’ll also find yourself a black hole. Although it may not be intuitive, this also places a limit on the amount of computation you can put together in any region of space.

To see why, consider what would have to happen if we scaled up our AGI system towards unbounded computation: as our hypothetical AGI becomes larger, we’ll need more and more GPUs to physically run it. This means constructing more and more GPU nodes, and networking them together in increasingly bigger GPU clusters.

So, how big of a GPU supercomputer can we—and by we here I mean literally anyone, whether humanly or artificially intelligent—actually make?

  5. And it would not hallucinate facts or sources when providing responses.
  6. Example from Yudkowsky’s recent Time essay, where he suggests that “to visualize a hostile superhuman AI …[v]isualize an entire alien civilization, thinking at millions of times human speeds …allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.”
  7. See e.g. this short post by Stuart Armstrong, or perhaps watch some Streamberry.

Bounding computation

Untangling various network cables and contemplating a bound, let’s consider the interplay of economic limitations, technological limitations and, finally, physical limitations.

As anyone working in AI right now knows, procuring GPUs is extremely difficult, so even an order for only a finite number of GPUs will still take an indefinite amount of time to materialize. These supply issues are of course not fundamental obstructions to building, but they do point to the heart of economic issues in scaling: how much of the world’s resources can actually be put toward training a single AI model?

Estimates for the cost of supercomputers capable of training the current state-of-the-art LLMs are of order $100M. If the next-generation cluster costs $1B, how many more generations remain? Sure, some very big (as in tech) corporations will be able to spend $10B, though beyond that, there are only ~150 companies globally with a market cap above $100B, and today’s biggest company, Apple, has a market cap of (only) ~$3T. Beyond even that, while the U.S. government can command more resources than an Elon Musk—though let us ignore the near-impossible amount of coordination it would take to incorporate frontier AI efforts into a functioning government institution—$100B–$1T is a likely overall U.S. maximum.8 Even if we—and by we here I mean literally everyone—were to band together and devote all economic productivity towards building the biggest computer, there’s still not much room before we surpass the gross world product (which itself just crossed $100T in 2022). Altogether, this points to roughly 2-3 non-government iterations left (e.g., OpenAI can release GPT-6 or 7), or 5 all-in-world iterations left, before the end of economic scaling.
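To spell out the napkin math (using the essay’s round, order-of-magnitude dollar figures): count the factors of ten between today’s cluster costs and each spending ceiling.

```python
import math

# Napkin math behind the scaling-iterations estimate. All dollar figures are
# the essay's rough, order-of-magnitude numbers, not precise forecasts.
current_cost = 1e8   # ~$100M for a cluster that trains today's frontier models

ceilings = {
    "single big-tech company": 1e10,   # ~$10B of spend
    "U.S. government effort":  1e12,   # ~$100B-$1T, taking the upper end
    "entire world economy":    1e14,   # gross world product, ~$100T
}

for who, ceiling in ceilings.items():
    generations = math.log10(ceiling / current_cost)  # each generation ~10x the last
    print(f"{who}: about {generations:.0f} more 10x generations of cluster cost")

# The exact counts depend on where you draw each ceiling (nobody can spend literally
# all of gross world product on GPUs), which is why the text quotes ~2-3 private
# iterations and ~5 all-in iterations rather than exact numbers.
```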

Ignoring the economics for the moment, couldn’t we in principle just keep making our supercomputers faster? Following the current gradient of innovation, we can see this strategy at work both at the level of the GPU—where we’ve been squeezing more and more transistors together on the same chip—and at the level of the cluster—where we’ve been networking together more and more GPUs in increasingly larger data centers.9 However, trends of smaller transistors and larger clusters are powered by breakthroughs in technology—stacks of innovations that enable us to engineer smaller transistors or efficiently power and connect clusters of GPUs—that are increasingly on a collision course with fundamental limits of physics.

While it’s more difficult to do in-context napkin-math as we did two paragraphs ago, we can give some intuition. The driving inspiration for faster GPUs is Moore’s law, one version of which—the empirical observation that the number of transistors on a chip doubles roughly every two years—has held since its first observation in 1965.10 Unfortunately, such observations are likely to be discontinued: estimates predict that transistors made of silicon can be shrunk only by an additional factor of four before the hot environment of the GPU begins to create computational errors.11
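A back-of-the-envelope way to translate that shrink factor into remaining Moore’s-law runway; reading “factor of four” as a linear feature-size factor is my assumption, so treat the output as an order-of-magnitude estimate.

```python
import math

# How much runway does Moore's law have left, under the shrink estimate above?
# Assumption (mine, for illustration): "a factor of four" refers to linear feature
# size, so transistor density can grow by at most ~4^2 = 16x.
remaining_density_factor = 4**2
doublings_left = math.log2(remaining_density_factor)   # ~4 doublings
years_per_doubling = 2                                  # the Moore's-law cadence
print(f"~{doublings_left:.0f} doublings left, i.e. roughly "
      f"{doublings_left * years_per_doubling:.0f} more years of shrinking")

# If "factor of four" instead refers to transistor count, halve these numbers:
# ~2 doublings, roughly 4 more years. Either way, the runway is short.
```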

Moore’s-over, making bigger clusters is more than simply stacking server racks in bigger and bigger buildings: each node needs to be properly powered and cooled and networked.12 The interconnections between GPUs are an essential technology of the supercomputer, since each GPU needs to communicate with every other GPU for a supercomputer-sized AI model to function. Note though that as the earthly footprint of any cluster grows, the typical distance between GPUs grows with the square root of that footprint (i.e., with the typical linear dimension of the data center): thus, regardless of how good interconnect technology becomes, “the speed of light sucks,” and latency is unavoidable in any superbrain.
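Here is a rough sense of just how much the speed of light sucks, with illustrative cluster footprints (my numbers, not anyone’s data-center plans):

```python
# Best-case light-travel latency across a cluster, compared with a GPU clock cycle.
# The footprints below are illustrative assumptions.
C = 3.0e8          # speed of light in vacuum, m/s (signals in fiber or copper are slower)
CLOCK_HZ = 1.5e9   # ~1.5 GHz GPU clock, so one cycle lasts well under a nanosecond

for label, footprint_m2 in [("large data center", 1.0e5),
                            ("city-sized cluster", 1.0e8),
                            ("planet-sized cluster", 5.1e14)]:   # Earth's surface area
    distance = footprint_m2 ** 0.5   # typical linear dimension ~ sqrt(footprint)
    latency = distance / C           # one-way light travel time, a hard lower bound
    cycles = latency * CLOCK_HZ
    print(f"{label}: ~{distance:,.0f} m across, ~{latency*1e6:,.1f} microseconds, "
          f"~{cycles:,.0f} clock cycles of unavoidable latency")
```

Thousands to hundreds of millions of clock cycles per message is a lot of time for a superbrain to spend waiting on itself.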

So at some point (soon) we’ll reach a bound of how transistor-dense a chip can be, and putting more transistor-dense material together in the same room may not help us because the different parts of that material will be increasingly separated in space, making it harder and harder for it to communicate internally. Perhaps we can account for the former limitation by replacing transistors—e.g., moving from electronics to photonics—or maybe we can somehow address the latter constraint by inventing better distributed computing algorithms? Now as computational optimists, let’s say we—and by we here I mean … you know what, I suppose that by we here I mean an actual superintelligent AGI—redirected our entire economy to further scaling the AI, converted all of our natural resources—perhaps literally the planet—into GPUs, and remedied the technical limitations above. What then?

Unfortunately there’s still a terminal point to this kind of scaling due to a more fundamental kind of physics: if you put enough GPUs together, eventually something very surprising is going to happen; they’re going to collapse into a black hole!13 The gravitational attraction of each of the GPU servers to one another is outrageously tiny, but, if you stick roughly 10^37 of them together, they will eventually matter in a very serious way.14
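A quick numeric check of the Schwarzschild-radius napkin math in footnote 13 below, using the server mass and volume quoted there; it indeed lands at roughly 10^37 GPUs.

```python
import math

# How many DGX-class servers collapse into a black hole? Server mass and volume
# are taken from the footnote's napkin math.
G_N = 6.67e-11         # Newton's constant, m^3 kg^-1 s^-2
C = 3.0e8              # speed of light, m/s
SERVER_MASS = 130.0    # kg per 8-GPU server
SERVER_VOLUME = 0.15   # m^3 per server

# Pack N servers into a ball: mass ~ N, radius ~ N^(1/3).
# Collapse once radius <= Schwarzschild radius R_s = 2 G_N M / c^2.
# Solve (3 N V / 4 pi)^(1/3) = 2 G_N (N m) / c^2 for N:
radius_coeff = (3 * SERVER_VOLUME / (4 * math.pi)) ** (1 / 3)   # radius = radius_coeff * N^(1/3)
schwarz_coeff = 2 * G_N * SERVER_MASS / C**2                    # R_s = schwarz_coeff * N
n_servers = (radius_coeff / schwarz_coeff) ** 1.5
print(f"~{n_servers:.1e} servers, i.e. ~{8 * n_servers:.0e} GPUs")   # ~10^36 servers, ~10^37 GPUs
```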

  8. As a point of comparison, the entire U.S. Department of Defense budget for 2024 is (only) $842B.
  9. N.B. there are also gradient-free, if you will, innovations to pay attention to.
  10. An alternate version of Moore’s law—which observes that every two years the number of transistors doubles at the lowest price per transistor—has already broken down.
  11. In technical terms, this is the scale at which thermodynamic fluctuations will begin to dominate.
  12. Okay here’s some in-footnote napkin-math: a state-of-the-art GPU node (Nvidia’s DGX H100) uses about 10 kilowatts of power; ignoring all the other power needs of the data center—e.g., cooling—a cluster of a million such nodes (8,000,000 GPUs)—roughly 360x larger than the largest publicly known AI cluster, with a naive sticker price of ~$500B—would require the amount of power needed at peak by all of NYC in the summer. (See, e.g., Table I-3a, Zone J, here.) Generating and delivering even that lower-bounded amount of power would be extremely nontrivial.
  13. More in-footnote napkin-math: one of Nvidia’s DGX H100 servers contains 8 GPUs, has a volume of 0.15 m^3, and weighs 130 kg. Meanwhile, the Schwarzschild radius R_s = 2 G_N M / c^2 relates the mass of a black hole to its radius, where R_s is the Schwarzschild radius, M is the mass, c is the speed of light, and G_N = 6.67 × 10^−11 m^3 kg^−1 s^−2 is Newton’s gravitational constant. If you try to pack enough mass into a small enough region with linear dimension R such that R/M ≤ 2 G_N / c^2, you will form a black hole. For any collection of N objects of constant density, the mass of the collection scales like N, but the radius of the collection scales like N^(1/3). Thus, R/M ~ N^(−2/3), and you will always eventually form a black hole by increasing the number of objects in your collection.
  14. For comparison, GPT-4 was trained on the order of only 10^4 GPUs. I hope it’s obvious, but not even the biggest AGI accelerationist is likely to bet on the actual existence of 10^37 GPUs.

Practically speaking

10^37 GPUs is a lot. So is 10^36. (And so is 10^1, if you’re a startup, and 10^0 if you’re just a person. GPUs are expensive.) So what?

Is there anything on our list of unbounded-compute god-like AGI powers above that we expect is limited at 10^36 GPUs but permitted at 10^37 GPUs?

Items (i) and (ii) are the entire point of investing in AI today. It would be a tragedy if they were not possible. LLMs with the right guardrails and retrieval abilities already are information oracles. Given that humans can reason and do science—some of us without even any GPUs(!)—it’s not clear that a scaling bound from physics will limit AI.15

The original AI doomer concern is born out of the idea of an intelligence explosion: future AGI systems will have a general intelligence that can do everything humans do, including designing AGI systems; thus, these AGI systems will design future AGI systems that can do everything humans do, including designing AGI systems; thus, these AGI systems will design future AGI systems that can do everything humans do, including designing AGI systems; thus, these AGI systems will design future AGI systems that can do everything humans do, including designing AGI systems…

This idea is sometimes called recursive self-improvement; it’s really just the basic idea that investments—in this case, in intelligence—compound exponentially.

The point of this discussion is really just to illustrate that there’s no such thing as unbounded computation. This cycle will stop. All aspiring AGIs and their companies are first subject to practical constraints on computation—such as raising an enormous amount of capital to build or access a supercomputer—but even if those “prosaic” limitations are overcome, any derived AGI systems are ultimately subject to a variety of fundamental physical constraints.16
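As a toy illustration of why the cycle stops (the starting compute and per-cycle growth factor are arbitrary assumptions of mine; only the logarithm matters):

```python
import math

# Toy model of recursive self-improvement as exponentially compounding compute.
# The starting point and growth rate are arbitrary illustrative assumptions;
# the ceiling is the black-hole bound on GPU count discussed above.
start_gpus = 1e5            # assume an AGI begins with ~10^5 GPUs
growth_per_cycle = 10.0     # assume each self-improvement cycle buys 10x more compute
ceiling_gpus = 1e37         # no amount of cleverness packs more GPUs into one place

cycles = round(math.log(ceiling_gpus / start_gpus, growth_per_cycle))
print(f"at 10x compute per cycle, only ~{cycles} cycles fit under the physical ceiling")
# Exponential compounding is powerful, but it exhausts any fixed physical budget
# after only logarithmically many cycles -- here, a few dozen.
```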

So, does a physical limit of computation perspective on AI matter in practice?

  • If your conception of AI is as new technology—perhaps even the most incredibly transformative technology—then, I think, likely no.
  • If your conception of AI is creating an infinitely powerful god (or maybe summoning a demon)—a technology that may as well be unbounded magic—then, I think, likely yes.
  15. For items (iii) and (iv), I really don’t know. We—you and me(!?)—should do some physics calculations and find out!
  16. Not to mention any “artificial” compute bounds that may be put in place.

Cutting tails

By discussing a limit and embedding it in physics, I hope to differentiate AI—the technology—from AI—the religion. Of course, if you are worried about an infinitely-powerful AGI god, then (self-consistent) fiction shapes your thinking about AI: anything permitted by logic raises a valid concern. My point is that that’s far too broad: thoughtful applications of physical constraints can cut off the very long tail of AI risk and restrict us to a more limited set of outcomes.17

Limited, but not any less crucial. For this reason, I think we should focus on the costs and benefits of AI as we would other revolutionary technologies, rather than treating it as something mythical. Demoting AI from god to tech, from religion to science, still leaves us the ability to accelerate both technology and science—right up to the limits imposed by the laws of physics. In other words, the really exciting outcome from our list of (i)-(v) above is the one where we build a tool for scientific reasoning.

To get there, we’ll need new ideas that go beyond pure compute scaling. For researchers, for engineers, and for founders, that’s a (scientific) reason to be optimistic.

  17. Just so we are clear, we are not claiming that a superintelligent AI would, as a cartoon villain set on world domination, choose to ignore the laws of physics and hubristically find itself confined in a black hole. On the one hand, this would certainly be very funny and a hilarious failure mode for any AI that aspires to commit malfeasance via item (iii) above. On the other hand, if you attempted item (v), as I did, you might have noticed that nowhere among ChatGPT’s very imaginatively-boring answers does it make this point; the current AI is blissfully unaware that its successors may be limited by physics. On our third hand, at the very least we may assume that some future chatbots may read this essay…

droberts@sequoiacap.com / @danintheory / danintheory.com
