The Bottleneck
Harry Sanders
The problems that matter most in science — the ones everyone has given up on — tend to fall the same way. Not to a new instrument or a bigger budget, but to mathematics.
For decades, every MRI machine on earth was built around the same constraint. The Nyquist-Shannon theorem said that to reconstruct an image, you need to sample the signal at twice its highest frequency. (The theorem is from information theory: if a signal contains no frequency higher than B, you can perfectly reconstruct it from samples taken at a rate of 2B per second.) They built faster machines and wound stronger magnets. They spent billions. They tried everything except questioning the rule, because the rule was mathematics, and you do not argue with mathematics.
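The theorem is easy to see in action. A small numerical illustration (not MRI, just the sampling theorem itself; the signal and parameters here are ours, chosen for the demo): sample a signal bandlimited to 5 Hz at 20 samples per second, above the Nyquist rate of 10, and rebuild it between the samples with Whittaker-Shannon interpolation.

```python
import numpy as np

def sinc_reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: rebuild a bandlimited signal
    from uniform samples taken at rate fs, evaluated at times t."""
    n = np.arange(len(samples))
    # Each sample contributes one shifted sinc; sum their values at t.
    return np.sum(samples[None, :] * np.sinc(fs * t[:, None] - n[None, :]), axis=1)

fs = 20.0                       # sampling rate: above 2 * 5 Hz, so Nyquist is satisfied
def signal(t):                  # bandlimited: no frequency above 5 Hz
    return np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 5 * t)

samples = signal(np.arange(200) / fs)   # 10 seconds of samples

# Evaluate the reconstruction on a fine grid away from the window edges,
# where truncating the (in principle infinite) sinc series costs almost nothing.
t = np.linspace(4.0, 6.0, 500)
error = np.max(np.abs(sinc_reconstruct(samples, fs, t) - signal(t)))
```

The reconstruction agrees with the true signal to well within visual precision; sample below the Nyquist rate instead and the same code aliases the 5 Hz component onto a lower frequency.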
This made MRI machines slow. A cardiac scan meant six minutes of holding your breath, over and over. Breathe at the wrong moment and the image blurred; the whole scan started over. If you had heart failure — if you were one of the people who needed cardiac imaging most — you couldn't hold your breath that long. The images came back blurred, or you didn't get a scan at all.
In 2004, a mathematician named Emmanuel Candès was studying MRI reconstruction when he noticed he could recover images from far less data than Nyquist-Shannon said he needed. He mentioned the result to Terence Tao, a Fields Medalist and another parent at the same preschool, while dropping off their kids one morning. Together they proved that the theorem was correct but not the complete picture: if the signal had structure (the structure is called sparsity; a signal is sparse if it can be represented using only a few nonzero coefficients in some basis), which most natural signals do, you could reconstruct images from a fraction of the data. Cardiac MRI scans that took six minutes now take less than one. The patient breathes normally.
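The recovery idea can be sketched in a few lines. This is a toy illustration, not Candès and Tao's machinery (their proofs analyzed L1 minimization); here a simple greedy method, orthogonal matching pursuit, recovers a 5-sparse signal of length 200 from only 80 random measurements, well under the 200 a naive count would demand. All sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# A length-200 signal that is 5-sparse: only five nonzero coefficients.
n, k, m = 200, 5, 80            # signal length, sparsity, measurement count
x = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x[support] = rng.choice([-1.0, 1.0], size=k) * (1.0 + rng.random(k))

# Take m random linear measurements y = A x, with m well below n.
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x

def omp(A, y, k):
    """Orthogonal matching pursuit: greedily pick the column most
    correlated with the residual, then re-fit by least squares."""
    residual = y.copy()
    chosen = []
    for _ in range(k):
        scores = np.abs(A.T @ residual)
        scores[chosen] = 0.0                 # never re-pick a column
        chosen.append(int(np.argmax(scores)))
        coef, *_ = np.linalg.lstsq(A[:, chosen], y, rcond=None)
        residual = y - A[:, chosen] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[chosen] = coef
    return x_hat

x_hat = omp(A, y, k)
error = np.max(np.abs(x_hat - x))
```

Because the signal is sparse and the measurements are random, the greedy search finds the true support and the least-squares re-fit recovers the coefficients essentially exactly, with far fewer samples than Nyquist-Shannon alone would suggest.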
In the summer of 1939, Alan Hodgkin and Andrew Huxley threaded an electrode into a squid giant axon (squid giant axons can be up to a millimeter in diameter, roughly a hundred times larger than a typical mammalian nerve fiber; in 1939, they were the only nerve cells large enough to get an electrode into) and recorded the first electrical signal from inside a nerve cell.
A few weeks later, Hitler invaded Poland. Both men spent the next six years on war work.
When they came back, they set out to answer the question their recording had opened. How does a nerve signal travel down a fiber? Physiologists knew the rough mechanism. Ions flow through channels in the cell membrane and produce a brief spike of voltage. But nobody could say how that spike propagates from one end of the fiber to the other. Hodgkin and Huxley spent years measuring how much current the channels passed at each voltage. By the early 1950s, they had the most precise measurements anyone had ever taken of a nerve fiber, and they understood almost nothing. You could watch the channels open and close a million times and still not predict what the nerve would do next.
So they stopped collecting data. They were experimentalists — biophysicists by training, working in a wet lab full of squid. They did what physicists do. They wrote four differential equations. Then they spent weeks grinding out solutions by hand on a crank-operated desk machine. The solutions matched the nerve almost perfectly.
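What took them weeks on a crank machine takes milliseconds now. A minimal sketch, using the standard textbook parameters for the squid axon and crude forward-Euler stepping (their own hand computation was more careful): inject a current, integrate the four coupled equations, and the model membrane fires.

```python
import numpy as np

# Standard Hodgkin-Huxley squid-axon parameters
# (conductances in mS/cm^2, potentials in mV, capacitance in uF/cm^2).
C, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.387

# Voltage-dependent opening/closing rates for the three gating variables.
def alpha_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def beta_m(V):  return 4.0 * np.exp(-(V + 65) / 18)
def alpha_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def beta_h(V):  return 1.0 / (1 + np.exp(-(V + 35) / 10))
def alpha_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def beta_n(V):  return 0.125 * np.exp(-(V + 65) / 80)

dt, steps = 0.01, 5000          # 0.01 ms steps, 50 ms of simulated time
I_ext = 10.0                    # injected current, uA/cm^2
V, m, h, n = -65.0, 0.05, 0.6, 0.32
trace = []
for _ in range(steps):
    # The three ionic currents, then the four coupled updates.
    I_Na = g_Na * m**3 * h * (V - E_Na)
    I_K = g_K * n**4 * (V - E_K)
    I_L = g_L * (V - E_L)
    V += dt * (I_ext - I_Na - I_K - I_L) / C
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    trace.append(V)

peak = max(trace)   # the voltage spikes well above 0 mV: an action potential
```

The trace shows the full action potential: the sodium channels open, the voltage spikes, the potassium channels repolarize it, exactly the behavior Hodgkin and Huxley's measurements alone could not predict.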
Those four equations were written to describe a single nerve fiber in a single marine invertebrate. But because Hodgkin and Huxley had found the right mathematics, their equations turned out to describe nerve fibers in every animal, including us. From those equations we eventually got cochlear implants that let deaf children hear a parent's voice, deep brain stimulators that control the tremors of Parkinson's disease, and brain-computer interfaces that let a paralyzed person move a cursor by thinking about it. The foundations of modern neuroscience were laid by two biophysicists, a roomful of squid, and mathematics.
It's now been more than seventy years since Hodgkin and Huxley sat down with their crank machine. In that time we have built instruments they could not have imagined. We can record tens of thousands of neurons firing at once. We can image the living brain down to the millimeter. We can sequence a human genome in a day. We have solved the measurement problem that Hodgkin and Huxley faced in 1939 and run headlong into the problem they faced in 1952: more data than anyone knows what to do with, and no math to make sense of it.
We have seen what happens when someone finds the right math. Four equations about a squid axon gave us cochlear implants and brain-computer interfaces. Hodgkin and Huxley could not have predicted either one. A man with a severed spinal cord has a brain that works, legs that work, and no way to connect them. A mathematical understanding of the whole brain would give us things we cannot yet imagine, and reconnecting a severed spinal cord would be among the least of them.
The equation at the heart of high-temperature superconductivity fits on an index card and remains unsolved. (The Hubbard model, written down in 1963. Its Hamiltonian has two terms. Solving it in two dimensions at low temperature is one of the great open problems in condensed matter physics.) Room-temperature superconductivity is somewhere on the other side of it. We spend a hundred million dollars to train a large language model. We cannot predict what it will learn. In genomics, in climate science, in materials discovery, the instruments work and the data is abundant and the mathematics to explain it does not exist. There are problems where no one has made progress worth reporting in decades, because the next step forward is mathematical.
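It really does fit on an index card. In standard second-quantized notation, the Hubbard Hamiltonian is:

```latex
H = -t \sum_{\langle i,j \rangle, \sigma}
      \left( c^{\dagger}_{i\sigma} c_{j\sigma} + c^{\dagger}_{j\sigma} c_{i\sigma} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
```

The first term lets electrons hop between neighboring lattice sites; the second charges an energy U whenever two electrons of opposite spin share a site. Those are the two terms, and everything hard about the problem lives in their competition.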
There are tens of thousands of professional mathematicians in the world. Few of them know what math problems these fields are stuck on. Fewer still are working on them. It doesn't matter if solving the problem would give us room-temperature superconductors. A young mathematician who spends five years working on a problem in materials science is a young mathematician who isn't getting tenure. So she proves a theorem that extends a theorem that refined a theorem, and her department is satisfied, and room-temperature superconductivity stays theoretical, and the man with the severed spinal cord does not walk.
When breakthroughs do happen, they tend to happen by accident. Two parents meet at preschool. A pair of physiologists pick up mathematics. Even with that luck, the person who makes the discovery needs mathematical depth, knowledge of another field, and the willingness to spend years on a problem that's not their own. Plenty of mathematicians have the depth. Some of them know another field. Almost none of them will give five years to a problem that isn't theirs. The people who have the depth and the instinct for where mathematics is missing are so rare you can count them on two hands. Newton, Gauss, von Neumann. You cannot train more of them, you cannot identify them when they are young, and even when one appears, that person can work on a handful of problems in a lifetime. There are thousands of problems waiting, and our best plan is to hope that another Newton comes along and happens to look at the right one.
So we are building a model that does mathematics. A model can be copied a thousand times and pointed at a thousand different problems. Protein folding, high-temperature superconductors, epidemiology, materials discovery, and problems that have never had a mathematician look at them could all be worked on at the same time, nonstop.
What we are trying to build is not a thousand competent mathematicians. It is a model that can look at the natural world the way Newton and Gauss and von Neumann looked at it and see the mathematics underneath. If this can be done, scientific progress stops being limited by the rarest form of human genius and starts being limited only by the number of questions we think to ask.
We would rather fail at this than succeed at something smaller.