Anytime astronomers figure out a new way of looking for magnetic fields in ever more remote regions of the cosmos, inexplicably, they find them.

These force fields — the same entities that emanate from fridge magnets — surround Earth, the sun and all galaxies. Twenty years ago, astronomers started to detect magnetism permeating entire galaxy clusters, including the space between one galaxy and the next. Invisible field lines swoop through intergalactic space like the grooves of a fingerprint.

Last year, astronomers finally managed to examine a far sparser region of space — the expanse between galaxy clusters. There, they discovered the largest magnetic field yet: 10 million light-years of magnetized space spanning the entire length of this “filament” of the cosmic web. A second magnetized filament has already been spotted elsewhere in the cosmos by means of the same techniques. “We are just looking at the tip of the iceberg, probably,” said Federica Govoni of the National Institute for Astrophysics in Cagliari, Italy, who led the first detection.

The question is: Where did these enormous magnetic fields come from?

“It clearly cannot be related to the activity of single galaxies or single explosions or, I don’t know, winds from supernovae,” said Franco Vazza, an astrophysicist at the University of Bologna who makes state-of-the-art computer simulations of cosmic magnetic fields. “This goes much beyond that.”

One possibility is that cosmic magnetism is primordial, tracing all the way back to the birth of the universe. In that case, weak magnetism should exist everywhere, even in the “voids” of the cosmic web — the very darkest, emptiest regions of the universe. The omnipresent magnetism would have seeded the stronger fields that blossomed in galaxies and clusters.

Primordial magnetism might also help resolve another cosmological conundrum known as the Hubble tension — probably the hottest topic in cosmology.

The problem at the heart of the Hubble tension is that the universe seems to be expanding significantly faster than expected based on its known ingredients. In a paper posted online in April and under review with *Physical Review Letters*, the cosmologists Karsten Jedamzik and Levon Pogosian argue that weak magnetic fields in the early universe would lead to the faster cosmic expansion rate seen today.

Primordial magnetism relieves the Hubble tension so simply that Jedamzik and Pogosian’s paper has drawn swift attention. “This is an excellent paper and idea,” said Marc Kamionkowski, a theoretical cosmologist at Johns Hopkins University who has proposed other solutions to the Hubble tension.

Kamionkowski and others say more checks are needed to ensure that the early magnetism doesn’t throw off other cosmological calculations. And even if the idea works on paper, researchers will need to find conclusive evidence of primordial magnetism to be sure it’s the missing agent that shaped the universe.

Still, in all the years of talk about the Hubble tension, it’s perhaps strange that no one considered magnetism before. According to Pogosian, who is a professor at Simon Fraser University in Canada, most cosmologists hardly think about magnetism. “Everyone knows it’s one of those big puzzles,” he said. But for decades, there was no way to tell whether magnetism is truly ubiquitous and thus a primordial component of the cosmos, so cosmologists largely stopped paying attention.

Meanwhile, astrophysicists kept collecting data. The weight of evidence has led most of them to suspect that magnetism is indeed everywhere.

In the year 1600, the English scientist William Gilbert’s studies of lodestones — naturally magnetized rocks that people had been fashioning into compasses for thousands of years — led him to opine that their magnetic force “imitates a soul.” He correctly surmised that Earth itself is a “great magnet,” and that lodestones “look toward the poles of the Earth.”

Magnetic fields arise anytime electric charge flows. Earth’s field, for instance, emanates from its inner “dynamo,” the current of liquid iron churning in its core. The fields of fridge magnets and lodestones come from electrons spinning around their constituent atoms.

However, once a “seed” magnetic field arises from charged particles in motion, it can become bigger and stronger by aligning weaker fields with it. Magnetism “is a little bit like a living organism,” said Torsten Enßlin, a theoretical astrophysicist at the Max Planck Institute for Astrophysics in Garching, Germany, “because magnetic fields tap into every free energy source they can hold onto and grow. They can spread and affect other areas with their presence, where they grow as well.”

Ruth Durrer, a theoretical cosmologist at the University of Geneva, explained that magnetism is the only force apart from gravity that can shape the large-scale structure of the cosmos, because only magnetism and gravity can “reach out to you” across vast distances. Electricity, by contrast, is local and short-lived, since the positive and negative charge in any region will neutralize overall. But you can’t cancel out magnetic fields; they tend to add up and survive.

Yet for all their power, these force fields keep low profiles. They are immaterial, perceptible only when acting upon other things. “You can’t just take a picture of a magnetic field; it doesn’t work like that,” said Reinout van Weeren, an astronomer at Leiden University who was involved in the recent detections of magnetized filaments.

In their paper last year, van Weeren and 28 co-authors inferred the presence of a magnetic field in the filament between galaxy clusters Abell 399 and Abell 401 from the way the field redirects high-speed electrons and other charged particles passing through it. As their paths twist in the field, these charged particles release faint “synchrotron radiation.”

The synchrotron signal is strongest at low radio frequencies, making it ripe for detection by LOFAR, an array of 20,000 low-frequency radio antennas spread across Europe.

The team actually gathered data from the filament back in 2014 during a single eight-hour stretch, but the data sat waiting as the radio astronomy community spent years figuring out how to improve the calibration of LOFAR’s measurements. Earth’s atmosphere refracts radio waves that pass through it, so LOFAR views the cosmos as if from the bottom of a swimming pool. The researchers solved the problem by tracking the wobble of “beacons” in the sky — radio emitters with precisely known locations — and correcting for this wobble to deblur all the data. When they applied the deblurring algorithm to data from the filament, they saw the glow of synchrotron emissions right away.

The filament looks magnetized throughout, not just near the galaxy clusters that are moving toward each other from either end. The researchers hope that a 50-hour data set they’re analyzing now will reveal more detail. Additional observations have recently uncovered magnetic fields extending throughout a second filament. Researchers plan to publish this work soon.

The presence of enormous magnetic fields in at least these two filaments provides important new information. “It has spurred quite some activity,” van Weeren said, “because now we know that magnetic fields are relatively strong.”

If these magnetic fields arose in the infant universe, the question becomes: How? “People have been thinking about this problem for a long time,” said Tanmay Vachaspati of Arizona State University.

In 1991, Vachaspati proposed that magnetic fields might have arisen during the electroweak phase transition — the moment, a split second after the Big Bang, when the electromagnetic and weak nuclear forces became distinct. Others have suggested that magnetism materialized microseconds later, when protons formed. Or soon after that: The late astrophysicist Ted Harrison argued in the earliest primordial magnetogenesis theory in 1973 that the turbulent plasma of protons and electrons might have spun up the first magnetic fields. Still others have proposed that space became magnetized before all this, during cosmic inflation — the explosive expansion of space that purportedly jump-started the Big Bang itself. It’s also possible that it didn’t happen until the growth of structures a billion years later.

The way to test theories of magnetogenesis is to study the pattern of magnetic fields in the most pristine patches of intergalactic space, such as the quiet parts of filaments and the even emptier voids. Certain details — such as whether the field lines are smooth, helical or “curved every which way, like a ball of yarn or something” (per Vachaspati), and how the pattern changes in different places and on different scales — carry rich information that can be compared to theory and simulations. For example, if the magnetic fields arose during the electroweak phase transition, as Vachaspati proposed, then the resulting field lines should be helical, “like a corkscrew,” he said.

The hitch is that it’s difficult to detect force fields that have nothing to push on.

One method, pioneered by the English scientist Michael Faraday back in 1845, detects a magnetic field from the way it rotates the polarization direction of light passing through it. The amount of “Faraday rotation” depends on the strength of the magnetic field and the frequency of the light. So by measuring the polarization at different frequencies, you can infer the strength of magnetism along the line of sight. “If you do it from different places you can make a 3D map,” said Enßlin.
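The fit behind this method can be sketched in a few lines. In this illustrative example (not LOFAR's actual calibration pipeline), the polarization angle rotates with the square of the wavelength, chi = chi0 + RM * lambda^2, where the "rotation measure" RM tracks the magnetic field strength (weighted by electron density) along the line of sight; `fit_rotation_measure` is a hypothetical helper that recovers RM as the slope of a straight-line fit:

```python
def fit_rotation_measure(wavelengths_m, angles_rad):
    """Least-squares slope of polarization angle vs. wavelength squared.

    Returns (RM, chi0): the rotation measure in rad/m^2 and the
    intrinsic polarization angle at zero wavelength.
    """
    xs = [w ** 2 for w in wavelengths_m]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(angles_rad) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, angles_rad))
    den = sum((x - mean_x) ** 2 for x in xs)
    rm = num / den                 # slope: rotation measure, rad/m^2
    chi0 = mean_y - rm * mean_x    # intercept: intrinsic angle
    return rm, chi0

# Synthetic observations of a source with RM = 15 rad/m^2 at four
# low radio frequencies (long wavelengths, where rotation is largest).
true_rm, true_chi0 = 15.0, 0.3
lams = [0.5, 1.0, 1.5, 2.0]  # wavelengths in meters
angles = [true_chi0 + true_rm * lam ** 2 for lam in lams]

rm, chi0 = fit_rotation_measure(lams, angles)
print(rm)  # recovers ~15.0
```

Real measurements are messier: the angle is only known modulo 180 degrees, so working pipelines must resolve that ambiguity before fitting, which is one reason the faint signals are hard to tease out.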

Researchers have started to make rough Faraday rotation measurements using LOFAR, but the telescope has trouble picking out the extremely faint signal. Valentina Vacca, an astronomer and a colleague of Govoni’s at the National Institute for Astrophysics, devised an algorithm a few years ago for teasing out subtle Faraday rotation signals statistically, by stacking together many measurements of empty places. “In principle, this can be used for voids,” Vacca said.

But the Faraday technique will really take off when the next-generation radio telescope, a gargantuan international project called the Square Kilometre Array, starts up in 2027. “SKA should produce a fantastic Faraday grid,” Enßlin said.

For now, the only evidence of magnetism in the voids is what observers don’t see when they look at objects called blazars located behind voids.

Blazars are bright beams of gamma rays and other energetic light and matter powered by supermassive black holes. As the gamma rays travel through space, they sometimes collide with ancient microwaves, morphing into an electron and a positron as a result. These particles then fizzle and turn into lower-energy gamma rays.

But if the blazar’s light passes through a magnetized void, the lower-energy gamma rays will appear to be missing, reasoned Andrii Neronov and Ievgen Vovk of the Geneva Observatory in 2010. The magnetic field will deflect the electrons and positrons out of the line of sight, so the lower-energy gamma rays they subsequently produce won’t be pointed at us.

Indeed, when Neronov and Vovk analyzed data from a suitably located blazar, they saw its high-energy gamma rays, but not the low-energy gamma-ray signal. “It’s the absence of a signal that is a signal,” Vachaspati said.

A non-signal is hardly a smoking gun, and alternative explanations for the missing gamma rays have been suggested. However, follow-up observations have increasingly pointed to Neronov and Vovk’s hypothesis that voids are magnetized. “It’s the majority view,” Durrer said. Most convincingly, in 2015, one team overlaid many measurements of blazars behind voids and managed to tease out a faint halo of low-energy gamma rays around the blazars. The effect is exactly what would be expected if the particles were being scattered by faint magnetic fields — measuring only about a millionth of a trillionth as strong as a fridge magnet’s.

Strikingly, this exact amount of primordial magnetism may be just what’s needed to resolve the Hubble tension — the problem of the universe’s curiously fast expansion.

That’s what Pogosian realized when he saw recent computer simulations by Karsten Jedamzik of the University of Montpellier in France and a collaborator. The researchers added weak magnetic fields to a simulated, plasma-filled young universe and found that protons and electrons in the plasma flew along the magnetic field lines and accumulated in the regions of weakest field strength. This clumping effect made the protons and electrons combine into hydrogen — an early phase change known as recombination — earlier than they would have otherwise.

Pogosian, reading Jedamzik’s paper, saw that this could address the Hubble tension. Cosmologists calculate how fast space should be expanding today by observing ancient light emitted during recombination. The light shows a young universe studded with blobs that formed from sound waves sloshing around in the primordial plasma. If recombination happened earlier than supposed due to the clumping effect of magnetic fields, then sound waves couldn’t have propagated as far beforehand, and the resulting blobs would be smaller. That means the blobs we see in the sky from the time of recombination must be closer to us than researchers supposed. The light coming from the blobs must have traveled a shorter distance to reach us, meaning the light must have been traversing faster-expanding space. “It’s like trying to run on an expanding surface; you cover less distance,” Pogosian said.

The upshot is that smaller blobs mean a higher inferred cosmic expansion rate — bringing the inferred rate much closer to measurements of how fast supernovas and other astronomical objects actually seem to be flying apart.

“I thought, wow,” Pogosian said, “this could be pointing us to [magnetic fields’] actual presence. So I wrote Karsten immediately.” The two got together in Montpellier in February, just before the lockdown. Their calculations indicated that, indeed, the amount of primordial magnetism needed to address the Hubble tension also agrees with the blazar observations and the estimated size of initial fields needed to grow the enormous magnetic fields spanning galaxy clusters and filaments. “So it all sort of comes together,” Pogosian said, “if this turns out to be right.”

When James Maynard was three, a health visitor came to his home in Chelmsford, just northeast of London, to check on his development. Such visits were routine for young children, and the assessor led him through a standard battery of tests. There was just one problem: Maynard thought they were stupid.

So when she gave him a shape-sorting task, he intentionally put the shapes in a surprising order, then explained at length why his solution was more interesting than hers. And when she asked him what kind of animal the cow in his toy farm was, “he really enjoyed telling her that it was ‘sheep-sheep’ and watching her reaction,” wrote his mother, Gill Maynard, in an email. When he decided the assessment had gone on long enough, he declared it finished and pulled out his Legos.

“It was pretty memorable — a three-year-old demolishing this poor woman,” Gill Maynard said in an interview.

The assessor told his mother that James lacked discipline. “He’ll have real problems in school if he goes on like this,” she said.

Similar episodes cropped up throughout Maynard’s school years. There was the time when his physics teacher used a grading rubric Maynard thought was ridiculous: Correct answers without explanations or units earned only a third of the points. In protest, Maynard wrote the answers only, got them all correct, and scored 33%. “I think the teacher was probably pretty fed up with me,” he said.

“I was definitely one of those annoying kids who would say ‘Why? Why? Why?’ all the time,” Maynard said. He went through his school years “wanting to do my own thing, or at least wanting justifications for things.”

So perhaps it’s not surprising that in 2013, as a newly minted Ph.D., the 26-year-old Maynard simply shrugged when his postdoctoral adviser warned him off the problem he wanted to pursue, one of the most central questions about prime numbers (those whole numbers divisible only by 1 and themselves).

“I kind of said to him, ‘I hope you won’t work on this full time, because I’m really pretty confident you’re going to fail,’” recalled Andrew Granville, Maynard’s mentor at the University of Montreal at the time.

But Maynard “still had the courage,” said Dimitris Koukoulopoulos of the University of Montreal, “and just sat down and said, ‘OK, let me try this idea and see where it takes me.’”

Where it took him was a theorem that “prompted a major reevaluation” of how mathematicians think about the spacing between prime numbers, said Ben Green of the University of Oxford, where Maynard is now a professor.

Maynard is drawn to questions about prime numbers that are simple enough to explain to a high school student but hard enough to stump mathematicians for centuries. “There’s something very, very appealing to me about the contrast between being simple and fundamental, [and] still being just completely mysterious,” he said.

There’s an abundance of such questions, but fewer now than before Maynard appeared on the scene. For his early success was not a flash in the pan, but the first in a series of discoveries about prime numbers and related structures. Now seven years past his doctorate, Maynard is already considered one of the world’s leading number theorists.

He has had a “very steep upward trajectory in becoming a world-respected mathematician,” Green said.

Granville, who is writing a book on analytic number theory, complained that Maynard has greatly slowed his progress. “I’ve had to add about 150 extra pages because of him,” Granville said.

I sat down with Maynard in January at the Joint Mathematics Meetings in Denver, where he had come to receive the Frank Nelson Cole Prize in Number Theory. While this prize is officially awarded for a single notable paper, in Maynard’s case the prize committee couldn’t resist citing three papers, all of which appeared in top mathematics journals.

He had allocated only a day and a half for his Denver trip, but although we met just an hour after he’d arrived from the U.K., there was a spring in his step. “I’m still going on adrenaline,” he said, with a grin that broadened his narrow, boyish face. “The jetlag hasn’t hit me yet.” He smiled readily during our conversation, except when he had to pose for a snapshot. “People say I’m somehow unable to produce a suitable smile in photos,” he said, with a toothy grimace.

Time permitting, Maynard planned to wander the city taking photographs. He took up photography a few years ago to feel more connected to the many cities he visits for work, but it has turned into an obsession. “I went to Hong Kong in the summer and I was hiking out to take pictures with a tripod at dawn,” he said, even though normally he’s anything but a morning person.

Maynard has always had this obsessive streak, and as a child he passed through distinct dinosaur, geology and astronomy phases. “I’m very bad at being moderately interested in things,” he said. “I somehow have to be obsessive about it or I drop it entirely.”

Once Maynard becomes interested in a subject, he tends not to stop until he has reached the limit of his abilities, said his father, Chris Maynard. “But he’s not reached that point in mathematics yet. I think that’s, in a sense, what drives him.”

Even though everyone else in Maynard’s family was oriented toward the humanities — his parents are language teachers and his brother studied history — he always found himself on the path that offered the most mathematics. “At each stage, it just felt like the obvious next step, given how I was feeling at the time,” he said.

In graduate school at Oxford, his extraordinary mathematical strength became apparent. By the second half of his doctoral studies, said his adviser, Roger Heath-Brown, their meetings felt more like collaborations than a mentorship. “I’ve never had that feeling with a research student before,” he said.

By the time Maynard left Oxford for a one-year postdoctoral fellowship at the University of Montreal, he had started mulling over a potential approach to understanding the gaps between prime numbers. As a rule, primes get scarcer as you go out along the number line. But in some ways they also behave like a collection of random numbers, so mathematicians expect them to often be spaced much closer or farther than average. One of the most famous questions in mathematics is the twin primes conjecture, which posits that there are infinitely many pairs of primes that differ by only 2, like 11 and 13.
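The conjecture is easy to probe computationally even though it has resisted proof; a minimal sketch enumerates the twin prime pairs below 100:

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

# Twin primes: pairs of primes differing by exactly 2. The twin primes
# conjecture asserts that this list never ends.
twin_pairs = [(p, p + 2) for p in range(2, 100)
              if is_prime(p) and is_prime(p + 2)]
print(twin_pairs)
# [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```

Such pairs keep appearing as far as anyone has computed, but no amount of computation can settle whether they go on forever.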

Maynard suspected that it might be possible to make progress on understanding prime gaps using a method for filtering primes described in a paper from about a decade earlier. While mathematicians had already studied the method closely, Maynard thought it should be possible to extract still more juice from it. “I kept doing calculations and computations, and I kept on getting these sort of small signals that there was something there to be understood and discovered,” he said. “I somehow got completely obsessed by it, and I really wanted to keep on going until I could come up with a way of explaining what I’d seen.”

Granville, his postdoctoral adviser, discouraged Maynard from pursuing this path. “I didn’t really entirely believe it could possibly work, what he was doing,” Granville said. But “James was never really put off by me being very skeptical, in fact — he just laughed at it.”

Early in Maynard’s explorations, a seismic event occurred in the number theory world. An obscure mathematician named Yitang Zhang proved, not quite the twin primes conjecture, but the next best thing: He showed that there are infinitely many pairs of primes that are at most a bounded distance apart (70 million, to be precise). The finding won Zhang instant glory, in the form of multiple job offers (including one from the University of California, Santa Barbara, where he is now on the faculty), invited lectures, news stories and even a documentary.

Meanwhile, Maynard kept working on his own approach to understanding prime gaps, and about six months later, in a flash of insight, he came up with a completely independent, more powerful approach than Zhang’s — it established that there are infinitely many pairs of primes that differ by at most 600. And Maynard’s approach applied not just to pairs of primes, but to triples, quadruples and bigger collections (with different bounds for each). “The result seemed kind of amazing and too good to be true,” said Kannan Soundararajan of Stanford University.

And indeed, when Maynard first figured it out, his euphoria was quickly followed by a wave of fear that he had missed some obvious error. Fortunately, he said, “I feel like I work much more productively when suddenly I get terrified that my result is wrong. … Nothing motivates me quite as much as fear.”

Granville insisted that Maynard nail down every detail as he wrote up his result. “Nobody’s going to believe you, because nobody’s ever heard of you,” he told Maynard. “You have to write this so well that nobody can argue with you.”

The end result, Soundararajan said, was an “absolutely fantastic” proof.

Near the end of the process, something happened that might easily strike terror into the heart of a young mathematician: Maynard and Granville learned privately that another mathematician had come up with essentially the same result, in the same time frame. And not just any mathematician, but one of the most prolific and highly regarded mathematicians of the modern era — Fields medalist Terence Tao, of the University of California, Los Angeles. The problem had caught Tao’s eye when he and other mathematicians formed a massive collaboration to reduce the 70-million bound in Zhang’s proof.

Tao had been feeling quite proud of his new result when he learned that a little-known 26-year-old had proved the same thing. “And to be honest, the way he wrote it up, he actually had a cleaner result than I did,” Tao said. “He proved slightly stronger statements.” Tao generously refrained from announcing his own work to avoid overshadowing the achievement of such a young mathematician, knowing that if he and Maynard wrote a joint paper, many mathematicians would assume that Tao had done the lion’s share of the creative work.

It’s easy to imagine an alternative timeline in which Zhang proved his result six months after Maynard instead of six months before (in which case Tao’s explorations would presumably have been delayed or simply forestalled). All the glory would then have accrued to Maynard instead of Zhang. But Maynard doesn’t feel envy at how things played out. “When Zhang proved his result, I was just completely excited,” he said. “The main joy I get is from solving the problem. And so I really didn’t think too much at all about, ‘Oh, if only I had done this slightly differently.’”

Maynard often chooses to stroll from home to office without wearing his glasses. The gentle blurring helps him focus on mathematics, but it has sometimes led him to walk right past his partner, Eleanor Grant, without recognizing her. “And there was one time when he thought he saw me and was really, really proud that he’d seen me, and kind of ran up to this person who didn’t look anything like me,” said Grant, a doctor in Oxford.

Maynard conforms to the stereotype of the absent-minded professor in this and a handful of other ways. For instance, he nearly always wears the same style of clothing, an open-collared white shirt and jeans. “I’m obviously not the most fashion-orientated person,” he wrote in an email. (Once, as a prank, all the mathematicians attending one of his talks showed up wearing the Maynard uniform.)

But he also belies many of the clichés about introverted mathematicians. Colleagues call him warm, fun-loving and outgoing. Pre-pandemic, he brought his own coffee beans to work and brewed coffee for the other number theorists every day after lunch. And when he spent a semester at the Mathematical Sciences Research Institute in Berkeley a few years ago, the house he shared with two other young mathematicians was the “party house,” Heath-Brown said (though Maynard qualified that to “a party house by mathematician standards”).

Many of the new generation of number theorists are more social because of Maynard, Granville said. “He’s kind of the center of the group.”

After Maynard proved his theorem about small gaps between primes, number theorists hastened to apply his insights to other problems. But the biggest success in doing so, to date, has come from Maynard himself, who figured out how to address large prime gaps as well, improving upon estimates that had previously seen no significant progress in more than 75 years. Maynard’s adaptation of his method to this new scenario “was one of the cleverest tricks I’ve ever seen in number theory,” Granville said.

“I would say that anybody would be happy to have proved two theorems of this type over the course of their career,” Soundararajan said of Maynard’s results on small and large prime gaps. “The fact that he did it after he’d just finished grad school is quite remarkable.”

In a disconcerting echo of the small-gaps story, Tao once again came up with roughly the same result at roughly the same time (though on this occasion he collaborated with Green and two other co-authors). Since then, Maynard and Tao’s tendency to come up with similar results has become a running joke in the number theory community. When Tao solved another long-standing number theory problem a year or two later, “I remember being very paranoid,” he said, “and just asking Andrew [Granville], ‘I really hope James hasn’t scooped me again this time.’”

Since then, Maynard has given the number theory community ample proof that he is more than just a clone of one of the most famous mathematicians in the world. Last year, for instance, he and Koukoulopoulos settled a nearly 80-year-old question called the Duffin-Schaeffer conjecture that asks which infinite collections of denominators produce fractions that do a good job of approximating irrational numbers. “It’s just been the holy grail in a certain area of … approximation for a long time,” Granville said.

And a few years ago, Maynard tackled perhaps the ultimate easy-to-state but hard-to-prove question about prime numbers, proving that there are infinitely many primes that don’t have any 7s (or any other digit you might choose). While numbers without any 7s are plentiful if you’re looking at small numbers, they are almost vanishingly rare when you start looking at, say, 1,000-digit numbers, so showing that this sparse number set contains infinitely many primes is no simple matter. “This is something that people have wondered about for a long, long time, and no one got anywhere close to proving,” Heath-Brown said.
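The scarcity is easy to quantify: only about (9/10)^d of d-digit numbers avoid a given digit, so among 1,000-digit numbers roughly one in 10^46 is 7-free. A small illustrative count (nothing like Maynard's proof technique, which works at enormous sizes) shows that at small scales the restriction barely bites:

```python
def is_prime(n):
    """Trial-division primality test (fine for small n)."""
    if n < 2:
        return False
    f = 2
    while f * f <= n:
        if n % f == 0:
            return False
        f += 1
    return True

# Compare all primes below 10,000 with those containing no digit 7.
primes = [n for n in range(2, 10_000) if is_prime(n)]
no_seven = [p for p in primes if '7' not in str(p)]
print(len(primes), len(no_seven))  # the 7-free subset is smaller but still sizable
```

At 1,000 digits the 7-free numbers are so sparse that standard techniques for finding primes in a set say nothing, which is what made the theorem hard.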

This question makes sense in other bases besides 10, and Maynard started by coming up with a proof for very large bases. The bigger the base, the easier it is to prove a theorem of this sort, since if you’re in a base with, say, a million different numerals instead of just 0 to 9, a restriction like “no 7s” has a smaller impact. Maynard’s proof for large bases was “very elegant,” Granville said.

But Maynard became obsessed with proving his theorem in ordinary base 10. “Base 10 is somewhat arbitrary from a mathematical point of view, but … it’s the base that everyone talks about normally in daily life,” he said.

Starting with base 1,000,000, he kept reducing the base, first to 5,000, then 1,000, then 100. “It became almost a game with myself about how sophisticated an argument could I come up with,” he said. “It was almost like these betting machines or these games online that give you a little endorphin hit each time.”

He got stuck on base 12 for a long time — long enough to worry that the final goal would elude him. But eventually he made it to base 10. “I was very happy to just about drag myself over the line and then declare victory,” he said.

Maynard had to invent all kinds of new ideas to get to base 10. “This shows his utter, extraordinary, powerful strength as a mathematician,” Granville said.

This contribution and the rest have created a buzz of energy and anticipation among number theorists. “I’m not sure there’s anyone else in analytic number theory at the moment that I would think is generating more excitement,” Heath-Brown said.

“People are wondering, ‘What’s he going to do next?’” he said. “Everything seems possible.”

While much about the COVID-19 pandemic remains uncertain, we know how it will likely end: The spread of the virus will start to slow, and eventually cease altogether, because enough people have developed immunity to it. At that point, whether the immunity comes from a vaccine or from people catching the disease, the population has developed “herd immunity.”

“Once the level of immunity passes a certain threshold, then the epidemic will start to die out because there aren’t enough new people to infect,” said Natalie Dean of the University of Florida.

While determining that threshold for COVID-19 is critical, a lot of nuance is involved in calculating exactly how much of the population needs to be immune for herd immunity to take effect and protect the people who aren’t immune.

At first it seems simple enough. The only thing you need to know is how many people, on average, are infected by each infected person. This value is called R_{0} (pronounced “R naught”). Once you have that, you can plug it into a simple formula for calculating the herd immunity threshold: 1 − 1/R_{0}.

Let’s say the R_{0} for COVID-19 is 2.5, meaning each infected person infects, on average, two and a half other people (a common estimate). In that case, the herd immunity threshold for COVID-19 is 0.6, or 60%. That means the virus will spread at an accelerating rate until, on average across different places, 60% of the population becomes immune.
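The arithmetic of the article's formula is a one-liner; this sketch just evaluates 1 − 1/R_{0} for a few values:

```python
def herd_immunity_threshold(r0):
    """Fraction of the population that must be immune: 1 - 1/R0."""
    return 1 - 1 / r0

print(herd_immunity_threshold(2.5))  # ≈ 0.6, i.e. 60%
print(herd_immunity_threshold(2.0))  # ≈ 0.5: a milder virus needs less immunity
print(herd_immunity_threshold(5.0))  # ≈ 0.8: a more contagious one needs more
```

The formula captures the key intuition: the threshold is reached when each infected person can find, on average, only one susceptible person to infect, so the epidemic stops growing.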

At that point, the virus will still spread, but at a decelerating rate until it stops completely. Just as a car doesn’t come to a stop the moment you take your foot off the gas, the virus won’t vanish the moment herd immunity is reached.

“You could imagine that once 60% of the population is infected, the number of infections starts to drop. But it might be another 20% that gets infected while the disease is starting to die out,” said Joel Miller of La Trobe University in Australia.

That 60% is also the threshold past which new introductions of the virus — say, an infected passenger disembarking from a cruise ship into a healthy port with herd immunity — will quickly burn out.

“It doesn’t mean you won’t be able to start a fire at all, but that outbreak is going to die,” said Kate Langwig of Virginia Polytechnic Institute and State University.

However, things quickly get complicated. The herd immunity threshold depends on how many people each infected person actually infects — a number that can vary by location. The average infected person in an apartment building may infect many more people than the average infected person in a rural setting. So while an R_{0} of 2.5 for COVID-19 may be a reasonable number for the whole world, it will almost certainly vary considerably on a more local level, averaging much higher in some places and lower in others. This means that the herd immunity threshold will also be higher than 60% in some places and lower in others.

“I think the range of R_{0} consistent with data for COVID-19 is larger than most people give credit to,” said Marc Lipsitch of Harvard University, who has been advising health officials in Massachusetts and abroad. He cited data indicating it could be more than twice as high in some urban settings as the overall U.S. average.

And just as R_{0} turns out to be a variable, and not a static number, the way people acquire their immunity also varies, with important implications for calculating that herd immunity threshold.

Usually, researchers think about herd immunity only in the context of vaccination campaigns, most of which assume that everyone is equally likely to contract and spread a disease. But in a naturally spreading infection, that’s not necessarily the case. Differences in social behavior give some people far more exposure to a disease than others, and biological differences also play a role in how likely people are to get infected.

“We are born different, and then these differences accumulate as we live different experiences,” said Gabriela Gomes of the University of Strathclyde in Scotland. “This affects how able people are to fight a virus.”

Epidemiologists refer to these variations as the “heterogeneity of susceptibility,” meaning the differences that cause some people to be more or less likely to get infected.

But this is too much nuance for vaccination campaigns. “Vaccines are generally not distributed in a population with respect to how many contacts people have or how susceptible they are, because we don’t know that,” said Virginia Pitzer of the Yale School of Public Health. Instead, health officials take a maximalist approach and, in essence, vaccinate everyone.

However, in an ongoing pandemic with no guarantee that a vaccine will be available anytime soon, the heterogeneity of susceptibility has real implications for the disease’s herd immunity threshold.

In some cases it will make the threshold higher. This could be true in places like nursing homes, where the average person might be more susceptible to COVID-19 than the average person in the broader population.

But on a larger scale, heterogeneity typically lowers the herd immunity threshold. At first the virus infects people who are more susceptible and spreads quickly. But to keep spreading, the virus has to move on to people who are less susceptible. This makes it harder for the virus to spread, so the epidemic grows more slowly than you might have anticipated based on its initial rate of growth.

“The first person is going to be likely to infect the people who are most susceptible to begin with, leaving the people who are less susceptible toward the latter half of the epidemic, meaning the infection could be eliminated sooner than you’d expect,” Lipsitch said.
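The effect Lipsitch describes can be illustrated with a toy two-group SIR-style simulation (a simplified sketch of the general idea, not any of the researchers' actual models; all names and parameter values here are illustrative). Both runs start with the same average susceptibility and the same R0 of 2.5, but in one run half the population is three times as susceptible as the other half:

```python
def herd_threshold(susceptibilities, weights, r0=2.5, gamma=0.2, dt=0.05, steps=20000):
    """Fraction ever infected at the moment prevalence peaks (i.e., when R_eff drops to 1)."""
    total_w = sum(weights)
    w = [x / total_w for x in weights]                 # population share of each group
    mean_s = sum(wi * si for wi, si in zip(w, susceptibilities))
    beta = r0 * gamma / mean_s                         # calibrate so the epidemic starts at R0
    S = list(w)                                        # susceptible fraction, by group
    I = 1e-6                                           # total infectious fraction (small seed)
    best_I, threshold = 0.0, 0.0
    for _ in range(steps):
        new = [beta * si * Si * I * dt for si, Si in zip(susceptibilities, S)]
        S = [Si - ni for Si, ni in zip(S, new)]
        I += sum(new) - gamma * I * dt
        if I > best_I:                                 # prevalence peaks exactly when R_eff = 1
            best_I, threshold = I, 1.0 - sum(S)
    return threshold

uniform = herd_threshold([1.0], [1.0])                 # everyone equally susceptible
varied = herd_threshold([1.5, 0.5], [0.5, 0.5])        # same average, split into two groups
```

The uniform population crosses herd immunity near the textbook 60%, while the mixed population does so noticeably earlier, because its most susceptible members are removed from the susceptible pool first.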

So how much lower is the herd immunity threshold when you’re talking about a virus spreading in the wild, like the current pandemic?

According to the standard models, about 60% of the U.S. population would need to be vaccinated against COVID-19 or recover from it to slow and ultimately stop the spread of the disease. But many experts I talked to suspect that the herd immunity threshold for naturally acquired immunity is lower than that.

“My guess would be it’s potentially between 40 and 50%,” Pitzer said.

Lipsitch agrees: “If I had to make a guess, I’d probably put it at about 50%.”

These are mostly just educated estimates, because it’s so hard to quantify what makes one person more susceptible than another. Many of the characteristics you might think to assign someone — like how much social distancing they’re doing — can change from week to week.

“The whole heterogeneity problem only works if the sources of heterogeneity are long-term properties of a person. If it’s being in a bar, that’s not in itself sustained enough to be a source of heterogeneity,” Lipsitch said.

Heterogeneity may be hard to estimate, but it’s also an important factor in determining the true herd immunity threshold. Langwig believes the epidemiological community hasn’t done enough to try to get it right.

“We’ve kind of been a little sloppy in thinking about herd immunity,” she said. “This variability really matters, and we need to be careful to be more accurate about what the herd immunity threshold is.”

Some recent papers have tried. In June the journal *Science* published a study that incorporated a modest degree of heterogeneity and estimated the herd immunity threshold for COVID-19 at 43% across broad populations. But one of the study’s co-authors, Tom Britton of Stockholm University, thinks there are additional sources of heterogeneity their model doesn’t account for.

“If anything, I’d think the difference is bigger, so that in fact the herd immunity level is probably a bit smaller than 43%,” Britton said.

Another new study takes a different approach to estimating differences in susceptibility to COVID-19 and puts the herd immunity threshold even lower. The paper’s 10 authors, who include Gomes and Langwig, estimate that the threshold for naturally acquired herd immunity to COVID-19 could be as low as 20% of the population. If that’s the case, the hardest-hit places in the world may be nearing it.

“We’re getting to the conclusion that the most affected regions like Madrid may be close to reaching herd immunity,” said Gomes. An early version of the paper was posted in May, and the authors are currently working on an updated version, which they anticipate posting soon. This version will include herd immunity estimates for Spain, Portugal, Belgium and England.

Many experts, however, consider these new studies — not all of which have been peer-reviewed yet — to be unreliable.

In a Twitter thread in May, Dean emphasized that there’s too much uncertainty around basic aspects of the disease — from the different values of R_{0} in different settings to the effects of relaxing social distancing — to place much confidence in exact herd immunity thresholds. The threshold could be one number as long as a lot of people are wearing masks and avoiding large gatherings, and another much higher number if and when people let their guard down.

Other epidemiologists are also skeptical of the low numbers. Jeffrey Shaman of Columbia University said that 20% herd immunity “is not consistent with other respiratory viruses. It’s not consistent with the flu. So why would it behave differently for one respiratory virus versus another? I don’t get that.”

Miller added, “I think the herd immunity threshold [for naturally acquired immunity] is less than 60%, but I don’t see clear evidence that any [place] is close to it.”

Ultimately, the only way to truly escape the COVID-19 pandemic is to achieve large-scale herd immunity — everywhere, not just in a small number of places where infections have been highest. And that will likely only happen once a vaccine is in widespread use.

In the meantime, to prevent the spread of the virus and lower that R_{0} value as much as possible, distancing, masks, testing and contact tracing are the order of the day everywhere, regardless of where you place the herd immunity threshold.

“I can’t think of any decision I’d make differently right now if I knew herd immunity was somewhere else in the range I think it is, which is 40-60%,” said Lipsitch.

Shaman, too, thinks that uncertainty about the naturally acquired herd immunity threshold, combined with the consequences for getting it wrong, leaves only one path forward: Do our best to prevent new cases until we can introduce a vaccine to bring about herd immunity safely.

“The question is: Could New York City support another outbreak?” he said. “I don’t know, but let’s not play with that fire.”

If you could shrink small enough to descend the genetic helix of any animal, plant, fungus, bacterium or virus on Earth as though it were a spiral staircase, you would always find yourself turning right — never left. It’s a universal trait in want of an explanation.

Chemists and biologists see no obvious reason why all known life prefers this structure. “Chiral” molecules exist in paired forms that mirror each other the way a right-handed glove matches a left-handed one. Essentially all known chemical reactions produce even mixtures of both. In principle, a DNA or RNA strand made from left-handed nucleotide bricks should work just as well as one made of right-handed bricks (although a chimera combining left and right subunits probably wouldn’t fare so well).

Yet life today uses just one of chemistry’s two available Lego sets. Many researchers believe the selection to be random: Those right-handed genetic strands just happened to pop up first, or in slightly greater numbers. But for more than a century, some have pondered whether biology’s innate handedness has deeper roots.

“This is one of the links between life on Earth and the cosmos,” wrote Louis Pasteur, one of the first scientists to recognize the asymmetry in life’s molecules, in 1860.

The researchers’ next task is to see if the handedness of real particles can actually cause the speedy mutation seen in their model. After they published their research, Globus approached David Deamer, a biologist and engineer at the University of California, Santa Cruz, for help. Impressed by her ideas, he suggested the simplest biological test he could think of: an off-the-shelf assay known as the Ames test that exposes a bacterial colony to a chemical to find out if the substance causes mutations. But instead of evaluating a chemical, the researchers plan to roast the microbes with beams of chiral electrons or muons.

Proof that the handedness of particles really can mutate microbes would strengthen their case that cosmic rays shoved our ancestors off the evolutionary starting block, but it still wouldn’t fully explain the uniform chirality of life on Earth. The theory doesn’t address, for example, how “live” organisms and “evil” organisms managed to materialize from a primordial smoothie containing both right- and left-handed building blocks.

“That is a very hard step,” said Jason Dworkin, a senior astrobiologist at the NASA Goddard Space Flight Center and an investigator with the Simons Collaboration on the Origins of Life, “but if this [theory] can provide a different mechanism, another Darwinian pressure, that’d be interesting.”

Even before genetic evolution enters the picture, another unknown process seems to handicap “evil” life. The simple amino acid molecules that form proteins also exist in “live” configurations favored by life and “evil” configurations that are not (although the preferred chirality for “live” amino acids is almost exclusively left-handed). Careful analysis of meteorites by Dworkin and others has found that certain “live” amino acids outnumber “evil” ones by 20% or more, a surplus they may have passed on to Earth. The excess molecules could be the lucky survivors of billions of years of exposure to circularly polarized light, a collection of beams all spiraling in the same direction that, experiments have shown, can destroy one type of amino acid slightly more thoroughly than the other.

But, like the cosmic rays, the beams of light have only a marginal effect: Light would have to pulverize untenably huge quantities of molecules to explain the excesses on its own, Dworkin said. So some other force may be at work as well.

Sasselov has encouraged Globus and Blandford to consider whether cosmic rays might join forces with polarized light to shape the amino acids on asteroids. On Earth, the doses of cosmic rays — which he likens to supersonic bullets — that would be needed to make a noticeable chiral difference might prove too lethal, he speculated. “You’re destroying so much of everything,” he said. “You may be left with the [correct] handedness, but essentially you’re shooting yourself in the foot.”

Ultimately, the fact that researchers struggle to find a theory that balances the rise of chirality against the destruction of biological materials suggests that our ancestors may have been lucky to find that fine line.

“There is something special about planets like the Earth that protect this kind of chemistry,” Sasselov said.

In mid-March, the mathematicians Joshua Greene and Andrew Lobb found themselves in the same situation: locked down and struggling to adjust while the COVID-19 pandemic grew outside their doors. They decided to cope by throwing themselves into their research.

“I think the pandemic was really kind of galvanizing,” said Greene, a professor at Boston College. “We each decided it would be best to lean into some collaborations to sustain us.”

One of the problems the two friends looked at was a version of a century-old unsolved question in geometry.

“The problem is so easy to state and so easy to understand, but it’s really hard,” said Elizabeth Denne of Washington and Lee University.

It starts with a closed loop — any kind of curvy path that ends where it starts. The problem Greene and Lobb worked on predicts, basically, that every such path contains sets of four points that form the vertices of rectangles of any desired proportion.

While this “rectangular peg problem” seems like the kind of question a high school geometry student might settle with a ruler and compass, it has resisted mathematicians’ best efforts for decades. And when Greene and Lobb set out to tackle it, they didn’t have any particular reason to expect they’d fare better.

Of all the different projects he was working on, Greene said, “I thought this was probably the least promising one.”

But as the pandemic surged, Greene and Lobb, who is at Durham University in England and the Okinawa Institute of Science and Technology, held weekly Zoom calls and had a quick succession of insights. Then, on May 19, as parts of the world were just beginning to reopen, they emerged in their own way and posted a solution.

Their final proof — showing the predicted rectangles do indeed exist — transports the problem into an entirely new geometric setting. There, the stubborn question yields easily.

“It’s sort of weird,” said Richard Schwartz of Brown University. “It was just the right idea for this problem.”

The rectangular peg problem is a close offshoot of a question posed by the German mathematician Otto Toeplitz in 1911. He predicted that any closed curve contains four points that can be connected to form a square. His “square peg problem” remains unsolved.

“It’s an old thorny problem that nobody has been able to crack,” Greene said.

To understand why the problem is so hard, it’s important to know something about the kinds of curves the square peg problem talks about, which matters for Greene and Lobb’s proof, too.

The pair solved a problem about closed curves that are both “continuous” and “smooth.” Continuous means they have no breaks. Smooth means they’re continuous and also have no corners. Smooth, continuous curves are the ones you’d likely draw if you sat down with pencil and paper. They’re “easier to get our hands on,” said Greene.

Smooth, continuous curves contrast with curves that are merely continuous, but not smooth — the type of curve that features in Toeplitz’s square peg conjecture. This type of curve can have corners — places where it veers suddenly in a different direction. One prominent example of a curve with many corners is the fractal Koch snowflake, which in fact is made of nothing but corners. The Koch snowflake, and other curves like it, cannot be analyzed using calculus and related methods, a fact that makes them especially hard to study.

“Some continuous [non-smooth] curves are really nasty,” Denne said.

But again, the problem Greene and Lobb solved involves curves that are smooth, and therefore continuous. And instead of determining whether such curves always have four points that make a square — a question that was solved for smooth, continuous curves in 1929 — they investigated whether such curves always have sets of four points that form rectangles of all “aspect ratios,” meaning the ratios of their side lengths. For a square the aspect ratio is 1:1, while for many high-definition televisions it’s 16:9.

The first major progress on the rectangular peg problem was made in a proof from the late 1970s by Herbert Vaughan. The proof initiated a new way of thinking about the geometry of a rectangle and established methods that many mathematicians, including Greene and Lobb, later picked up.

“Everybody knows this proof,” Greene said. “It’s kind of folklore and the sort of thing you learn over a lunch table discussion around the common room.”

Instead of thinking of a rectangle as four connected points, Vaughan thought of it as two pairs of points that have a particular relationship with each other.

Picture a rectangle whose vertices are labeled ABCD, clockwise from the top left. In this rectangle, the distance between the pair of points AC (along the diagonal of the rectangle) is the same as the distance between the pair of points BD (along the other diagonal). The two line segments also intersect at their midpoints.
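Vaughan's observation is elementary to verify with coordinates (a toy check; the helper name is ours): the diagonals of a rectangle share a midpoint and have equal length, while a generic quadrilateral's diagonals do not.

```python
import math

def diagonal_pairs_match(a, b, c, d):
    """Check Vaughan's property for vertices a, b, c, d listed in order around the shape:
    the two diagonals (a-c and b-d) share a midpoint and have equal length."""
    mid_ac = ((a[0] + c[0]) / 2, (a[1] + c[1]) / 2)
    mid_bd = ((b[0] + d[0]) / 2, (b[1] + d[1]) / 2)
    same_mid = math.isclose(mid_ac[0], mid_bd[0]) and math.isclose(mid_ac[1], mid_bd[1])
    same_len = math.isclose(math.dist(a, c), math.dist(b, d))
    return same_mid and same_len

# A 3-by-1 rectangle satisfies the property; skewing one corner breaks it.
print(diagonal_pairs_match((0, 0), (3, 0), (3, 1), (0, 1)))  # True
print(diagonal_pairs_match((0, 0), (3, 0), (3, 1), (1, 2)))  # False
```

The converse also holds: two pairs of points forming equal-length segments with a common midpoint are always the diagonals of a rectangle, which is what makes searching for such pairs equivalent to searching for rectangles.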

So if you’re looking for rectangles on a closed loop, one way to pursue them is to look for pairs of points on it that share this property: They form equal-length line segments with the same midpoint. And to find them, it’s important to come up with a systematic way of thinking about them.

To get a sense of what that means, let’s start with something simpler. Take the standard number line. Pick two points on it — say the numbers 7 and 8 — and plot them as a single point in the *xy*-plane (7, 8). Pairs of the same point are allowed too (7, 7). Now consider all possible pairs of numbers that can be extracted from the number line (it’s a lot!). If you were to plot all those pairs of points, you’d fill in the entire two-dimensional *xy*-plane. Another way of stating this is to say that the *xy*-plane “parameterizes,” or collects in an orderly way, all pairs of points on the number line.

Vaughan did something similar for pairs of points on a closed curve. (Like the number line, it’s one-dimensional, only it also curves in on itself.) He realized that if you take pairs of points from the curve and plot them — without worrying about which point is the *x* coordinate and which one is the *y* — you don’t get the flat *xy*-plane. Instead, you get a surprising shape: a Möbius strip, which is a two-dimensional surface that has only one side.

In a way this makes sense. To see why, pick a pair of points on the curve and label them *x* and *y*. Now travel from *x* to *y* along one arc of the curve while simultaneously traveling from *y* to *x* along the complementary arc. In doing so, you pass through all pairs of points on the curve, beginning and ending with the unordered pair (*x*, *y*). But you come back to where you started with your orientation flipped. This orientation-reversing loop of unordered pairs forms the core of a Möbius strip.

This Möbius strip provides mathematicians with a new object to analyze in order to solve the rectangular peg problem. And Vaughan used that fact to prove that every such curve contains at least four points that form a rectangle.

Greene and Lobb’s proof built on Vaughan’s work. But it also combined several additional results, some of which were only available very recently. The final proof is like a precision instrument, which has just the right combination of ideas to produce the outcome they wanted.

One of the first big ingredients of their proof appeared in November 2019 when a Princeton graduate student named Cole Hugelmeyer posted a paper that introduced a new way of analyzing Vaughan’s Möbius strip. This work involved a mathematical process called an embedding, in which you take an object and transplant it into a geometric space. Greene and Lobb would eventually take Hugelmeyer’s technique and move it into yet another geometric space. But to see what they did you first need to know what he did.

Here’s a simple example of what an embedding is.

Start with a one-dimensional line. Each point on the line is defined by a single number. Now “embed” that line in two-dimensional space — which is to say, just graph it in the plane.

Once you embed the line in the *xy*-plane, each point on it becomes defined by two numbers — the *x* and *y* coordinates that specify exactly where in the plane that point lies. Given this setup, you can then start to analyze the line using the techniques of two-dimensional geometry.

The exercise might seem complicated, but it paid quick dividends for Hugelmeyer. He embedded Vaughan’s Möbius strip in four-dimensional space, so that each point on the strip (representing an unordered pair of points on the closed curve) received a four-coordinate address: two coordinates for the pair’s midpoint, one for the distance between the two points, and one for the angle of the line segment connecting them. He then rotated the embedded strip, the way you could imagine holding a block in front of you and twisting it a bit to the left. The rotated Möbius strip was offset from the original, so the two copies intersected each other. (Because the rotation takes place in four-dimensional space, the exact way the two copies of the Möbius strip overlap is hard to visualize, but it’s mathematically easy to access.)

This intersection was critical. Wherever the two copies of the Möbius strip overlapped, you would find two pairs of points back on the original closed curve that formed the four vertices of a rectangle.

Why?

First, remember that a rectangle can be thought of as two pairs of points that share a midpoint and are an equal distance apart. This is exactly the information encoded in the first three values of the four-dimensional address assigned to each point on the embedded Möbius strip.

Second, it’s possible to rotate the Möbius strip in four-dimensional space so that you only change one of the coordinates in each point’s four-coordinate address — like changing the street numbers of all the houses on a block, but leaving the street name, city and state unchanged. (For a more geometric example, think about how holding a block in front of you and shifting it to the right only changes its *x* coordinates, not the *y* and *z* coordinates.)

Hugelmeyer explained how to rotate the Möbius strip in four-dimensional space so that the two coordinates encoding the midpoint between pairs of points remained the same, as did the coordinate encoding the distance between pairs of points. The rotation only changed the last coordinate — the one encoding information about the angle of the line segment between the pairs of points.

As a result, the intersection between the rotated copy of the Möbius strip and the original corresponded exactly to two distinct pairs of points back on the closed curve that had the same midpoint and were the same distance apart. Which is to say, the intersection point corresponded exactly to the four vertices of a rectangle on the curve.
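To make the bookkeeping concrete, here is a toy version of the four-coordinate address (the particular curve, an ellipse, and the coordinate ordering are our illustrative assumptions; Hugelmeyer's actual construction is more subtle):

```python
import math

def pair_address(p, q):
    """4D 'address' of an unordered pair of plane points: midpoint (two coordinates),
    separation distance, and the segment's angle (taken mod pi, since the pair is unordered)."""
    return ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2,
            math.dist(p, q),
            math.atan2(q[1] - p[1], q[0] - p[0]) % math.pi)

# Four points on the ellipse x^2/4 + y^2 = 1 that happen to form a rectangle:
t = 0.7
a, b = 2.0, 1.0
A = ( a * math.cos(t),  b * math.sin(t))
B = ( a * math.cos(t), -b * math.sin(t))
C = (-a * math.cos(t), -b * math.sin(t))
D = (-a * math.cos(t),  b * math.sin(t))

addr_AC, addr_BD = pair_address(A, C), pair_address(B, D)
# The two diagonal pairs agree in midpoint and distance (the first three coordinates)
# and differ only in angle (the fourth): exactly the coincidence a rectangle creates.
```

In this picture, a rotation that shifts only the angle coordinate slides each address along its fourth axis, so an intersection between the original and rotated copies pins down two pairs agreeing in the first three coordinates, and hence a rectangle.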

This strategy, of using an intersection between two spaces to find the points you’re looking for, has long been used in work on the square and rectangular peg problems.

“Where those [spaces] intersect is where you have the thing you’re looking for,” said Denne. “All of these proofs in the history of the square peg problem, a lot of them have that idea.”

Hugelmeyer used the intersection strategy in a four-dimensional setting and got more out of it than anyone before him. The Möbius strip can be rotated by any angle between 0 and 360 degrees, and he proved that one-third of those rotations yield an intersection between the original and the rotated copy. This fact turns out to be equivalent to saying that on a closed curve, you can find rectangles with one-third of all possible aspect ratios.

“Credit to Cole for realizing that you should think about placing the Möbius strip in four-dimensional space and having four-dimensional techniques at your disposal,” said Greene.

At the same time, Hugelmeyer’s result was provocative: If four-dimensional space was such a useful way to attack the problem, why would it only be useful for one-third of all rectangles?

“You should be able to get the other two-thirds, for goodness’ sake,” Greene said. “But how?”

Even before they were locked down by the pandemic, Greene and Lobb had been interested in the rectangular peg problem. In February, Lobb hosted a conference at the Okinawa Institute of Science and Technology that Greene attended. The two spent a couple of days talking about the problem. Afterward, they continued their conversation during a week of sightseeing in Tokyo.

“We didn’t stop talking about the problem,” Lobb said. “We were going to restaurants, cafes, museums, and every now and again we’d have a thought about the problem.”

They continued their conversation even after they were confined to their respective homes. Their hope was to prove that every possible rotation of the Möbius strip yielded an intersection point — which is equivalent to proving you can find rectangles with all possible aspect ratios.

Other types of geometric spaces make it possible to think about other types of constraints. The one that proved important in Greene and Lobb’s work is called a symplectic space.

This type of geometric setting first came up in the 19th century with the study of physical systems like orbiting planets. As a planet moves through three-dimensional space, its position is defined by three coordinates. But the Irish mathematician William Rowan Hamilton observed that at each point in a planet’s motion it is also possible to place a vector representing the planet’s momentum.

In the 1980s, a mathematician named Vladimir Arnold advanced the mathematical study of symplectic geometry. He understood that geometric spaces with a symplectic structure intersect themselves under rotation more often than spaces without such a structure.

This was perfect for Greene and Lobb, who wanted to solve the rectangular peg problem for all aspect ratios by proving that a rotated copy of the parameterizing Möbius strip also intersects itself a lot. So they began trying to embed the two-dimensional Möbius strip in four-dimensional symplectic space.

“There was this pivotal insight to look at the problem from the perspective of symplectic geometry,” Greene said. “That was just a game changer.”

By late April, Greene and Lobb had determined that it was possible to embed the Möbius strip in four-dimensional symplectic space in a way that conformed to the structure of the space. With that done, they could start to use the tools of symplectic geometry — many of which bear directly on the question of how spaces intersect themselves.

In May, Greene and Lobb happened to remember an interesting fact about the Klein bottle: It’s impossible to embed in four-dimensional symplectic space so that it doesn’t intersect itself. In other words, there’s no such thing as a nonintersecting Klein bottle that also conforms to the special rules of symplectic space. This fact was the key to the proof. “It was the magic bullet,” Greene said.

Here’s why. Greene and Lobb had already demonstrated that it’s possible to embed the Möbius strip in four-dimensional symplectic space in a way that follows the rules of the space. What they really wanted to know was whether every rotation of the Möbius strip intersects the original copy.

Well, two Möbius strips that intersect each other are equivalent to a Klein bottle, which intersects itself in this type of space. And if you rotate a Möbius strip so that the rotated copy doesn’t intersect the original copy, in essence you’ve produced a Klein bottle that doesn’t intersect itself. But such a Klein bottle is impossible in four-dimensional symplectic space. Therefore, every possible rotation of the embedded Möbius strip must also intersect itself — meaning every closed, smooth curve must contain sets of four points that can be joined together to form rectangles of all aspect ratios.

The conclusion, in the end, arrived like an avalanche.

“It is like setup, setup, setup, and then the hammer lands and the proof is done,” Denne said.

Greene and Lobb’s proof is a good example of how solving a problem often hinges on finding the right light in which to consider it. Generations of mathematicians failed to get a handle on this version of the rectangular peg problem because they tried to solve it in more traditional geometric settings. Once Greene and Lobb moved it into the symplectic world, the problem gave way with a whisper.

“These problems that were being thrown around in the 1910s and 1920s, they didn’t have the right framework to think about them,” Greene said. “What we’re realizing now is that they’re really hidden incarnations of symplectic phenomena.”

Sudden, radical transformations of substances known to humanity for eons, like water freezing and soup steaming over a fire, remained mysterious until well into the 20th century. Scientists observed that substances typically change gradually: Heat a collection of atoms a little, and it expands a little. But nudge a material past a critical point, and it becomes something else entirely.

The mathematical key to cracking “phase transitions” debuted exactly 100 years ago, and it has transformed the natural sciences. The Ising model, as it’s known, was initially proposed as a cartoon picture of magnets. It’s now so commonly used as a simple model of physical systems that physicists liken it to the fruit fly, biology’s model organism. A recently published textbook deemed the Ising model “the system that can be used to model virtually every interesting thermodynamic phenomenon.”

It has also penetrated far-flung disciplines well beyond physics, serving as a model of earthquakes, proteins, brains — and even racial segregation.

Here’s the story of how a toy model of magnetism demystified phase transitions, became ubiquitous in science and continues to help push the boundaries of knowledge today.

In 1920, in a world recovering from a global flu pandemic, a German physicist named Wilhelm Lenz set out to understand why heating a magnet past a certain temperature causes it to suddenly lose its attractive power, as Pierre Curie had discovered 25 years earlier. Lenz imagined a magnet as a lattice of little arrows, each pointing either up or down, representing atoms. (Atoms are intrinsically magnetic, with north and south poles, and thus can be thought of as having orientations.) Arrows influence their neighbors, attempting to magnetically flip them to match their own orientation.

At the critical temperature, islands of all sizes coexist, from dots to continents. Here, one arrow can flip another, distant arrow, despite their not being neighbors — an indication that the system’s macroscopic properties have detached from its microscopic details. This detachment is the magic of universality. All systems with the same number of dimensions and the same symmetries go through identical phase transitions, regardless of whether their microscopic parts are iron atoms, water molecules or little arrows.
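Lenz's lattice of little arrows is simple enough to simulate. The sketch below uses the standard Metropolis algorithm, a much later numerical technique rather than anything Lenz or Ising had, on a small two-dimensional grid; alignment survives well below the critical temperature (about 2.27 in natural units) and dissolves well above it:

```python
import math
import random

def ising_magnetization(L=20, T=1.5, sweeps=600, measure=100, seed=0, start_up=True):
    """Average |magnetization| of an L x L Ising lattice at temperature T (units J = k_B = 1),
    using Metropolis single-spin flips with periodic boundaries."""
    rng = random.Random(seed)
    if start_up:
        spins = [[1] * L for _ in range(L)]
    else:
        spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    total_m = 0.0
    for sweep in range(sweeps):
        for _ in range(L * L):            # one sweep = L*L attempted single-spin flips
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                  + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2 * spins[i][j] * nb     # energy cost of flipping the arrow at (i, j)
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                spins[i][j] = -spins[i][j]
        if sweep >= sweeps - measure:     # average over the final, equilibrated sweeps
            total_m += abs(sum(map(sum, spins))) / (L * L)
    return total_m / measure

cold = ising_magnetization(T=1.5)                   # well below T_c: strongly magnetized
hot = ising_magnetization(T=3.5, start_up=False)    # well above T_c: alignment washed out
```

Cooling the simulated lattice through the critical temperature reproduces, in miniature, the sudden loss of attractive power Curie measured in real magnets.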

Universality means that whenever researchers want to understand a situation with many interacting entities that can be described with opposing labels such as “up” and “down” or “present” and “absent,” they’ll probably start with Ising. “There’s a way in which the Ising model is the simplest solvable model,” said Frances Hellman, a condensed matter physicist at the University of California, Berkeley. “And that gets you a long way toward understanding.” Researchers can also extend the model to suit additional physical systems by letting the arrows spin freely in a plane, for instance.

But even as the Ising model transformed physicists’ understanding of materials, researchers hit a wall in their efforts to exactly solve the 3D version — that is, to find a tidy formula for how magnetized a 3D lattice of arrows becomes at any given temperature. Even Richard Feynman failed in his attempt to complete Ising’s original 1920 assignment.

Today, computers can simulate the 3D Ising model and approximate its critical exponents to a reasonable degree of accuracy, so there’s little urgency to find an exact solution. Yet a yearning persists. One collaboration of physicists announced in 2012 that in exploring the space of logically possible physics theories — where each point matches a set of critical exponents — they had located a region containing the exact critical exponents of the 3D Ising model. Since then the group has whittled down the zone further. In December, they applied their approach to explain a puzzling measurement of liquid helium on a 1992 space shuttle flight.

Grinding out additional decimal places of 3D critical exponents is beside the point, said Slava Rychkov, a physicist at the Institut des Hautes Études Scientifiques in France who is involved in the effort. Elsewhere on their map of possible physics theories live Ising extensions, theories of bizarro universes with exotic particles, and perhaps even the elusive quantum theory of gravity in the real universe. The Ising model represents one of the simplest places in this abstract “theory space,” and so serves as a proving ground for developing novel tools for exploring uncharted territory.

If the exact values of its critical exponents can be pinpointed, “it will happen through some completely unknown, completely new way of solving” theories, Rychkov said. “It will be necessarily a revolution.”

With 1.94 billion people, South Asia is home to almost exactly one-quarter of the world’s population. The region, comprising eight countries — Afghanistan, India, Pakistan, Bhutan, the Maldives, Bangladesh, Sri Lanka and Nepal — is extremely poor, densely populated and geographically close to China, where the SARS-CoV-2 virus originated. The COVID-19 pandemic was expected to be a “perfect storm” for the region. However, as of June 22, South Asia had reported a total of only 765,082 confirmed cases and 19,431 deaths, accounting for just 8.5% of global infections and 4.1% of world fatalities — even as the numbers in some South Asian countries have spiked drastically in the past few weeks.

Many reasons have been offered over the past few months as to why South Asia might be an outlier in the pandemic: its tropical climate, protection offered by a tuberculosis vaccine called Bacillus Calmette-Guérin (BCG), exposure to malaria, and a weaker strain of the virus in the Indian subcontinent. However, *Quanta* spoke to 15 global health and infectious disease experts, researchers and epidemiologists, all of whom warned that there is little scientific evidence to back most of these claims.

“There is currently no data to suggest better immunity among South Asian populations, no reason to believe the potentially small benefit of hot, humid weather will even be visible against a backdrop of a nearly 100%-susceptible population,” said Jessie Abbate, an infectious disease and epidemiology research scientist with the French Laboratory for Translational Research on HIV and Infectious Diseases and the geospatial informatics company Geomatys. “There is no evidence of any functional mutations resulting in different ‘strains’ of the virus anywhere (let alone strains with different rates of virulence), and a recent study has shown there to be no correlation between BCG vaccination and COVID-19 susceptibility.”

It is therefore very unlikely that the novel coronavirus will have any less of an effect on South Asian populations. On the contrary, Abbate said, high densities in overcrowded South Asian cities pose a huge problem for control measures that rely on social distancing.

To understand what is behind South Asia’s low COVID-19 numbers to date, consider Afghanistan. Its health minister estimated on March 24 that 80% of the war-torn nation’s population might be infected with COVID-19 within five months — resulting in more than 25 million cases with possibly 110,000 fatalities. This would represent a mortality rate of 0.4%, significantly lower than the 1.4% infection fatality rate observed in New York City, for example. As of June 22, however, only 29,143 confirmed cases and 598 deaths had been reported.

The low numbers do not depict the real situation, according to experts. Nicholas Bishop, a Kabul-based emergency response officer with the United Nations International Organization for Migration (IOM), said that based on extrapolations of testing data, the true number of COVID-19 cases in Afghanistan is in the millions, “as community-level transmission is significant across all 34 provinces.” Poor testing rates, inadequate health infrastructure and violent conflict, among other factors, have masked the actual extent of infection.

“Afghanistan is presently only testing 646 persons per million,” said Bishop. “This is one of the lowest rates in the world and explains why the total official confirmed-case count remains low. Testing is constrained by limited availability of tests and related materials such as RNA extraction kits, reagents, qualified lab technicians and rapid-response sample collection teams, who are further limited by escalating levels of conflict” — a reference to the more than 4,500 attacks in the nation since the beginning of the year by non-state armed groups.

South Asia appears to be an anomaly in the pandemic largely because of insufficient epidemiological data, according to experts. Epidemiological research has been fast-tracked because of the urgent need for answers about the swiftly moving COVID-19 pandemic, and that urgency has led to a deluge of data — but it ranges from statistically powered robust analyses to poorly designed studies fraught with methodological and ethical issues, said Seema Yasmin, an epidemiologist and director of the Stanford Health Communication Initiative.

“The conclusions from these studies are often made not by the scientists but by the public, who is presented with sometimes misleading headlines drawn from published data,” said Yasmin. “This can affect the public’s trust in science and the scientific process and has the potential to hamper public health efforts.”

Perhaps the most popular justification for the relatively low number of COVID-19 infections and deaths reported in South Asia is the region’s warm, humid climate. In April, for instance, the Trump administration announced findings from a new study that claimed the novel coronavirus loses potency with increased sunlight, heat and humidity. Experts object, however, that this is a correlation without a mechanism. The World Health Organization has also cautioned that high temperatures cannot prevent the COVID-19 disease.

Vikram Patel, a global health professor at Harvard Medical School, dismissed the influence of weather on the novel pathogen. “Certainly the spread of the virus in Mumbai, Chennai and New Delhi in the midst of peak summer heat and humidity pretty much settles the issue of whether climate neutralizes the virus,” he said.

Death counts are important because tracking the infection fatality rate is one of the more reliable ways to gauge the impact of COVID-19, said Prabhat Jha, an epidemiologist with the Dalla Lana School of Public Health at the University of Toronto. To determine the denominator for that calculation, “we need national random samples with a large sample size with antibody assays to determine who has been infected,” he said. However, he added, roughly eight out of every 10 of India’s approximately 10 million annual deaths occur at home, and even registered deaths carry no useful information on cause.

“The Indian Registrar General needs to do an updated survey of deaths,” said Jha. For COVID-19, the governments of India and other nations should be releasing anonymized data on cases daily and on deaths weekly, the way Singapore does, he said. “These data are part of the response.”

Experts believe that the COVID-19 numbers in South Asia are relatively low so far because early and stringent lockdowns delayed the disease’s peak in the region. For example, said Pratik Khanal, a research member of the Nepal Public Health Association, a nationwide lockdown was imposed in Nepal on March 24, a day after the second case was detected, and it was kept in place until June 14.

“The lockdown restricted the movement of people, led to the closure of domestic and international flights and closure of nonemergency services,” Khanal said. This bought time for the government to expand testing facilities from just one to 19 across the country, to trace the contacts of infected people, and to prepare health facilities to handle the uptick in cases. Nevertheless, Khanal continued, Nepal still lacks the necessary infrastructure to handle the epidemic, and the government should “coordinate with private sector hospitals for expansion of testing facilities and hospitalization of COVID-19 cases.”

Because of limited health spending, experts warn, most South Asian countries have crumbling health infrastructures that are ill equipped to handle the COVID-19 onslaught. Bangladesh, India, Pakistan, Nepal and Afghanistan do not have even one hospital bed per 1,000 people, while the Maldives has 4.3 beds per 1,000 people, Sri Lanka 3.6 and Bhutan 1.7. The numbers of available physicians are even more dire: Pakistan and Sri Lanka have one physician per 1,000 people and the Maldives has four. The remaining five countries do not have even a single physician per 1,000 people.

As South Asian countries ease their lockdown restrictions, many are witnessing a drastic rise in COVID-19 infections. For instance, the national lockdowns in Pakistan and Bangladesh were lifted on May 9 and May 30 respectively, and they loosened in India in early June. All three nations recently reported their highest single-day spikes in COVID-19 cases.

Experts warn that in South Asia, the pandemic is likely to develop along a trajectory similar to that seen in other affected nations. Abraham said that there might have been a lag in terms of the pace of virus transmission in South Asia, but in three to four years, “we will find that there’s pretty much uniformity across the globe.”

With COVID-19 infections and fatalities now on the increase, “South Asia should focus on health-system readiness to prevent, control and manage the COVID infections, as the socioeconomic consequences of the disease on the countries will be lethal,” Khanal said.

To that end, there is also a need for better epidemiological data, experts said. Abbate said that science always has room for hypotheses about atypical patterns in the spread of the coronavirus, but investigating the hypotheses before determining the actual extent of infection and its severity patterns — through adequate testing and reporting of cases and outcomes — “is a waste of time, effort and resources.”

Jha agreed and added, “We need not just love in the time of cholera, but data.”

“I don’t think I’m a gloomy person,” Katie Mack said. She just likes thinking about the end — the annihilation of Earth, the solar system, our galaxy and especially the universe. Apocalyptic topics that can put even these uncertain times into perspective. “The destruction of the whole universe: There’s nothing bigger and more dramatic than that,” she said.

Change is in the nature of her career. As she began her undergraduate studies at the California Institute of Technology, cosmologists were processing the 1998 discovery that some mysterious entity called “dark energy” was pushing galaxies apart from one another. While she was working toward her Ph.D. at Princeton University, the first results from the Wilkinson Microwave Anisotropy Probe (WMAP) came out, providing “our first really detailed accounting of the contents of the universe,” she said. “Since WMAP was partly led by people at Princeton, it was a big part of life there, and hugely exciting; I felt like I was right at the ground floor on some of the most exciting discoveries in cosmology.”

In her academic career — she’s currently on the faculty at North Carolina State University — she investigates the nature of dark matter, the physics of the early universe, the evolution of galaxies and the nature of black holes. But she’s most widely known as a science communicator and social media star. On Twitter, her @AstroKatie account has over 350,000 followers. And her upcoming book, *The End of Everything (Astrophysically Speaking)*, tells the stories of true-life astronomical apocalypses. (In an ironic twist, the release of her book about cosmic cataclysms had to be delayed until August because of the coronavirus pandemic.)

*Quanta Magazine* caught up with her via videoconference during her time as a Simons Emmy Noether Fellow at the Perimeter Institute in the spring. An edited and condensed version of the conversation follows.

Well, it’s not like every person on earth can suddenly become an epidemiologist or a doctor or a respiratory expert and put all of their energy into that. We can’t all just drop everything and become different people. I know that there are a lot of physicists trying to do some kind of disease modeling, trying to put their skills toward that effort. And I don’t know if anything will really come of that or not — but I think that there’s a limited scope for that sort of respecialization.

But in terms of “why think about the universe when the world is difficult and things are happening?” — Well, why have the arts, why have music? Why literature? It’s part of the human condition to be curious and to want to understand the universe in every possible way. Part of what makes us human is those kinds of questions, that kind of curiosity, that kind of joy. And I think that we can’t — we can’t just shear all that away because things are bad.

I think it’s just — it’s the biggest, most dramatic thing that you can think of. The destruction of the whole universe: There’s nothing bigger and more dramatic than that. I like those big questions. I like things that are kind of hard to imagine, but that could have consequences that are just impossibly huge. Those are a lot of fun.

I read *A Brief History of Time*, and I saw some documentaries about Hawking. And I was just so fascinated by all these mind-bending things — like black holes and the Big Bang and time travel — you know, really mind-boggling topics. And just the idea that these are things you can actually study and learn something about, and understand in a quantitative, concrete way — I thought that was amazing.

I did meet him, actually; I guess I was probably 15. Hawking was giving a talk at Caltech. And my mom took me and a friend of mine to go see the lecture. As we were leaving the lecture theater, somehow we ended up going the same direction that Hawking was. So we sort of ran into him on the walk to the car. And I was too shy to say anything. But my friend went up to him and said: “My friend would like to say something to you.” And so I went and I said something like, “I really admire your work.” And he said, “Thank you very much.” So that was my first encounter with him.

I am keenly aware of the responsibility that comes with having a platform, and having a voice, and then having an influence on people. And the more visibility you have, the more responsibility you have to ensure that what you’re saying is responsible and not harmful in whatever way. I’m very aware of that, and I worry about that all the time.

I’ve also had a few people tell me that — that I inspired them one way or another. And that’s been very affecting, certainly. And a little scary. And I’ve been told by some people that seeing a youngish woman in this field has given them the confidence, as women, to approach it themselves. I’ve heard that from a few young women in high school or college.

Those are the three possibilities that make sense in a universe without a cosmological constant, or without dark energy. But what we learned in the late ’90s was that there is something that is causing the expansion to accelerate. So it’s very hard to see how a big crunch would happen.

Yes. The heat death of the universe is the end state of a universe that’s ruled by accelerated expansion forever. Every gravitationally bound system — galaxies, clusters of galaxies — gets more and more isolated from one another. And then each one ends up alone, and everything else gets carried farther and farther away such that they lose contact. So in the ultimate future, if we’re heading toward a heat death, our little group of galaxies, the Local Group, will be isolated. We won’t be able to see other galaxies at some point. We won’t even see evidence of the Big Bang, because we won’t see anything else that’s out there. And as that carries on, eventually star formation halts, because there’s no new material being brought in. The stars you have burn out. A lot of things fall into black holes, then the black holes evaporate. Particles decay. And if you leave that alone long enough, eventually you get a universe where the only thing that’s left is a few strange particles and some radiation.

From a physics perspective, if you define the arrow of time to be the direction of increasing entropy, once you reach the heat death, the arrow of time ceases to exist. And if there’s no arrow of time, I don’t know what the point of time is anymore.

If you have a kind of dark energy where the energy density is not constant, but is increasing over time, then — then you get this, this thing called phantom dark energy. And that leads to this horrific destruction of the universe in a finite time. Dark energy will start to overwhelm the gravitational binding of every galaxy. And the dark energy in this room is going to start overwhelming the binding of the stuff in this room. It starts to pull apart things that should not otherwise be affected by the expansion of space. So space itself is destroyed, basically.

I think that people don’t take it seriously just because it’s hard to envisage a fundamental theory that would make that happen and be consistent with other things we assume to be true about the universe. But in terms of the data, we can’t rule it out, and we may never be able to rule it out.

Vacuum decay is my favorite, for sure, for a few reasons. One, because it is a very dramatic kind of idea. But also because it seems to come right out of left field.

Vacuum decay is the idea that the universe that we are living in is not fully stable. We know that when the universe started, it was in this very hot, dense state. And we know that the laws of physics change with the ambient temperature, the ambient energy. We see that in particle colliders. We see that if you have a collision of high enough energies, then the laws of physics are a little bit different. And so, in the very early universe — the first tiny few microseconds or whatever it was — it went through a series of transitions. And after one of those transitions, we ended up in a universe that has — that has electromagnetism and the weak nuclear force and the strong nuclear force and gravity. It created the laws of physics that we see today.

And if that’s true, then some disturbance of our current universe could end up sort of kicking the laws of nature into a different state in which the laws of physics are different, with different fundamental particles and fields. If that happens, then at the point in space where that occurs, you get a bubble of this “true” vacuum, this other vacuum state of the universe, forming. And that bubble would expand at about the speed of light throughout the universe and destroy everything it encounters.

Much more seriously, because it doesn’t violate any fundamental principles we know of. However, I would say that most physicists, although we take it very seriously, we mostly don’t believe it’s going to happen. And the reason for that is that the way you get to vacuum decay as a possibility is to say that the Standard Model of particle physics — our current understanding of how particle physics works — is the whole story. [*Editor’s note: Most physicists believe that **the Standard Model is not the whole story**.*] So a lot of people would say, you know, this is an interesting problem and we do take it very seriously — but we think it’s not going to happen.

It’s painless; you don’t see it coming. You don’t notice when it happens, because you can’t feel it. You can’t see it. And then you don’t exist anymore. It’s technically inconsequential, right? It’s hard to think of it as being too tragic, in that sense.

That’s a huge thing that I’ve wrestled with in the course of writing this book, and I don’t think I came to a solid conclusion. It’s different from a personal death, because people think about their own death and they think, well, I’ll live on in some way through my children or my great works, or just the impact I had on the people around me. There will be some legacy to my existence in some way. But if it’s the whole cosmos that’s ending, that is no longer true. I think there’s a point at which you did not matter. And I don’t think we have the emotional or philosophical tools to wrestle with that.

The cute puppy pictured below is the latest addition to my extended family. Dax is a “Pomsky” — part Pomeranian, part Siberian husky. His pedigree is the subject of our first puzzle.

Dax is certified to be 56% Siberian husky and 44% Pomeranian. Given that a cross between two purebreds is nominally considered to have an equal genetic mixture of both breeds, how is Dax’s unusual genetic makeup produced? What is the smallest number of generations needed to produce his genetic makeup to the nearest percentage point? (You must start with purebreds and cross their offspring only with each other or with purebred Poms or huskies.) If there is at least one purebred parent in every generation, are there more purebred huskies or Pomeranians in Dax’s ancestry?

Suppose Dax had a cousin, Max, who was 60% husky and 40% Pomeranian. What is the smallest number of generations it would take to produce a dog like Max using the same rules as in Puzzle 1? In general, what is the smallest number of generations needed to produce a Pomsky with any given percentage of “huskyness” to the nearest integer?
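
For readers who like to check their pencil-and-paper work, here is a brute-force sketch. It assumes the simplest reading of the rules: any two previously bred dogs (or a bred dog and a purebred) may be crossed, and each offspring inherits the exact average of its parents’ husky percentages. Under that assumption, the fractions reachable after g generations are the dyadic rationals k/2^g, since each such fraction is the average of two fractions with denominator 2^(g-1), both reachable a generation earlier.

```python
def min_generations(target_pct, max_gens=12):
    """Smallest number of generations after which some achievable
    husky fraction k / 2**g rounds to target_pct percent."""
    for gens in range(1, max_gens + 1):
        denom = 2 ** gens
        for k in range(denom + 1):
            if round(100 * k / denom) == target_pct:
                return gens, k, denom  # generations, and the fraction found
    return None

# Dax is 56% husky; his cousin Max is 60% husky
print(min_generations(56))
print(min_generations(60))
```

Running it reports the first generation count at which some achievable fraction rounds to the target percentage (no spoilers printed here).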

I mentioned above that an equal genetic contribution from both parents is only “nominally” true. In actual fact, only female offspring receive an equal mix of genetic material from both parents. Believe it or not, males of mammalian species receive more genetic material from their mothers than they do from their fathers! That’s because males inherit an X chromosome from their mother and the much smaller Y chromosome from their father. Females, on the other hand, inherit X chromosomes from both their parents. Both sexes also receive an equal quantity of genetic material as non-sex chromosomes (autosomes) from both parents.

The total quantity of genetic material in a genome can be measured by counting the number of DNA base pairs in it. All the cells of female dogs, except for egg cells, have two copies of each chromosome with a total of approximately 5 billion DNA base pairs. Of these, 50% go to each offspring through the egg. Female offspring receive an equal genetic contribution from their father’s sperm, including all the non-sex chromosomes and another X chromosome. For male offspring however, the sperm carries the rest of the chromosomes and the Y chromosome, which has about 100 million fewer base pairs than the X chromosome. Based on this information, what proportion of their genome do male dogs actually inherit from their mother? How does this affect the answer to Puzzle 1?
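
The arithmetic in that paragraph can be laid out in a few lines; every figure below comes from the puzzle statement above.

```python
total_female = 5.0e9         # base pairs in a female dog's full (diploid) genome
egg = total_female / 2       # the mother's contribution to any offspring
xy_gap = 1.0e8               # the Y chromosome is ~100 million bp smaller than the X

# A son's paternal contribution is autosomes + Y instead of autosomes + X
sperm_to_son = egg - xy_gap

maternal_share = egg / (egg + sperm_to_son)
print(f"{maternal_share:.2%}")  # prints "51.02%"
```

So male dogs inherit slightly more than half their genome from their mother, which is the wrinkle Puzzle 3 asks you to fold back into Puzzle 1.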

This is not just academic curiosity. The fact that women have two X chromosomes, one from each parent, while men have only one, has been theorized to be one reason why men are about twice as likely to die of COVID-19 as women worldwide. The X chromosome has genes that are involved in the immune response, and the double dose that women possess might give them a stronger, more diverse immune system.

Here are a couple of other thought-provoking questions about the COVID-19 pandemic that you might find interesting to consider.

COVID-19 is known to be particularly devastating to the very old. The following data from the CDC gives the breakdown of about 70,000 COVID deaths in the U.S. by age group. It shows that people who are 85 or older are the most vulnerable. In this data, more of the deceased were male than female: the ratio of male to female deaths was about 55:45, which is somewhat lower than in the rest of the world.

| Age Group | No. of Deaths |
|-----------|---------------|
| Under 1   | 3             |
| 1-4       | 2             |
| 5-14      | 7             |
| 15-24     | 76            |
| 25-34     | 463           |
| 35-44     | 1,186         |
| 45-54     | 3,338         |
| 55-64     | 8,312         |
| 65-74     | 14,447        |
| 75-84     | 18,621        |
| Over 85   | 22,543        |

However, as people get older, the number of years they can expect to live decreases. Which age group in the above table has lost the most years of life on account of COVID? Take a guess. In order to figure this out accurately, you have to use actuarial tables like this one, which shows that a 62-year-old man has an average life expectancy of another 20 years, while an 87-year-old can only expect to live another five years on average. Does the final answer surprise you?
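
One way to organize the calculation: multiply each age group’s death count by the average years of life remaining at that age. The death counts below come from the table above, but the remaining-years figures are loose illustrative placeholders (anchored only to the two data points quoted in the paragraph), not values from a real actuarial table; swap in actual table values to get the puzzle’s answer.

```python
# Deaths per age group, from the CDC table above
deaths = {"Under 1": 3, "1-4": 2, "5-14": 7, "15-24": 76, "25-34": 463,
          "35-44": 1186, "45-54": 3338, "55-64": 8312, "65-74": 14447,
          "75-84": 18621, "Over 85": 22543}

# PLACEHOLDER average remaining-life-expectancy guesses (illustrative only;
# a real solution needs values read off an actuarial table)
remaining_years = {"Under 1": 79, "1-4": 78, "5-14": 71, "15-24": 62,
                   "25-34": 53, "35-44": 43, "45-54": 34, "55-64": 22,
                   "65-74": 15, "75-84": 9, "Over 85": 5}

years_lost = {group: n * remaining_years[group] for group, n in deaths.items()}
hardest_hit = max(years_lost, key=years_lost.get)
```

The ranking depends on the expectancy values you plug in, which is exactly the tension the puzzle is probing: the oldest group has the most deaths but the fewest years left to lose.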

My final question is about a hypothesis that sounds fantastical and is highly speculative:

Could pandemics be the reason that mammals reproduce sexually instead of asexually? Did pandemics give us sex?

Consider the following points:

- Asexual reproduction is twice as efficient as sexual reproduction, if all individuals can reproduce, instead of just half the population. In addition, it eliminates the hassle of finding a mate (or learning how to use a dating app!). Another advantage is that in asexual species, just one individual can quickly colonize new territory. The asexual Brahminy blind snake, which looks more like a large worm, has been spread to six continents by single individuals hitchhiking in flower pots!
- Sex, of course, increases genetic diversity in an organism’s offspring, which might make them fitter and more adaptable. This is useful, especially in adverse circumstances. But the efficiency of asexual reproduction often wins out at other times. As Christie Wilcox wrote in a recent *Quanta* article, many simple animals use both asexual and sexual reproduction, using the latter only in difficult times.
- Most lizards can reproduce asexually using a process called parthenogenesis, in which a female can reproduce without the genetic contribution of the male. About 50 kinds of lizards reproduce exclusively by asexual means (obligate parthenogenesis), plus the one species of snake mentioned above.
- If reptiles can do it, why not mammals? Contrary to what one might expect, sperm is not absolutely necessary for mammalian eggs to start dividing. The right chemical stimulus can cause a mammalian ovum in a lab, even a human one, to start dividing and producing an embryo without sperm. So there must be an evolutionary reason why mammals have lost the capacity to activate those pathways naturally.
- One of the leading theories behind why sexual reproduction is necessary is the Red Queen hypothesis — the idea that sexual reproduction helps large animals keep one step ahead of rapidly evolving microscopic infectious agents.
- Pandemics are the hammer blows of such infections, and they affect large animal populations several times a century. The devastating ease with which pandemics level asexual populations, as was the case in the Irish potato famine, is well known. If the Black Death could kill 30%-60% of the genetically diverse sexually reproducing human population in Europe in the 14th century, what could it have done to an asexual version of us? An asexual population is the equivalent of putting the same genetic lock on every house on the street. A bacterium or virus that happened to find the right key could render such a species extinct in one fell swoop.

So maybe, just maybe, pandemics were responsible for making us the obligate sexual creatures we are!

What do you think about this hypothesis? Comments welcome, and happy puzzling!

*Editor’s note: The reader who submits the most interesting, creative or insightful solution (as judged by the columnist) in the comments section will receive a* Quanta Magazine *T-shirt or one of the two *Quanta* books, *Alice and Bob Meet the Wall of Fire* or *The Prime Number Conspiracy* (winner’s choice). And if you’d like to suggest a favorite puzzle for a future Insights column, submit it as a comment below, clearly marked “NEW PUZZLE SUGGESTION.” (It will not appear online, so solutions to the puzzle above should be submitted separately.)*

*Note that we may hold comments for the first day or two to allow for independent contributions by readers*.

The physicists who run the world’s most sensitive experimental search for dark matter have seen something strange. They have uncovered an unexpected excess of events inside their detector that could fit the profile of a hypothetical dark matter particle called an axion. Alternately, the data could be explained by novel properties of neutrinos.

More mundanely, the signal could come from contamination inside the experiment.

“Despite being excited about this excess, we should be very patient,” said Luca Grandi, a physicist at the University of Chicago and one of the leaders of the 163-person experiment, which is called XENON1T. The experiment’s successor will be needed to rule out possible contamination from tritium atoms, Grandi said. That experiment is expected to begin later this year.

Outside experts say that whenever there’s a boring explanation, it’s usually right. But not always — and the mere possibility that XENON1T has made a discovery merits attention.

“If this turns out to be a new particle, then it’s a breakthrough we have been waiting for for the last 40 years,” said Adam Falkowski, a particle physicist at Paris-Saclay University in France who was not involved in the experiment. “You cannot overstate the importance of the discovery, if this is real.”

Particle physicists have searched that long for a more complete inventory of nature, beyond the set of particles and forces known as the Standard Model of particle physics. And for 20 years, experiments like XENON1T have hunted specifically for the unknown particles that comprise dark matter, the invisible stuff that throws its gravitational weight around throughout the universe.

If XENON1T’s signal comes from axions — a top dark matter candidate — or nonstandard neutrinos, “it would clearly be very exciting,” said Kathryn Zurek, a theoretical physicist at the California Institute of Technology. For now, though, “the mundane explanation of tritium is more likely in my mind.”

The result described in the paper is a pileup of events called “electronic recoils” inside the XENON1T detector. A sensor-lined tank of 3.2 metric tons of pure xenon, the detector is located thousands of feet beneath Gran Sasso, a mountain in Italy. As a chemically inert, “noble” element, xenon makes for a quiet gazing pool in which to look for the ripples of unknown particles, should any flit through.

The XENON series of experiments was originally designed to seek heavy hypothetical dark matter particles called weakly interacting massive particles, or WIMPs. Any WIMPs traversing the detector should occasionally collide with a xenon nucleus, generating a flash of light.

But after 14 years of searching with ever larger and more sensitive detectors, the researchers haven’t seen these nuclear recoils. Competing experiments looking for nuclear recoils in tanks of other noble elements and substances haven’t either. “It has been a saga, and we are all very desperate,” said Elena Aprile, a particle physicist at Columbia University who devised the xenon-based detection method and has been leading the XENON experiments ever since.

As the WIMP search kept coming up empty, XENON scientists realized several years ago that they could use their experiment to search for other kinds of unknown particles that might pass through the detector: particles that bang into an electron rather than a xenon nucleus.

They used to treat these “electronic recoils” as background noise, and indeed many of these events are caused by mundane sources such as radioactive lead and krypton isotopes. But after making improvements that dramatically reduced this background contamination over the years, the researchers found that they could look for signals in the low-level noise.

In their new analysis, the physicists examined electronic recoils in the first year’s worth of XENON1T data. They expected to see roughly 232 of these recoils, caused by known sources of background contamination. But the experiment saw 285 — a surplus of 53 that signifies an unaccounted-for source.
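A back-of-the-envelope check shows why this surplus drew attention. The sketch below uses only the event counts quoted above — it is an illustration, not the collaboration’s full spectral fit — and estimates the naive significance of the excess for a Poisson-distributed background.

```python
import math

expected = 232  # electronic recoils predicted from known backgrounds
observed = 285  # recoils seen in the first year of XENON1T data

excess = observed - expected  # the unexplained surplus of 53 events

# For a Poisson process the standard deviation equals the square root
# of the mean, so a crude significance estimate is excess / sqrt(expected).
significance = excess / math.sqrt(expected)
print(f"{excess} excess events, roughly {significance:.1f} sigma")
```

This naive estimate lands near 3.5 sigma — intriguing, but short of the 5-sigma convention particle physicists use before claiming a discovery, which fits the cautious tone of the team.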

The team kept the finding under wraps for about a year. “We have been working and working and trying to understand,” Aprile said. “I mean, these poor students!” After rejecting all possible sources of error they could think of, the researchers came up with three explanations that would fit the size and shape of the bump in their data plots.

First and perhaps most exciting is the “solar axion,” a hypothetical particle produced inside the sun that would be similar to a photon but with a tiny amount of mass.

Any axions produced recently in the sun couldn’t be the dark matter that has shaped the cosmos since primordial times. But if the experiment has detected solar axions, then it means axions exist. “Such an axion could also be produced in the early universe and then would make up some component of dark matter,” said Peter Graham, a particle physicist at Stanford University who has theorized about axions and ways to detect them.

Researchers said the energy of solar axions inferred from XENON1T’s bump doesn’t fit with the simplest models of axion dark matter, but more complicated models can probably reconcile them.

Another possibility is that neutrinos — the most mysterious of the known particles of nature — might have large magnetic moments, meaning they’re like little bar magnets. Such a property would allow them to scatter off electrons at an enhanced rate, explaining the surplus of electronic recoils. Graham said neutrinos possessing a magnetic moment “would also be very exciting since it indicates new physics beyond the Standard Model.”

But it’s also possible that trace amounts of tritium, a rare hydrogen isotope, are present in the xenon tank, and that their radioactive decays generate electronic recoils. This possibility “can be neither confirmed nor excluded,” the XENON1T team wrote in their paper.

Outside researchers say there are “not red, but orange flags,” as Falkowski put it, that point to the boring answer. Most importantly, if the sun creates axions, then all stars do. These axions pull a small amount of energy away from the star, like steam carrying away the energy of a boiling kettle. In very hot stars like red giants and white dwarfs, where axion production should be greatest, this energy loss would be enough to cool the stars down. “A white dwarf would produce so many axions that we wouldn’t see hot white dwarfs around today like we do,” said Zurek.

Neutrinos with large magnetic moments have been similarly disfavored: In comparison to standard neutrinos, more of them would be spontaneously produced inside stars, sapping away more of the stars’ energy and cooling down hot stars more than is observed.

But that logic might be flawed, or some other particle or effect might explain XENON1T’s bump. Luckily the physics community won’t have to wait long for answers; XENON1T’s successor, the XENONnT experiment — which will monitor for recoils in 8.3 metric tons of xenon — is on track to begin data collection later this year. “If the excess is there and at the same level,” Grandi said, then “we expect to be able to discriminate among [the possibilities] in a few months of data taking.”

“One thing is clear,” said Juan Collar, a dark matter physicist at the University of Chicago who is not involved in the experiment. “The XENON program continues to trailblaze in the dark matter field. The most sensitive experiment will be the first to run into the unexpected, and XENON continues to maintain a solid grip on that prized pole position.”

A time-honored practice in mathematical circles is to divide the field in two. There’s the traditional “applied versus pure” argument, which mirrors the experimental-theoretical divide of other disciplines — the tension between advancing knowledge toward a specific end and doing it for its own sake. Or we can bisect mathematics in the same way that our brain is split, with an algebraic “left hemisphere” that thinks in logical sequences and a geometric “right hemisphere” that has a more visual approach. But the field also breaks down according to a more subtle distinction: one’s preference between two flavors of mathematical beauty.

It’s tough for nonexperts to see mathematics as beautiful in the first place. Beauty is in the eye of the beholder, sure, but it’s also hard to see when the work of art is hidden in darkness, obscured by an impenetrable cloud of symbols and jargon. Trying to appreciate mathematics without understanding its inner workings is like reading a description of Beethoven’s Fifth Symphony instead of hearing it.

Yet mathematicians have no qualms about earnestly describing their equations and proofs as beautiful. It’s a sense of aesthetics that has proved remarkably universal, existing across cultures and times: A Babylonian mathematician and a modern student could find equal delight in studying a perfect arrangement of lines in plane geometry, or in solving a quadratic equation.

And roughly speaking, mathematical beauty can come in one of two forms, generic or exceptional. I would go so far as to say that mathematicians themselves come in these two flavors, too — at least, they tend to gravitate to one of the two poles.

The first variant is an ethereal form of beauty, reflected in formal structures and patterns. It’s a sense of wonder at the inexorable order in which the mathematical world arranges itself. Just think of how perfectly the natural numbers line up in an infinite row. Or consider the sequence of Euclidean spaces of increasing dimensions: a line, a plane, a space, etc. Or the rigor and precision of formal logic itself. These structures are incredibly powerful and useful, and from a certain perspective that can indeed be beautiful.

But for those on the other side of the divide — which, it seems, includes most people and certainly most non-mathematicians — it’s tough to get truly excited by the concept of a vector space in *n* dimensions, or a continuous function on the real line. To appreciate these ideas is to appreciate a form of abstraction, and this sense of aesthetics often feels cold and formal. It’s the beauty of an ice queen, best admired from a safe distance, never up close.

The second form of mathematical beauty is more relatable. It concerns the exceptions to the rules, the objects that do not fit into any larger category. These are the curiosities, the one-offs, the mathematical incarnations of the enchanting fossils and strange minerals that filled natural history cabinets in the 17th and 18th centuries. This beauty has a very different feel to it: It’s exotic, quaint, intimate — and, of course, quite subjective.

Consider, for example, the dodecahedron, a favorite object in many mathematical cabinets of curiosities. It is the regular solid built out of 12 pentagons, and it’s one of the five perfectly symmetric solids. Its attraction was once described to me as being “complicated, but not too complicated.” The shape has a long history as a symbol of the esoteric that goes back to the ancient Greeks, when Plato suggested a connection between the five objects, now called the Platonic solids, and the physical universe. The dodecahedron symbolized all the heavenly bodies — the stars and planets, each perfect in shape and movement. Ever since, this mathematical form has signified the extraterrestrial, and it became a beloved symbol of alchemists and astrologists. From a modern mathematical perspective it is still considered exceptional, one of only a handful of symmetric objects that fully stand on their own and are not part of any larger pattern. For example, it is easy to generalize a cube or a tetrahedron to an analogous object in arbitrary dimensions, but there are no higher-dimensional analogues of the dodecahedron.
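The dodecahedron’s “complicated, but not too complicated” combinatorics can even be checked in a few lines. The sketch below — an illustration added here, not part of the essay — derives its vertex and edge counts from its 12 pentagonal faces and verifies Euler’s polyhedron formula, V − E + F = 2.

```python
faces = 12            # a dodecahedron is built from 12 pentagons
edges_per_face = 5    # each face is a pentagon
faces_per_edge = 2    # every edge is shared by exactly two faces
faces_per_vertex = 3  # three pentagons meet at each corner

edges = faces * edges_per_face // faces_per_edge       # 30
vertices = faces * edges_per_face // faces_per_vertex  # 20

# Euler's polyhedron formula holds for any convex polyhedron
assert vertices - edges + faces == 2
print(vertices, edges, faces)  # 20 30 12
```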

Another mathematical misfit, a prize possession for any cabinet, is simply known as the monster. It is the largest exceptional building block out of which all symmetry groups can be constructed, a mathematical monstrosity that can only be visualized in a space of no less than 196,883 dimensions. Depending on your taste, the monster group is either the prettiest or the ugliest object in all mathematics.

Both types of beauty have charmed mathematicians over the years and led to many advances. Abstraction is an obviously powerful tool. It allows one to deal with all members of a family at once, and it places problems in a wider perspective. The mathematician who follows the ice queen often dislikes concrete applications or specific cases — Alexander Grothendieck, one of the high priests of abstract algebra, once famously picked 57 as an example of a prime number. (It’s not.) The fascination with mathematical outcasts has been a productive strategy too. Such objects often live at the intersection of multiple ideas and can act as an access point between completely different worlds. Aficionados of this style don’t care for “abstract nonsense” and cherish the peculiarities of the concrete case, warts and all.

But the real world is very different from the idealized landscape of mathematics. Most sciences are tethered to the one universe that constitutes the real world — and that universe is just one out of an infinity of mathematical possibilities. As Jean-Pierre Serre reportedly quipped to his fellow mathematician Raoul Bott, “While the other sciences search for the rules that God has chosen for this Universe, we mathematicians search for the rules that even God has to obey.”

Confronted with this existential question — what laws does the universe actually follow? — it’s natural for most scientists to gravitate toward the relatable charms of the exceptional objects in the cabinet. But science has taught us that the abstract and austere form of mathematical beauty often offers a safer long-term choice.

A famous demonstration of this involves the appearance of the Platonic solids in the early work of the astronomer Johannes Kepler. He proposed a model of the solar system that based the distances between planetary orbits on a particular configuration of the five solids. It was a beautiful idea, but doomed. Kepler himself later rejected this model, after concluding that the orbits of the planets did not form the singular perfect shape of a circle, but instead had the ugly appearance of an ellipse, which can take one of a whole range of shapes. It seemed a definite step backward. He compared this discovery to a “wagon full of manure” left in the Augean stables of science.

But while Kepler was initially led astray by his preference for exceptional objects, Isaac Newton would go on to explain the planets’ elliptical orbits based on his universal theory of gravity. In fact, he showed how all motions in the heavens were versions of circles, ellipses, hyperbolas and parabolas. The beauty lay in Newton’s abstract laws, not the specific solutions.

This is a lesson that physicists, and scientists generally, have learned many times over. In the 19th century, scientists moved away from the random collections of curiosity cabinets to a more systematic study of the natural world. Biologists started to collect all specimens in a group of organisms, not just the most beautiful butterflies or birds, and discovered the theory of evolution. Chemists classified all the elements, going beyond the easy bling of silver and gold, and uncovered the periodic table’s patterns in the process. Physicists then revealed the symmetries of elementary particles hidden within the elements’ atoms.

Every time, they discovered that the universe’s beauty lies in the abstract structures underlying physical phenomena. These structures may at first feel confusing and difficult to relate to, but taking the long view often proves much more powerful and meaningful. And, indeed, more beautiful.

Physicists have traced three of the four forces of nature — the electromagnetic force and the strong and weak nuclear forces — to their origins in quantum particles. But the fourth fundamental force, gravity, is different.

Our current framework for understanding gravity, devised a century ago by Albert Einstein, tells us that apples fall from trees and planets orbit stars because they move along curves in the space-time continuum. These curves are gravity. According to Einstein, gravity is a feature of the space-time medium; the other forces of nature play out on that stage.

But near the center of a black hole or in the first moments of the universe, Einstein’s equations break down. Physicists need a truer picture of gravity to accurately describe these extremes. This truer theory must make the same predictions Einstein’s equations make everywhere else.

Physicists think that in this truer theory, gravity must have a quantum form, like the other forces of nature. Researchers have sought the quantum theory of gravity since the 1930s. They’ve found candidate ideas — notably string theory, which says gravity and all other phenomena arise from minuscule vibrating strings — but so far these possibilities remain conjectural and incompletely understood. A working quantum theory of gravity is perhaps the loftiest goal in physics today.

What is it that makes gravity unique? What’s different about the fourth force that prevents researchers from finding its underlying quantum description? We asked four different quantum gravity researchers. We got four different answers.

**Claudia de Rham**, *a theoretical physicist at Imperial College London, has worked on theories of massive gravity, which posit that the quantized units of gravity are massive particles:*

Einstein’s general theory of relativity correctly describes the behavior of gravity over close to 30 orders of magnitude, from submillimeter scales all the way up to cosmological distances. No other force of nature has been described with such precision and over such a variety of scales. With such a level of impeccable agreement with experiments and observations, general relativity could seem to provide the ultimate description of gravity. Yet general relativity is remarkable in that it predicts its very own downfall.

General relativity yields the predictions of black holes and the Big Bang at the origin of our universe. Yet the “singularities” in these places, mysterious points where the curvature of space-time seems to become infinite, act as flags that signal the breakdown of general relativity. As one approaches the singularity at the center of a black hole, or the Big Bang singularity, the predictions inferred from general relativity stop providing the correct answers. A more fundamental, underlying description of space and time ought to take over. If we uncover this new layer of physics, we may be able to achieve a new understanding of space and time themselves.

If gravity were any other force of nature, we could hope to probe it more deeply by engineering experiments capable of reaching ever-greater energies and smaller distances. But gravity is no ordinary force. Try to push it into unveiling its secrets past a certain point, and the experimental apparatus itself will collapse into a black hole.

**Daniel Harlow**, *a quantum gravity theorist at the Massachusetts Institute of Technology, is known for applying quantum information theory to the study of gravity and black holes:*

Black holes are the reason it’s difficult to combine gravity with quantum mechanics. Black holes can exist only because gravity is the only force that is felt by all kinds of matter. If there were any type of particle that did not feel gravity, we could use that particle to send out a message from the inside of the black hole, so it wouldn’t actually be black.

The fact that all matter feels gravity introduces a constraint on the kinds of experiments that are possible: Whatever apparatus you construct, no matter what it’s made of, it can’t be too heavy, or it will necessarily gravitationally collapse into a black hole. This constraint is not relevant in everyday situations, but it becomes essential if you try to construct an experiment to measure the quantum mechanical properties of gravity.

Our understanding of the other forces of nature is built on the principle of locality, which says that the variables that describe what’s going on at each point in space — such as the strength of the electric field there — can all change independently. Moreover, these variables, which we call “degrees of freedom,” can only directly influence their immediate neighbors. Locality is important to the way we currently describe particles and their interactions because it preserves causal relationships: If the degrees of freedom here in Cambridge, Massachusetts, depended on the degrees of freedom in San Francisco, we may be able to use this dependence to achieve instantaneous communication between the two cities or even to send information backward in time, leading to possible violations of causality.

The hypothesis of locality has been tested very well in ordinary settings, and it may seem natural to assume that it extends to the very short distances that are relevant for quantum gravity (these distances are small because gravity is so much weaker than the other forces). To confirm that locality persists at those distance scales, we need to build an apparatus capable of testing the independence of degrees of freedom separated by such small distances. A simple calculation shows, however, that an apparatus that’s heavy enough to avoid large quantum fluctuations in its position, which would ruin the experiment, will also necessarily be heavy enough to collapse into a black hole! Therefore, experiments confirming locality at this scale are not possible. And quantum gravity therefore has no need to respect locality at such length scales.
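To get a feel for how short “the very short distances that are relevant for quantum gravity” are, one can compute the Planck length from the standard constants. The values below are rounded textbook figures, used here purely for illustration.

```python
import math

G = 6.674e-11     # Newton's gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34  # reduced Planck constant, J s
c = 2.998e8       # speed of light, m/s

# The Planck length, the scale where quantum gravity effects are
# expected to become important: sqrt(G * hbar / c^3)
l_planck = math.sqrt(G * hbar / c**3)
print(f"Planck length: {l_planck:.3e} m")  # about 1.6e-35 meters
```

That is roughly 20 orders of magnitude smaller than a proton — far beyond the reach of any conceivable particle accelerator, which is why the thought experiments above matter.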

Indeed, our understanding of black holes so far suggests that any theory of quantum gravity should have substantially fewer degrees of freedom than we would expect based on experience with the other forces. This idea is codified in the “holographic principle,” which says, roughly speaking, that the number of degrees of freedom in a spatial region is proportional to its surface area instead of its volume.
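The textbook embodiment of this area scaling is the Bekenstein–Hawking entropy of a black hole — a standard formula quoted here for context rather than taken from Harlow’s remarks — which counts a black hole’s degrees of freedom by the area A of its event horizon instead of by the volume it encloses:

```latex
S_{\mathrm{BH}} \;=\; \frac{k_B\, c^{3} A}{4 G \hbar}
\;=\; k_B\,\frac{A}{4\,\ell_P^{2}},
\qquad
\ell_P = \sqrt{\frac{G\hbar}{c^{3}}}
```

Here ℓ_P is the Planck length. Doubling the horizon’s surface area doubles the entropy, whereas a law that counted degrees of freedom by volume would grow far faster.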

**Juan Maldacena**, *a quantum gravity theorist at the Institute for Advanced Study in Princeton, New Jersey, is best known for discovering a hologram-like relationship between gravity and quantum mechanics:*

Particles can display many interesting and surprising phenomena. We can have spontaneous particle creation, entanglement between the states of particles that are far apart, and particles in a superposition of existence in multiple locations.

In quantum gravity, space-time itself behaves in novel ways. Instead of the creation of particles, we have the creation of universes. Entanglement is thought to create connections between distant regions of space-time. We have superpositions of universes with different space-time geometries.

Furthermore, from the perspective of particle physics, the vacuum of space is a complex object. We can picture many entities called fields superimposed on top of one another and extending throughout space. The value of each field is constantly fluctuating at short distances. Out of these fluctuating fields and their interactions, the vacuum state emerges. Particles are disturbances in this vacuum state. We can picture them as small defects in the structure of the vacuum.

When we consider gravity, we find that the expansion of the universe appears to produce more of this vacuum stuff out of nothing. When space-time is created, it just happens to be in the state that corresponds to the vacuum without any defects. How the vacuum appears in precisely the right arrangement is one of the main questions we need to answer to obtain a consistent quantum description of black holes and cosmology. In both of these cases there is a kind of stretching of space-time that results in the creation of more of the vacuum substance.

**Sera Cremonini**, *a theoretical physicist at Lehigh University, works on string theory, quantum gravity and cosmology:*

There are many reasons why gravity is special. Let me focus on one aspect, the idea that the quantum version of Einstein’s general relativity is “nonrenormalizable.” This has implications for the behavior of gravity at high energies.

In quantum theories, infinite terms appear when you try to calculate how very energetic particles scatter off each other and interact. In theories that are renormalizable — which include the theories describing all the forces of nature other than gravity — we can remove these infinities in a rigorous way by appropriately adding other quantities that effectively cancel them, so-called counterterms. This renormalization process leads to physically sensible answers that agree with experiments to a very high degree of accuracy.

The problem with a quantum version of general relativity is that the calculations that would describe interactions of very energetic gravitons — the quantized units of gravity — would have infinitely many infinite terms. You would need to add infinitely many counterterms in a never-ending process. Renormalization would fail. Because of this, a quantum version of Einstein’s general relativity is not a good description of gravity at very high energies. It must be missing some of gravity’s key features and ingredients.

However, we can still have a perfectly good approximate description of gravity at lower energies using the standard quantum techniques that work for the other interactions in nature. The crucial point is that this approximate description of gravity will break down at some energy scale — or, equivalently, below some length scale.

Above this energy scale, or below the associated length scale, we expect to find new degrees of freedom and new symmetries. To capture these features accurately we need a new theoretical framework. This is precisely where string theory or some suitable generalization comes in: According to string theory, at very short distances, we would see that gravitons and other particles are extended objects, called strings. Studying this possibility can teach us valuable lessons about the quantum behavior of gravity.