Change how you approach computation, or risk losing the space race. This ultimatum—which NASA faced in the 1960s, and which is highlighted in the recent film Hidden Figures—has never been more relevant to all of science. As the “deep learning” revolution sweeps through computer science, even particle physicists like myself are rethinking how to analyze data. By doing so, we may finally be able to discover the fundamental nature of dark matter.

After completing my first year of undergraduate coursework at the Australian National University, I had to decide what to specialize in. Although I was tempted to expand on my longtime hobby of programming, I ultimately chose to study physics. I have always wanted to understand our world at its most fundamental level—and educational materials for computer science were much more freely available on the Internet, so I could keep programming on my own time.

With a letter of admission in hand, I jumped on a plane headed to MIT—one of the furthest places on Earth from my hometown of Canberra—to pursue nuclear and particle physics. Interdisciplinary curiosity drove my decision. MIT is such a large “sandbox”—as my advisor, Professor Janet Conrad, puts it—that it is possible to learn from people working on nearly any problem in physics. In addition, nuclear and particle physics involve many fields of science—not just the computer science that I had left behind so many years ago, but everything from mechanical engineering to atmospheric science.

A simulation of a neutrino interacting with the liquid argon in MicroBooNE. A neutrino (ν_μ) enters from the bottom of the image, causing a nuclear reaction that produces a proton (p), a muon (μ) and two photons (γ). The proton and muon leave behind tracks of ionization as they move through the argon. The photons are uncharged and cannot be seen directly. Instead, they are inferred from the showers of particles they create when they interact with the argon. Muons labeled “cosmic” are caused by cosmic rays; they are unrelated to the neutrino and appear in the background.

Soon I found myself working on the MicroBooNE detector. For this experiment, a collaboration of physicists operates a liquid argon time-projection chamber that is essentially an electronic version of the bubble chamber, a particle detector filled with superheated liquid such as hydrogen. When the argon that fills the detector interacts with fundamental particles called neutrinos, other particles are produced. As these particles fly through the detector, they leave behind traces of ionization that the chamber then reads out in the form of images. MicroBooNE was designed to test an anomaly seen in a previous experiment, itself inspired by other anomalies. It’s part of a grand chase for an entirely new kind of particle called a “sterile neutrino,” whose non-interacting nature could be the missing link to dark matter.

In the bubble chamber days, processing a detector’s raw data was handled by an army of graduate students, who pored over the photographs coming out of the machine. MicroBooNE, on the other hand, reads data at a rate that far exceeds the capacity of even the most dedicated students. Instead, physicists must turn to automated data processing.

A similar transition occurred at NASA in the 1960s. Calculations on sensor readings were originally performed by so-called “human computers”: women with degrees in science and mathematics who executed the algorithms by mind and hand. However, as the space race ramped up, the relentless need for faster and more complex simulations drove NASA to adopt electronic computers. As told in Hidden Figures, one of those human computers, Dorothy Vaughan, saw automation as the future of her field and taught herself to program the new machines.

Like Vaughan, particle physicists were quick to adopt computers and have been analyzing data with algorithms for decades. However, writing software to process images, like those produced by MicroBooNE, turns out to be very difficult.

For example, imagine two pictures side by side: one of a Chihuahua and one of a Great Dane. The differences between the two are far too subtle and numerous to program by hand. Instead, we could select random patterns and match them against thousands of images of dogs, reinforcing the patterns that strongly differentiate Chihuahuas from Great Danes while letting the patterns that don’t fade away. This iterative refinement based on known examples is called “training,” and it is the basis of a field called “machine learning.” Even then, these algorithms struggle to match humans, who can glance at an image and instantly determine the type of dog.
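To make that reinforce-and-forget loop concrete, here is a minimal sketch in Python. Everything in it is invented for illustration: each “image” is boiled down to two made-up numbers, and the classifier’s weights play the role of the patterns being reinforced or forgotten.

```python
import numpy as np

# Toy stand-in for the dog-photo problem: each "image" is reduced to two
# made-up numbers, and the label says whether it shows a Chihuahua (0)
# or a Great Dane (1).
rng = np.random.default_rng(0)
chihuahuas = rng.normal(loc=[0.2, 0.3], scale=0.1, size=(100, 2))
great_danes = rng.normal(loc=[0.8, 0.9], scale=0.1, size=(100, 2))
X = np.vstack([chihuahuas, great_danes])
y = np.array([0] * 100 + [1] * 100)

# The "patterns" here are just two weights and a bias.
w, b = np.zeros(2), 0.0

def predict(X):
    """Probability that each image shows a Great Dane."""
    return 1 / (1 + np.exp(-(X @ w + b)))

# Training: repeatedly nudge the weights so that patterns which separate
# the two breeds are reinforced, while unhelpful ones fade toward zero.
for step in range(500):
    error = predict(X) - y
    w -= 0.5 * (X.T @ error) / len(y)
    b -= 0.5 * error.mean()

print(f"training accuracy: {np.mean((predict(X) > 0.5) == y):.2f}")
```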

If we can’t beat the human brain at image recognition, can we copy it? For decades, computer scientists have been attempting exactly that by stacking a few layers of idealized neurons. These layers make decisions based on the output of previous layers, creating an artificial neural network.
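In code, “stacking layers” is just composing simple functions. Here is a bare-bones sketch in Python (all sizes invented): each layer of idealized neurons takes a weighted sum of the previous layer’s outputs and applies a simple firing rule.

```python
import numpy as np

rng = np.random.default_rng(1)

def layer(inputs, weights, biases):
    # One layer of idealized neurons: a weighted sum of the previous
    # layer's outputs, passed through a simple nonlinear firing rule
    # (here, "fire only if the sum is positive").
    return np.maximum(0.0, inputs @ weights + biases)

# Three stacked layers, each deciding based on the output of the one before.
x = rng.normal(size=4)                               # a toy four-number input
h1 = layer(x, rng.normal(size=(4, 8)), np.zeros(8))
h2 = layer(h1, rng.normal(size=(8, 8)), np.zeros(8))
score = h2 @ rng.normal(size=8)                      # the network's final decision
print(score)
```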

Twenty years before he became the director of Facebook’s AI research, Yann LeCun and his colleagues merged a kind of image-processing operation called a “convolution” with neural networks to form what are called convolutional neural networks (CNNs). Many of these convolutions can be chained together to form a “deep” neural network, whose multitude of layers can learn high-level abstractions. The real power of CNNs was not realized until the late 2000s, when advances in computer hardware allowed the networks to train on millions of photographs.
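To show what chaining convolutions looks like in practice, here is a toy CNN written with the PyTorch library. The layer counts, sizes and labels are all invented; a network of the kind trained on detector images would be far deeper.

```python
import torch
import torch.nn as nn

# A toy convolutional neural network. Early layers scan for local patterns;
# deeper layers combine them into higher-level abstractions.
cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),   # scan for small local patterns
    nn.ReLU(),
    nn.MaxPool2d(2),                              # shrink, keeping the strongest responses
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # patterns of patterns: higher abstractions
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 2),                     # final decision, e.g. "neutrino" vs "background"
)

image = torch.randn(1, 1, 32, 32)  # one fake 32x32 grayscale image
print(cnn(image))                  # two scores, one per category
```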

Throughout my undergraduate and early graduate career, I have made sure to keep an eye on the neighboring sandbox of computer science, and it has become apparent that physicists should be paying very close attention to the progress being made in deep learning. In 2012, CNNs became the best-performing image-recognition algorithms, and in 2015 they surpassed human accuracy on a benchmark image-classification task for the first time.

Since then, my collaborators and I have shown that CNNs can be used to identify neutrinos in simulated MicroBooNE data. The experiment’s first published result will rely on these deep neural networks.

This is a profound change in how physicists approach data analysis. Traditionally, we would write an algorithm by visualizing raw data and developing abstractions for features like particle trajectories and deposited energy. With deep learning, we can feed the algorithm our raw data directly and have the neural network learn these abstractions for us. However, with the great power that deep learning gives us comes great scientific responsibility.
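The difference between the two workflows can be sketched in a few lines of code. Both classifiers below are toys, and every feature and layer size is made up; the point is only where the abstractions come from. In the first approach, a physicist hard-codes them (here, a hypothetical total charge and a crude elongation measure); in the second, the network consumes raw pixels and learns its own abstractions during training.

```python
import torch
import torch.nn as nn

def handcrafted_features(image):
    # Traditional route: a physicist decides which abstractions matter,
    # e.g. total deposited charge and how elongated the deposit looks.
    total_charge = image.sum()
    elongation = image.std(dim=-1).mean() / (image.std(dim=-2).mean() + 1e-6)
    return torch.stack([total_charge, elongation])

# Classifier built on the two hand-crafted features.
traditional = nn.Linear(2, 2)

# Classifier fed the raw pixels directly; its internal layers learn
# their own abstractions when trained on labeled examples.
deep = nn.Sequential(
    nn.Flatten(),
    nn.Linear(32 * 32, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)

image = torch.randn(1, 32, 32)            # one fake detector image
print(traditional(handcrafted_features(image)))
print(deep(image.unsqueeze(0)))
```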

When electronic computers were first introduced at NASA, the human computers didn’t leave overnight. The engineers were skeptical of the electronics, and for many years the human and electronic computers worked side by side, checking each other’s results. When astronaut John Glenn’s life depended on the accuracy of his splashdown coordinates, he insisted, “Get the girl to do it,” asking for human computer Katherine Johnson to verify the electronic computer’s work.

We are still in the earliest days of applying deep learning to particle physics, and the question of how to determine the uncertainty on a neural network’s output—a measure that engineers and scientists require to build safe technologies and publish scientific results—remains open. But for the foreseeable future, physicists will continue to write traditional algorithms to cross-validate these new methods.
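One idea explored in the machine learning literature, shown here purely as an illustrative sketch (it is not MicroBooNE’s procedure), is called Monte Carlo dropout: randomly silence some neurons at prediction time, repeat the prediction many times, and use the spread of the answers as a rough error bar.

```python
import torch
import torch.nn as nn

# A small network with a dropout layer that randomly silences neurons.
net = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Dropout(p=0.2),   # each pass, 20% of these neurons go silent at random
    nn.Linear(32, 1),
)

x = torch.randn(1, 16)   # a stand-in for one event's inputs
net.train()              # keep dropout active even while predicting

# Repeat the noisy prediction; the spread serves as a crude uncertainty.
samples = torch.stack([net(x) for _ in range(100)])
mean, spread = samples.mean().item(), samples.std().item()
print(f"prediction: {mean:.3f} +/- {spread:.3f}")
```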

Just as physicists are borrowing from computer science, we are also working to give back. Elsewhere at MIT, Professors Max Tegmark and Marin Soljačić are asking how physics can help improve and understand deep neural networks. I have always thought that we physicists are at our best when we take what we have learned, and—in the words of my advisor—“find and help build someone else’s sandcastle.”