A lot has changed in the 15 years since Kaiming He was a PhD student.

“When you are in your PhD stage, there is a high wall between different disciplines and subjects, and there was even a high wall within computer science,” He says. “The guy sitting next to me could be doing things that I completely couldn’t understand.”

In the seven months since he joined MIT’s Schwarzman College of Computing as the Douglas Ross (1954) Career Development Professor of Software Technology in the Department of Electrical Engineering and Computer Science (EECS), He says he is experiencing something that in his opinion is “very rare in human scientific history”: a lowering of the walls that separate different scientific disciplines. 

“There is no way I could ever understand high energy physics, chemistry, or the frontier of biology research, but now we are seeing something that can help us to break these walls,” He says, “and that is the creation of a common language that has been found in AI.”

Building the AI bridge

According to He, this shift began in 2012 in the wake of the “deep learning revolution,” the point when researchers realized that this set of machine learning methods based on neural networks was powerful enough to be put to far broader use.

“At this point computer vision — helping computers to see and perceive the world as if they are human beings — began growing very rapidly because as it turns out you can apply this same methodology to many different problems and many different areas,” says He. “So the computer vision community quickly grew really large because these different subtopics were now able to speak a common language and share a common set of tools.”

From there, He says the trend began to expand to other areas of computer science, including natural language processing, speech recognition, and robotics, creating the foundation for ChatGPT and other progress toward artificial general intelligence. 

“All of this has happened over the last decade, leading us to a new emerging trend that I am really looking forward to, and that is watching AI methodology propagate [to] other scientific disciplines,” says He. 

One of the most famous examples, He says, is AlphaFold, an artificial intelligence program developed by Google DeepMind, which predicts protein structures.

“It’s a very different scientific discipline, a very different problem, but people are also using the same set of AI tools, the same methodology to solve these problems,” He says, “and I think that is just the beginning.”

The future of AI in science

Since coming to MIT in February, He says he has talked to professors in almost every department. Some days he finds himself in conversation with two or more professors from very different backgrounds.

“I certainly don’t fully understand their area of research but they will just introduce some context and then we can start to talk about deep learning, machine learning, [and] neural network models in their problems,” He says. “In this sense, these AI tools are like a common language between these scientific areas: the machine learning tools translate their terminology and concepts into terms that I can understand, and then I can learn their problems and share my experience, and sometimes propose solutions or opportunities for them to explore.”

Expanding AI to different scientific disciplines has significant potential, from using video analysis to predict weather and climate trends to expediting the research cycle and reducing the costs of new drug discovery. 

While AI tools provide a clear benefit to the work of He’s scientist colleagues, He also notes the reciprocal effect the sciences can have, and have had, on the creation and advancement of AI.

“Scientists provide new problems and challenges that help us continue to evolve these tools,” says He. “But it is also important to remember that many of today’s AI tools stem from earlier scientific areas — for example, artificial neural networks were inspired by biological observations; diffusion models for image generation were motivated by [a term from] physics.”

“Science and AI are not isolated subjects. We have been approaching the same goal from different perspectives, and now we are getting together.”

And what better place for them to come together than MIT.

“It is not surprising that MIT can see this change earlier than many other places,” He says. “[The Schwarzman College of Computing] created an environment that connects different people and lets them sit together, talk together, work together, exchange their ideas, while speaking the same language — and I’m seeing this begin to happen.” 

In terms of when the walls will fully lower, He notes that this is a long-term investment that won’t happen overnight.

“Decades ago, computers were considered high tech and you needed specific knowledge to understand them, but now everyone is using a computer,” He says. “I expect in 10 or more years, everyone will be using some kind of AI in some way for their research — it’s just their basic tools, their basic language, and they can use AI to solve their problems.”

Kaitlin Provencher | School of Science