The top three AI challenges and trends on the mind of Microsoft Research’s president
What’s next for AI, and what are the main technical hurdles to overcome?
That was one of the questions we asked when working on our deep dive into Microsoft’s history and future in artificial intelligence, published last week to launch GeekWire’s series on the company’s 50th anniversary.
Peter Lee, the longtime computer scientist who leads Microsoft Research, gave the question some serious thought and offered three issues that are on his mind. Listen to highlights from his responses in the third segment of this week's GeekWire Podcast (below), and continue reading for his comments.
1: Developing AI systems for science that can learn and understand the “languages” of nature, such as the structures of proteins, molecules, and materials, and combining these with general language models to create powerful multimodal AI technologies.
“We’ve now learned that transformers are really good at learning from the language of human beings. And what we’re discovering over and over again is that the same architecture is equally good at learning the different languages of nature, the languages of proteins and molecules, the languages of air flows in the upper atmosphere, the languages of material lattices, electrolyte structures for batteries, and on and on, and that is incredibly exciting. …
“But the big question is, if you can get true multi-modality, so that that works in conjunction with a language model, or in conjunction with a pathology model, do those things reinforce each other and give you some new superpowers?”
The work of a newly minted Nobel Prize winner, the University of Washington’s David Baker, stands out as a high-profile example of the potential of this field.
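Microsoft hasn't published code alongside these remarks, but the point that one architecture can span many "languages" is easy to sketch. The minimal PyTorch sketch below (all names are hypothetical illustrations, not Microsoft's code) builds the same transformer encoder twice; the only thing that changes between natural language and protein sequences is the vocabulary being modeled.

```python
# Minimal sketch (not Microsoft's code): the same transformer architecture
# applied to two different "languages." Only the vocabulary and tokenizer
# change between natural language and protein sequences.
import torch
import torch.nn as nn

class SequenceModel(nn.Module):
    def __init__(self, vocab_size: int, d_model: int = 128, n_layers: int = 2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, vocab_size)  # predict the next token

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(self.embed(tokens)))

# "Language of proteins": the 20 amino-acid letters are the whole alphabet.
protein_model = SequenceModel(vocab_size=20)
# Natural language: a (toy) word-level vocabulary.
text_model = SequenceModel(vocab_size=50_000)

# Identical forward pass in both domains.
fragment = torch.randint(0, 20, (1, 64))   # a random protein fragment
logits = protein_model(fragment)           # shape (1, 64, 20)
```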
2: Advancing the field of autonomous AI agents that can plan and execute complex tasks, collaborate with other AI agents, and learn from their actions.
“[In the realm of agentic AI], there are three problems that we’ve been working on very hard. One is about autonomy.
“The second is about large action models. So that’s the idea of the model being able to plan an action, execute the action, and then understand and develop a reaction. It’s easiest to understand that in the context of robots. … But these even matter if you just want the machine to do stuff on your desktop for you, and things like that.
“The other aspect of agentic AI is the extent to which these things can collaborate with each other and with you, and be a part of a team of people and AIs working together with well-defined roles to do things.”
This is a hot area for AI research and emerging startups right now, with early examples including agents that can browse the web and book trips autonomously for users. It’s also an area of fierce competition for tech giants. Microsoft announced its own AI agents on Monday, ahead of the big Agentforce rollout by Salesforce this week.
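Lee's plan/execute/react framing describes a simple control loop, which the following Python sketch makes concrete. The planner, executor, and environment here are hypothetical stubs for illustration, not Microsoft's agents or any shipping framework.

```python
# Minimal sketch of the plan -> execute -> react loop behind what Lee
# calls "large action models." All functions are hypothetical stubs,
# not Microsoft's implementation or any real API.
from dataclasses import dataclass

@dataclass
class Observation:
    state: str
    done: bool

def plan(goal: str, obs: Observation) -> str:
    """Stand-in for a model call that proposes the next action."""
    return f"next step toward {goal!r} given {obs.state!r}"

def execute(action: str) -> Observation:
    """Stand-in for carrying out the action (robot, desktop, web, API)."""
    print("executing:", action)
    return Observation(state="world after action", done=True)

def run_agent(goal: str, max_steps: int = 10) -> Observation:
    obs = Observation(state="initial world", done=False)
    for _ in range(max_steps):
        action = plan(goal, obs)   # 1. plan an action
        obs = execute(action)      # 2. execute it
        if obs.done:               # 3. react: stop, or loop and replan
            break
    return obs

run_agent("book a trip")
```

The collaboration problem Lee describes amounts to running several such loops side by side, each with a distinct role, exchanging observations with each other and with human teammates.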
3: Managing AI infrastructure, optimizing hardware architectures, phasing out old systems, and preparing data centers for increasingly powerful AI models and services.
“Our AI infrastructure is massive and growing even more massive all the time, and so just managing that turns out to be a wildly difficult research problem — even just managing the decommissioning of old hardware and its replacement with new hardware, or even just the algorithmic challenge of routing cables into and out of these massive data centers, let alone really harnessing all the compute power and designing the optimal architectures for these things. And then in the future, what will these data centers look like five years from now, eight years from now, 15 years from now?”
Microsoft is investing heavily in AI and cloud infrastructure. Its overall capital spending reached a record $19 billion in the June quarter, and CFO Amy Hood told analysts that the number will keep growing in the years ahead.
Listen to the GeekWire Podcast and read our Microsoft @ 50 opener for more.