How do you tame AI? Scientist sees a need for regulating bots like drugs or airplanes

Have the risks of artificial intelligence risen to the point where more regulation is needed? Cognitive scientist Gary Marcus argues that the federal government — or maybe even international agencies — will need to step in.

The Food and Drug Administration or the Federal Aviation Administration could provide a model, Marcus said last week during a fireside chat with Seattle science-fiction author Ted Chiang at Town Hall Seattle.

“I think we would like to have something like an FDA-like approval process if somebody introduces a new form of AI that has considerable risks,” Marcus said. “There should be some way of regulating that and saying, ‘Hey, what are the costs? What are the benefits? Do the benefits to society really outweigh the costs?’”

In addition to getting regulatory approval for new strains of generative AI, software companies should be subject to outside auditing procedures to assess how AI tools are performing, Marcus said.

“For example, we know that large language models are now being used to make job decisions — who should be hired or get an interview — and we know that they have bias,” he said. “But there’s no way of really even auditing to find out how much that’s going on. We would like to have liability laws, so that if companies cause major harm to society, we would like the companies to bear some of the cost of that right now.”

AI safety is one of the principal topics that Marcus covers in his research as a professor emeritus at New York University, and in a newly published book titled “Taming Silicon Valley.” In the book, and during the Town Hall event, Marcus traced the troublesome issues surrounding generative AI, including concerns about plagiarism, hallucinations, disinformation and deepfakes, and lack of transparency.

The companies that are leading the AI charge insist they’re looking after the safety issues. For example, in April, the CEOs of leading tech companies — including Microsoft, OpenAI, Alphabet and Amazon Web Services — joined an AI safety and security board with the mission of advising the federal government on how to protect critical infrastructure.

But Marcus insisted the AI field needs independent oversight, with scientists in the loop. “Often the government leaders meet with the company leaders, but they don’t have any independent scientists there,” he said. “And so you get what’s known as regulatory capture, with the big companies regulating themselves.”

As an example, Marcus pointed to the debate over whether AI should be open-source, with Meta CEO Mark Zuckerberg saying yea … and Nobel laureate Geoffrey Hinton, the “Godfather of AI,” saying nay.

“It shouldn’t be up to Mark Zuckerberg and Yann LeCun, who’s the chief AI officer at Meta, to decide. But that’s exactly what happened. … They decided for all of us, and potentially put us at risk,” Marcus said. “So, all of the AI stuff that they put out is now being actively used by China, for example. If you accept that we’re in conflict with them, that’s maybe not a great idea.”

Marcus called for the creation of a Federal AI Administration, or perhaps even an International Civil AI Organization.

“A good model here is commercial airlines, which are incredibly safe, where you put people in a flying bus at 30,000 feet, and they’re much safer than they are in their own cars,” he said. “That’s because we have multiple layers of oversight. We have regulations about how you design an airplane, how you test it, how you maintain it, how you investigate accidents and so forth — and we’re going to need something like that for AI.”

But will we get it? Marcus is realistic about current political trends. “The chance that any of this is going to go through in the near term, given the change in regime, seems unlikely,” he said.

In his book, Marcus proposes a boycott of generative AI — an idea that drew some skepticism from Chiang.

“Microsoft has put AI into, like, even Notepad and Paint,” said Chiang, who writes about AI for The New Yorker. “It’s going to be hard to use any product that doesn’t have this in it, and it’s also going to be super hard to discourage children from using it to do their homework for them.”

Marcus acknowledged that a boycott would be a “heavy lift.”

“The analogy I would make is to things like fair-trade coffee, where you make some list and say, ‘Look, these products are better. These are OK, please use those,’” he said. “We should use generative AI for images, for example, only from companies that properly license all of the underlying stuff. And if we had enough consumer pressure, we might get one or two companies to do that.”

The way Marcus sees it, public pressure is the only way America will get good public policies on AI. “With AI, we’re facing something similar to what we’ve seen with climate change, which is, the government really doesn’t do anything unless people get really, really upset about it,” he said. “And we may need to get really upset about AI policy to address these issues.”

Other highlights from the talk:

  • Marcus suspects that the performance curve for large language models such as ChatGPT is flattening out. “There was a trick that people used to make these systems better, which was to use larger and larger fractions of the internet to train models on,” he said. “But now the fraction is very close to 100%, and you can’t double that and get 200% of the internet. That doesn’t really exist, and so maybe there’s not enough data to keep going.”
  • Chiang agreed with Marcus that AI could be most helpful in fields such as materials science and biomedicine — for example, Nobel-worthy research into protein design. “They’re very large possibility spaces, but they’re fairly well-defined possibility spaces, and we have software which is better at searching them than humans are,” he said. “I can see us getting very good at that without actually making a lot of headway on, say, reasoning about the real world.”
  • Marcus said he thinks OpenAI is being pushed toward surveillance applications. “I think that they can’t make enough money on things like Copilot,” he said. “If their market niche becomes a lot like Facebook’s — which is selling your data, which is a kind of surveillance — it doesn’t have to work that well. They just have to collect the data.”