By Wendy Blake
When a door flew off a Boeing plane in flight this month, the Federal Aviation Administration stepped in immediately and grounded the entire fleet of 737 Max 9 jets until a full investigation could be completed.
But if a malfunction occurs with artificial intelligence, whether it’s an accident involving a self-driving car or the wrongful arrest of a person because of bias built into digital systems, there is no such governmental authority to step in.
AI Ethics was the focus of the American Museum of Natural History’s first SciCafe conversation of the year, on January 10. The after-hours event, held under the iconic blue whale, had the Hall of Ocean Life packed to the gills to hear Dr. Rumman Chowdhury, co-founder of Humane Intelligence, a nonprofit that helps tech companies prevent problems related to incorrect information or “abusive content.”
The museum holds these “science socials” on the first Wednesday of every month, from October to June. “We encourage adults [21 and over] throughout the city to join us for these fun, engaging and free programs to enjoy a drink, exchange ideas, and hear from experts about the latest scientific issues of the day,” says Jacqueline Handy, director of public programs. Reservations are required.
Chowdhury’s talk, which was clear and accessible to this non-techie, provided an understanding of how AI functions and the concerns around it. Chowdhury noted that the term AI itself is misleading: We use it as if we were talking about an entity with agency. It’s important to remember, she said, that it is people who program these systems—which are simply math put into code—though the tech firms proclaim that it is AI that will cure cancer and eliminate poverty. The technology is not bigger and better than the humans behind it, she said; tech companies are making extravagant promises in order to secure money to develop systems and increase shareholder value, without being subject to any accountability.
Sam Altman, CEO of OpenAI, for instance, insists AI will cure poverty. “It’s not a Saturday project,” said Chowdhury. As for the ominous headlines warning that AI will replace humans, she said there are genuine threats to the public good if new innovations are not employed responsibly.
Chowdhury, who addressed congressional lawmakers three times last year as Washington scrambles to keep up with the rapidly evolving technology, cited many examples of problems in the industry. She pointed to cases in which self-driving cars were involved in fatal crashes and the humans in the cars were held liable. Other incidents have involved the wrongful arrests of people of color because of faulty facial recognition matches.
There needs to be a “human-in-the-loop,” said Chowdhury, citing an incident in 1983 when scientist Stanislav Petrov, assigned to monitor the Soviet Union’s nuclear early-warning system, stepped in and averted catastrophic disaster. When the system reported that the U.S. had fired a missile, Petrov judged—correctly—that it was a false alarm.
As AI becomes increasingly prevalent, we need humans in the loop who are empowered, said Chowdhury. She called for independent assessing and auditing committees within the industry, as well as creation of a global governance body to focus on creating technical approaches that serve the public good.
Chowdhury said “technosolutionists” in Silicon Valley, who argue that humanity is the problem and technology will save us, see themselves as optimists and portray ethicists like her as pessimists. Yet, she said, she’s very optimistic about AI’s potential, as long as it’s employed ethically.
“As we put this technology into play, let’s be a little bit mindful as to whether or not it is even useful and beneficial for human beings, rather than assuming that tech is innately beneficial,” she told the SciCafe audience.
Current AI systems, she said, uphold existing structures, such as the prison industrial complex and our surveillance society. “I don’t want existing systems, I want better systems,” said Chowdhury. “AI can help cure diseases and create better educational systems … but it’s being used for punishment, to determine whether students are cheating and paying attention.” Instead, she said, we should look toward positive uses: “How can we give every child their own AI tutor?”
In the current unregulated environment, there is little restraint on AI developers. “We can’t rely solely on tech CEOs to build AI that serves humanity, even if well intentioned,” Chowdhury said, noting the recent New York Times lawsuit filed against OpenAI and Microsoft to block them from “scraping” copyrighted material on the Internet. “They want to be exempt from laws that you and I are governed under,” said Chowdhury.
The next SciCafe, on February 7, will look at maternal mortality — its causes and the barriers to reducing it. The U.S. has the highest rate of maternal mortality among all high-income countries. Details and registration for the session are HERE.