Meet The Humans Trying To Keep Us Safe From AI


A year ago, the idea of holding a meaningful conversation with a computer was the stuff of science fiction. But since the launch of OpenAI's ChatGPT last November, life has started to feel like a techno thriller with a fast-moving plot. Chatbots and other generative AI tools are beginning to change how people live and work in fundamental ways. But whether this plot turns out inspiring or dystopian depends on who helps write it.

Fortunately, as AI evolves, so does the cast of people building and studying it. It is a more diverse crowd of leaders, researchers, entrepreneurs, and activists than those who laid the foundations of ChatGPT. Although the AI community remains largely male, in recent years some researchers and organizations have pushed to make it more welcoming to women and other underrepresented groups. And thanks to a movement, led largely by women, that addresses the ethical and social implications of the technology, many people in the field are now concerned with more than just building algorithms or making money. Here are some of the people shaping this fast-moving story. —Will Knight

About the Art

"I wanted to use generative artificial intelligence to capture the possibilities and fears we have as we explore our relationship with this new technology," says artist Sam Cannon, who worked with four photographers to create AI-derived portraits for enhancement. did "It was like a conversation: I gave him images and ideas and he offered them in return."


Rumman Chowdhury led Twitter's ethical AI research until Elon Musk acquired the company and fired her team. She is the cofounder of Humane Intelligence, a nonprofit that uses crowdsourcing to uncover vulnerabilities in AI systems, running contests that challenge hackers to provoke bad behavior in algorithms. Its first event, scheduled for this summer with backing from the White House, will test generative AI systems from companies including Google and OpenAI. Chowdhury says large-scale public testing is needed because AI systems have such broad repercussions: "If the consequences will significantly affect society, then aren't the people in society the best experts?" —Khari Johnson


Sarah Bird's job at Microsoft is to keep the generative AI the company is adding to its Office apps and other products from going off the rails. As she has watched the text generators behind the Bing chatbot become more capable and useful, she has also seen them get better at spewing biased content and harmful code. Her team works to contain that dark side of the technology. AI could improve many people's lives, Bird says, but "all of that is impossible if people are afraid the technology will produce stereotyped results." —KJ


Yejin Choi, a professor at the University of Washington's School of Computer Science and Engineering, is developing an open source model called Delphi, designed to have a sense of right and wrong. She is interested in how humans perceive Delphi's moral pronouncements. Choi wants to build systems as capable as those from OpenAI and Google without requiring huge resources. "The current focus on scale is very unhealthy for a variety of reasons," she says. "It's a total concentration of power, way too expensive, and unlikely to be the only way." —WK


Margaret Mitchell founded Google's Ethical AI research group in 2017. She was fired four years later after a dispute with executives over a paper she coauthored, which warned that large language models, the technology behind ChatGPT, can reinforce stereotypes and cause other problems. Mitchell is now ethics chief at Hugging Face, a startup making open source AI software for programmers. She works to ensure the company's releases don't spring any nasty surprises, and she encourages the field to put people before algorithms. Generative models can be helpful, she says, but they may also be undermining people's sense of truth: "We risk losing touch with the facts of history." —KJ


Inioluwa Deborah Raji started out in AI with a project that found bias in facial analysis algorithms, which were least accurate on women with darker skin. The findings prompted Amazon, IBM, and Microsoft to stop selling face-recognition technology. Raji is now working with the Mozilla Foundation on open source tools that help people vet AI systems, including large language models, for flaws such as bias and inaccuracy. Raji says the tools can help communities harmed by AI challenge the claims of powerful tech companies. "People are actively denying that harms happen," she says, "so gathering evidence is integral to any kind of progress in this field." —KJ


Daniela Amodei previously worked on AI policy at OpenAI, helping to lay the groundwork for ChatGPT. But in 2021 she and several others left to found Anthropic, a public-benefit corporation charting its own approach to AI safety. The startup's chatbot, Claude, has a "constitution" guiding its behavior, based on principles drawn from sources including the United Nations' Universal Declaration of Human Rights. Amodei, Anthropic's president and cofounder, says ideas like that will reduce misbehavior today and perhaps help constrain more powerful AI systems in the future: "Thinking long-term about the potential impacts of this technology could be very important." —WK


Lila Ibrahim is the chief operating officer of Google DeepMind, a research unit at the center of Google's generative AI projects. To her, running one of the world's most powerful AI labs is less a job than a moral calling. Ibrahim joined DeepMind five years ago, after nearly two decades at Intel, hoping to help AI evolve in a way that benefits society. One of her roles is chairing an internal review council that discusses how to widen the benefits of DeepMind's projects and steer away from bad outcomes. "I thought if I could bring my experience and knowledge to help bring this technology into the world in a more responsible way, then I should be here," she says. —Morgan Meaker


This article appears in the July/August 2023 issue.

Tell us what you think about this article. Send a letter to the editor at mail@wired.com.

