An entire team responsible for making sure that Microsoft’s AI products ship with safeguards to mitigate social harms was cut during the company’s most recent layoff of 10,000 employees, Platformer reported.
Former employees said that the ethics and society team was a critical part of Microsoft's strategy to reduce risks associated with using OpenAI technology in Microsoft products. Before it was killed off, the team developed an entire “responsible innovation toolkit” to help Microsoft engineers forecast what harms could be caused by AI—and then to diminish those harms.
Platformer’s report came just before OpenAI released possibly its most powerful AI model yet, GPT-4, which is already helping to power Bing search, Reuters reported.
In a statement provided to Ars, Microsoft said that it remains “committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this.” Calling the work of the ethics and society team “trailblazing,” Microsoft said that the company had focused more over the past six years on investing in and expanding the size of its Office of Responsible AI. That office remains active, along with Microsoft’s other responsible AI working groups, the Aether Committee and Responsible AI Strategy in Engineering.
“We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers,” Microsoft’s spokesperson said.
Emily Bender, a University of Washington expert on computational linguistics and ethical issues in natural-language processing, joined other critics tweeting to denounce Microsoft’s decision to dissolve the ethics and society team. Bender told Ars that, as an outsider, she thinks Microsoft’s decision was “short-sighted.” She added, “Given how difficult and important this work is, any significant cuts to the people doing the work is damning.”
Brief history of the ethics and society team
Microsoft began focusing on teams dedicated to exploring responsible AI back in 2017, CNBC reported. By 2020, that effort included the ethics and society team with a peak size of 30 members, Platformer noted. But as the AI race with Google heated up, Microsoft began moving the majority of the ethics and society team members into specific product teams last October. That left just seven people dedicated to implementing the ethics and society team’s “ambitious plans,” employees told Platformer.
It was too much work for a team that small, and Platformer reported that former team members said that Microsoft didn’t always act on their recommendations, such as mitigation strategies recommended for Bing Image Creator that might stop it from copying living artists’ brands. (Microsoft has disputed that claim, saying that it modified the tool before launch to address the team’s concerns.)
While the team was being reduced last fall, according to Platformer, Microsoft’s corporate vice president of AI, John Montgomery, said that there was great pressure to “take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed.” Employees warned Montgomery of “significant” concerns they had about potential negative impacts of this speed-based strategy, but Montgomery insisted that “the pressures remain the same.”
Even as the ethics and society team’s size dwindled, however, Microsoft told the team it wouldn’t be eliminated. The company announced a change on March 6, though, when the remnants of the team were told during a Zoom meeting that it was considered “business critical” to dissolve the team entirely.
Bender told Ars that the decision is particularly disappointing because Microsoft “managed to gather some really great people working on AI, ethics, and societal impact for technology.” She said that, for a while, it seemed like the team was “actually even fairly empowered at Microsoft.” But Bender said that with this move, Microsoft “basically says” that if the company perceives the ethics and society team’s recommendations “as contrary to what’s gonna make us money in the short term, then they gotta go.”
To experts like Bender, it seems like Microsoft is now less interested in funding a team dedicated to telling the company to slow down when AI models might carry risks—including legal risks. One employee told Platformer that they wondered what would happen both to the brand and to users now that there was seemingly no one to say “no” when potentially irresponsible designs were pushed to users.
“The worst thing is we’ve exposed the business to risk and human beings to risk,” one former employee told Platformer.
The shaky future of responsible AI
When the company relaunched Bing with AI, users quickly discovered that the Bing Chat tool had unexpected behaviors—generating conspiracy theories, spouting misinformation, and even seemingly slandering people. Until now, tech companies like Microsoft and Google have been trusted to self-regulate their releases of AI tools, identifying risks and mitigating harms. But Bender—who coauthored, with former Google AI ethics researcher Timnit Gebru, the paper criticizing the large language models that many AI tools depend on, which resulted in Gebru being fired—told Ars that “self-regulation as a model is not going to work.”
“There has to be external pressure” to invest in responsible AI teams, Bender told Ars.
Bender advocates for regulators to get involved at this point, if society wants more transparency from companies amid the “current wave of AI hype.” Otherwise, users risk jumping on the bandwagon to use popular tools—as they did with AI-powered Bing, which now has 100 million monthly active users—without a solid understanding of how those tools could harm them.
“I think that every user who encounters this needs to have a really clear idea of what it is that they're working with,” Bender told Ars. “And I don't see any companies doing a good job of that.”
Bender said it’s “frightening” that companies seem consumed by capitalizing on AI hype claiming that AI is “going to be as big and disruptive as the Internet was.” Instead, companies have a duty to “think about what could go wrong.”
To Bender, a better solution than depending on companies like Microsoft to do the right thing is for society to advocate for regulations—not to “micromanage specific technologies, but rather to establish and protect rights” in an enduring way, she tweeted.
Until there are proper regulations in place, more transparency about potential harms, and better information literacy among users, Bender recommends that users “never accept AI medical advice, legal advice, psychotherapy,” or other sensitive applications of AI.
“It strikes me as very, very short-sighted,” Bender said of the current AI hype.