Curated by THEOUTPOST
On Sun, 27 Apr, 4:00 PM UTC
6 Sources
Tech industry tried reducing AI's pervasive bias. Now Trump wants to end its 'woke AI' efforts
CAMBRIDGE, Mass. (AP) -- After retreating from their workplace diversity, equity and inclusion programs, tech companies could now face a second reckoning over their DEI work in AI products.

In the White House and the Republican-led Congress, "woke AI" has replaced harmful algorithmic discrimination as a problem that needs fixing. Past efforts to "advance equity" in AI development and curb the production of "harmful and biased outputs" are a target of investigation, according to subpoenas sent to Amazon, Google, Meta, Microsoft, OpenAI and 10 other tech companies last month by the House Judiciary Committee.

And the standard-setting branch of the U.S. Commerce Department has deleted mentions of AI fairness, safety and "responsible AI" in its appeal for collaboration with outside researchers. It is instead instructing scientists to focus on "reducing ideological bias" in a way that will "enable human flourishing and economic competitiveness," according to a copy of the document obtained by The Associated Press.

In some ways, tech workers are used to a whiplash of Washington-driven priorities affecting their work. But the latest shift has raised concerns among experts in the field, including Harvard University sociologist Ellis Monk, who several years ago was approached by Google to help make its AI products more inclusive.

Back then, the tech industry already knew it had a problem with the branch of AI that trains machines to "see" and understand images. Computer vision held great commercial promise but echoed the historical biases found in earlier camera technologies that portrayed Black and brown people in an unflattering light.

"Black people or darker skinned people would come in the picture and we'd look ridiculous sometimes," said Monk, a scholar of colorism, a form of discrimination based on people's skin tones and other features.

Google adopted a color scale invented by Monk that improved how its AI image tools portray the diversity of human skin tones, replacing a decades-old standard originally designed for doctors treating white dermatology patients.

"Consumers definitely had a huge positive response to the changes," he said.

Now Monk wonders whether such efforts will continue in the future. While he doesn't believe that his Monk Skin Tone Scale is threatened because it's already baked into dozens of products at Google and elsewhere -- including camera phones, video games and AI image generators -- he and other researchers worry that the new mood is chilling future initiatives and funding to make technology work better for everyone.

"Google wants their products to work for everybody, in India, China, Africa, et cetera. That part is kind of DEI-immune," Monk said. "But could future funding for those kinds of projects be lowered? Absolutely, when the political mood shifts and when there's a lot of pressure to get to market very quickly."

The Trump administration has cut hundreds of science, technology and health funding grants touching on DEI themes, but its influence on the commercial development of chatbots and other AI products is more indirect.

In investigating AI companies, Republican Rep. Jim Jordan, chair of the House Judiciary Committee, said he wants to find out whether former President Joe Biden's administration "coerced or colluded with" them to censor lawful speech.

Michael Kratsios, director of the White House's Office of Science and Technology Policy, said at a Texas event this month that Biden's AI policies were "promoting social divisions and redistribution in the name of equity."
The Trump administration declined to make Kratsios available for an interview but cited several examples of what he meant. One was a line from a Biden-era AI research strategy that said: "Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities."

Even before Biden took office, a growing body of research and personal anecdotes was attracting attention to the harms of AI bias.

One study showed self-driving car technology has a hard time detecting darker-skinned pedestrians, putting them in greater danger of getting run over. Another study asking popular AI text-to-image generators to make a picture of a surgeon found they produced a white man about 98% of the time, far higher than the real proportions even in a heavily male-dominated field.

Face-matching software for unlocking phones misidentified Asian faces. Police in U.S. cities wrongfully arrested Black men based on false face recognition matches. And a decade ago, Google's own photos app sorted a picture of two Black people into a category labeled as "gorillas."

Even government scientists in the first Trump administration concluded in 2019 that facial recognition technology was performing unevenly based on race, gender or age.

Biden's election propelled some tech companies to accelerate their focus on AI fairness. The 2022 arrival of OpenAI's ChatGPT added new priorities, sparking a commercial boom in new AI applications for composing documents and generating images and pressuring companies like Google to ease their caution and catch up.

Then came Google's Gemini AI chatbot -- and a flawed product rollout last year that would make it the symbol of "woke AI" that conservatives hoped to unravel.

Left to their own devices, AI tools that generate images from a written prompt are prone to perpetuating the stereotypes accumulated from all the visual data they were trained on. Google's was no different: when asked to depict people in various professions, it was more likely to favor lighter-skinned faces and men and, when women were chosen, younger women, according to the company's own public research.

Google tried to place technical guardrails to reduce those disparities before rolling out Gemini's AI image generator just over a year ago. It ended up overcompensating for the bias, placing people of color and women in inaccurate historical settings, such as answering a request for American founding fathers with images of men in 18th century attire who appeared to be Black, Asian and Native American.

Google quickly apologized and temporarily pulled the plug on the feature, but the outrage became a rallying cry taken up by the political right.

With Google CEO Sundar Pichai sitting nearby, Vice President JD Vance used an AI summit in Paris in February to decry the advancement of "downright ahistorical social agendas through AI," naming the moment when Google's AI image generator was "trying to tell us that George Washington was Black, or that America's doughboys in World War I were, in fact, women."

"We have to remember the lessons from that ridiculous moment," Vance declared at the gathering. "And what we take from it is that the Trump administration will ensure that AI systems developed in America are free from ideological bias and never restrict our citizens' right to free speech."
A former Biden science adviser who attended that speech, Alondra Nelson, said the Trump administration's new focus on AI's "ideological bias" is in some ways a recognition of years of work to address the algorithmic bias that can affect housing, mortgages, health care and other aspects of people's lives.

"Fundamentally, to say that AI systems are ideologically biased is to say that you identify, recognize and are concerned about the problem of algorithmic bias, which is the problem that many of us have been worried about for a long time," said Nelson, the former acting director of the White House's Office of Science and Technology Policy, who co-authored a set of principles to protect civil rights and civil liberties in AI applications.

But Nelson doesn't see much room for collaboration amid the denigration of equitable AI initiatives.

"I think in this political space, unfortunately, that is quite unlikely," she said. "Problems that have been differently named -- algorithmic discrimination or algorithmic bias on the one hand, and ideological bias on the other -- will be regrettably seen as two different problems."
The Trump administration is shifting focus from addressing harmful algorithmic discrimination to combating 'woke AI', potentially impacting tech companies' efforts to reduce bias in AI systems.
The tech industry's efforts to address bias in artificial intelligence (AI) systems are facing new challenges as the Trump administration shifts its focus from combating harmful algorithmic discrimination to targeting what it calls "woke AI." This change in policy direction has raised concerns among experts about the future of AI fairness initiatives.
The House Judiciary Committee, led by Republican Rep. Jim Jordan, has issued subpoenas to major tech companies, including Amazon, Google, Meta, Microsoft and OpenAI, along with 10 others. The committee aims to investigate whether the Biden administration "coerced or colluded with" these companies to censor lawful speech under the guise of advancing equity in AI development.
The U.S. Commerce Department's standard-setting branch has removed mentions of AI fairness, safety and "responsible AI" from its appeal for collaboration with outside researchers. Instead, it is now instructing scientists to focus on "reducing ideological bias" to "enable human flourishing and economic competitiveness."
Experts like Harvard University sociologist Ellis Monk, who previously worked with Google to improve AI inclusivity, are concerned about the potential impact of this policy shift on future AI fairness initiatives. While existing projects like the Monk Skin Tone Scale may not be immediately threatened, there are worries about reduced funding and support for similar projects in the future.
The article highlights several examples of AI bias that have occurred in recent years, including:
- Self-driving car technology that had a harder time detecting darker-skinned pedestrians.
- Text-to-image generators that depicted a surgeon as a white man about 98% of the time.
- Face-matching software for unlocking phones that misidentified Asian faces.
- Wrongful arrests of Black men in U.S. cities based on false face recognition matches.
- Google's photos app sorting a picture of two Black people into a category labeled "gorillas."
Tech companies have been working to address these biases, with efforts accelerating after Biden's election. However, the industry now faces pressure from the new administration's focus on "reducing ideological bias" rather than addressing systemic biases in AI systems.
As the political landscape shifts, there are growing concerns about a potential chilling effect on initiatives aimed at making AI technology more inclusive and equitable. The tension between rapid market deployment and ensuring fairness in AI systems remains a significant challenge for the tech industry.