Tech industry tried reducing AI’s pervasive bias. Now Trump wants to end its ‘woke AI’ efforts

By MATT O'BRIEN, AP Technology Writer

CAMBRIDGE, Mass. (AP) – After retreating from their workplace diversity, equity and inclusion programs, tech companies could now face a second reckoning over their DEI work in AI products.

In the White House and the Republican-led Congress, "woke AI" has replaced "harmful algorithmic discrimination" as a problem that needs fixing. Past efforts to "advance equity" in AI development and curb the production of "harmful and biased outputs" are a target of investigation, according to subpoenas sent to Amazon, Google, Meta, Microsoft, OpenAI and other tech companies last month by the House Judiciary Committee.

And the standard-setting branch of the U.S. Commerce Department has deleted mentions of AI fairness, safety and "responsible AI" in its call for collaboration with outside researchers. It is instead instructing scientists to focus on "reducing ideological bias" in a way that will "enable human flourishing and economic competitiveness," according to a copy of the document obtained by The Associated Press.

In some ways, tech workers are used to a whiplash of Washington-driven priorities affecting their work. But the latest shift has raised concerns among experts in the field, including Harvard University sociologist Ellis Monk, who several years ago was approached by Google to help make its AI products more inclusive.

Back then, the tech industry already knew it had a problem with the branch of AI that trains machines to "see" and understand images. Computer vision held great commercial promise but echoed the historical biases found in earlier camera technologies that portrayed Black and brown people in an unflattering light.

"Black people or darker skinned people would come in the picture and we'd look ridiculous sometimes," said Monk, a scholar of colorism, a form of discrimination based on people's skin tones and other features.

Google adopted a color scale invented by Monk that improved how its AI image tools portray the diversity of human skin tones, replacing a decades-old standard originally designed for doctors treating white dermatology patients.

"Consumers definitely had a huge positive response to the changes," he said.

Now Monk wonders whether such efforts will continue in the future. While he doesn't believe that his Monk Skin Tone Scale is threatened, because it's already baked into dozens of products at Google and elsewhere, including camera phones, video games and AI image generators, he and other researchers worry that the new mood is chilling future initiatives and funding to make technology work better for everyone.

"Google wants their products to work for everybody, in India, China, Africa, et cetera. That part is kind of DEI-immune," Monk said. "But could future funding for those kinds of projects be lowered? Absolutely, when the political mood shifts and when there's a lot of pressure to get to market very quickly."

Trump has cut hundreds of science, technology and health funding grants touching on DEI themes, but the influence on commercial development of chatbots and other AI products is more indirect. In investigating AI companies, Republican Rep. Jim Jordan, chair of the judiciary committee, said he wants to find out whether former President Joe Biden's administration "coerced or colluded with" them to censor lawful speech.

Michael Kratsios, director of the White House's Office of Science and Technology Policy, said at a Texas event this month that Biden's AI policies were "promoting social divisions and redistribution in the name of equity."

The Trump administration declined to make
Kratsios available for an interview but cited several examples of what he meant. One was a line from a Biden-era AI research strategy that said: "Without proper controls, AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes for individuals and communities."

Even before Biden took office, a growing body of research and personal anecdotes was attracting attention to the harms of AI bias.

One study showed self-driving car technology has a hard time detecting darker-skinned pedestrians, putting them in greater danger of getting run over. Another study asking popular AI text-to-image generators to make a picture of a surgeon found they overwhelmingly produced a white man, far more often than the real proportions even in a heavily male-dominated field. Face-matching technology for unlocking phones misidentified Asian faces, and police in U.S. cities wrongfully arrested Black men based on false face recognition matches. And a decade ago, Google's own photos app sorted a picture of two Black people into a category labeled as "gorillas."

Even government scientists in the first Trump administration concluded that facial recognition technology was performing unevenly based on race, gender or age.

Biden's election propelled many tech companies to accelerate their focus on AI fairness. The arrival of OpenAI's ChatGPT added new priorities, sparking a commercial boom in new AI applications for composing documents and generating images and pressuring companies like Google to ease their caution and catch up.

Then came Google's Gemini AI chatbot, and a flawed product rollout last year that would make it the symbol of "woke AI" that conservatives hoped to unravel.

Left to their own devices, AI tools that generate images from a written prompt are prone to perpetuating the stereotypes accumulated from all the visual data they were trained on. Google's was no different: when asked to depict people in various professions, it was more likely to favor lighter-skinned faces and men, and, when women were chosen, younger women, according to the company's own public research.

Google tried to place technical guardrails to reduce those disparities before rolling out Gemini's AI image generator just over a year ago. It ended up overcompensating for the bias, placing people of color and women in inaccurate historical settings, such as answering a request for American founding fathers with images of men in 18th century attire who appeared to be Black, Asian and Native American. Google quickly apologized and temporarily pulled the plug on the feature, but the outrage became a rallying cry taken up by the political right.

With Google CEO Sundar Pichai sitting nearby, Vice President JD Vance used an AI summit in Paris in February to decry the advancement of "downright ahistorical social agendas" through AI, naming the moment when Google's AI image generator was "trying to tell us that George Washington was Black, or that America's doughboys in World War I were, in fact, women."

"We have to remember the lessons from that ridiculous moment," Vance said at the gathering. "And what we take from it is that the Trump administration will ensure that AI systems developed
in America are free from ideological bias and never restrict our citizens' right to free speech."

A former Biden science adviser who attended that speech, Alondra Nelson, said the Trump administration's new focus on AI's "ideological bias" is, in some ways, a recognition of years of work to address algorithmic bias that can affect housing, mortgages, health care and other aspects of people's lives.

"Fundamentally, to say that AI systems are ideologically biased is to say that you identify, recognize and are concerned about the problem of algorithmic bias, which is the problem that many of us have been worried about for a long time," said Nelson, the former acting director of the White House's Office of Science and Technology Policy, who co-authored a set of principles to protect civil rights and civil liberties in AI applications.

But Nelson doesn't see much room for collaboration amid the denigration of equitable AI initiatives.

"I think in this political space, unfortunately, that is quite unlikely," she said. "Problems that have been differently named, algorithmic discrimination or algorithmic bias on the one hand, and ideological bias on the other, will be regrettably seen as two different problems."