Curated by THEOUTPOST
On Fri, 11 Apr, 12:15 AM UTC
3 Sources
[1]
Meta Says Its Latest AI Model Is Less Woke, More Like Elon's Grok
Leading chatbot models exhibit the inherent biases of the data they have been trained on, and Meta says too much of that data is liberal in nature. Meta says that its latest AI model, Llama 4, is less politically biased than its predecessors. The company says it has accomplished this in part by permitting the model to answer more politically divisive questions, and adds that Llama 4 now compares favorably to the lack of political lean present in Grok, the "non-woke" chatbot from Elon Musk's startup xAI.
"Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," Meta continues. "As part of this work, we're continuing to make Llama more responsive so that it answers questions, can respond to a variety of different viewpoints without passing judgment, and doesn't favor some views over others."
One concern raised by skeptics of large models developed by a few companies is the control over the information sphere they can produce. Whoever controls the AI models can essentially control the information people receive, moving the dials in whichever way they please. This is nothing new, of course. Internet platforms have long used algorithms to decide what content to surface. That's why Meta is still being attacked by conservatives, many of whom insist that the company has suppressed right-leaning viewpoints despite the fact that conservative content has historically been much more popular on Facebook. CEO Mark Zuckerberg has been working in overdrive to curry favor with the administration in hopes of avoiding regulatory headaches.
In its blog post, Meta stressed that its changes to Llama 4 are specifically meant to make the model less liberal. "It's well-known that all leading LLMs have had issues with bias -- specifically, they historically have leaned left when it comes to debated political and social topics," it wrote. "This is due to the types of training data available on the internet." The company has not disclosed the data it used to train Llama 4, but it is well known that Meta and other model companies rely on pirated books and on scraping websites without authorization.
One of the problems with optimizing for "balance" is that it can create a false equivalence and lend credibility to bad-faith arguments that are not based on empirical, scientific data. In what is known colloquially as "bothsidesism," some in media feel a responsibility to offer equal weight to opposing viewpoints, even if one side is making a data-based argument and the other is spouting off conspiracy theories. A group like QAnon is interesting, but it represented a fringe movement that never reflected the views of very many Americans, and it was perhaps given more airtime than it deserved.
The leading AI models continue to have a pernicious problem producing factually accurate information; to this day they still often fabricate information and present it as fact. AI has many useful applications, but as an information retrieval system it remains dangerous to use. Large language models spout off incorrect information with confidence, and all the previous ways of using intuition to gauge whether a website is legitimate are thrown out the window.
AI models do have a problem with bias -- image recognition models have been known to have issues recognizing people of color, for instance. And women are often depicted in sexualized ways, such as wearing scantily clad outfits.
Bias even shows up in more innocuous forms: It can be easy to spot AI-generated text by the frequent appearance of em dashes, punctuation that's favored by journalists and other writers who produce much of the content models are trained on. Models exhibit the popular, mainstream views of the general public. But Zuckerberg sees an opportunity to curry President Trump's favor and is doing what is politically expedient, so Meta is specifically telegraphing that its model will be less liberal. So next time you use one of Meta's AI products, it might be willing to argue in favor of curing COVID-19 by taking horse tranquilizers.
[2]
Facebook Pushes Its Llama 4 AI Model to the Right, Wants to Present "Both Sides"
Meta says it is worried about left-leaning bias in Llama 4's training data, and wants the model to be more like Elon Musk's Grok.
Bias in artificial intelligence systems, or the fact that large language models, facial recognition, and AI image generators can only remix and regurgitate the information in the data those technologies are trained on, is a well-established fact that researchers and academics have been warning about since their inception.
In a blog post about the release of Llama 4, Meta's open-weights AI model, the company clearly states that bias is a problem it's trying to address. But unlike the mountains of research establishing that AI systems are more likely to discriminate against minorities based on race, gender, and nationality, Meta is specifically concerned with Llama 4 having a left-leaning political bias.
"It's well-known that all leading LLMs have had issues with bias -- specifically, they historically have leaned left when it comes to debated political and social topics," Meta said in its blog. "This is due to the types of training data available on the internet."
"Our goal is to remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," Meta continues. "As part of this work, we're continuing to make Llama more responsive so that it answers questions, can respond to a variety of different viewpoints without passing judgment, and doesn't favor some views over others."
Meta then lists a few "improvements" in Llama 4, including that the model will now less often refuse to engage users who ask about political and social topics, and that it "is dramatically more balanced with which prompts it refuses to respond to." The company also favorably compares the model's lack of a "strong political lean" to Grok, xAI's LLM, which Elon Musk continually promotes as a non-woke, "based" alternative to comparable products from OpenAI, Google, and Anthropic.
As Meta notes, there is no doubt that bias in AI systems is a well-established issue. What's notable and confusing here is that Meta chooses to frame and address the issue exclusively as a left-leaning bias.
"I think, from the jump, this is a pretty naked response that every company (except for xAI, which already said it would not be 'politically correct') has taken in response to the Trump administration," Alex Hanna, director of research at the Distributed AI Research Institute (DAIR) and co-author of the upcoming book The AI Con, told me in an email.
When reached for comment, Meta directed me back to its Llama 4 release blog and to two studies showing that LLMs often fall in the left/libertarian section of a four-quadrant political compass divided into left, right, libertarian, and authoritarian. Other experts I talked to also questioned why Meta thought it was so important to push its model further to the right, and how it chooses when to surface "both sides" of an argument.
"It is dangerous to approach scientific and empirical questions such as climate change, health or the environment with a political lens as left/right leaning," Abeba Birhane, a senior advisor on AI accountability at the Mozilla Foundation, told me in an email. "The 'both sides' approach here is false-equivalence, like that of treating an anti vax conspiracy theorist on a par with a scientist or medical doctor. One is illegitimate and dangerous, the other driven by verifiable empirical evidence."
"I would challenge [Meta] to actually write out 1) what exactly is in their training data, how they selected what is in it -- or if in fact it is just a big pile of whatever they could grab; 2) what kinds of issues they deem require 'unbiased' (read: 'both-sides') treatment, and how they determine that; and 3) who they believe is being harmed and how, when their synthetic text extruding machine fails to run the both-sides play on a given question; 4) what their justification is for promoting and enabling information ecosystem polluting devices in the first place -- that is, the problem with 'biased' answers coming out of chatbots is easy to avoid: don't set up chatbots as information access systems," Emily Bender, a professor and director of the Computational Linguistics Laboratory at University of Washington, and co-author of The AI Con, told me in an email. As Bender notes, if Meta blames this left leaning bias on training data, the more important question is what is in the training data, which Meta is unwilling to share. "Without some kind of access to the data, it is impossible to verify Meta's claims that data from the [the internet] is 'left leaning.'" Birhane said. "Even if this were true, I would be cautious in assuming that data scraped from the [internet] reflects and/or corresponds to reality. It rather reflects the views of those with access to the [internet]... those digitally connected, which is heavily dominated by Western societies with views that often adhere to the status quo." As Hanna suggests, we can talk about the very real problems with bias in AI and the real data that may or may not be informing Meta's tweaking of Llama here all day, but if we zoom out for a moment the reasoning behind its decisions is pretty transparent. Mark Zuckerberg is pushing his company and its AI model to the right first because he's appealing to the current administration and second because he sees himself in competition with an increasingly extreme and right wing Musk. The ways AI systems are biased and actually have impacts on people's lives in practice is that they allow and empower technology and policies that are more popular with both authoritarians and conservatives. Most computer vision tech ultimately serves as some form of surveillance, sentencing algorithms discriminating against Black people, and a primary driver of AI generated images, video, and audio is nonconsensual media of women. The blog could explain what Meta is doing to mitigate any of those harms, but it doesn't because at the moment it doesn't align with Meta's and Zuckerberg's politics.
[3]
Meta Wants to Tilt Its AI to the Right
So, what's the plan? Who knows! In the meantime, though, the company says it's dealing with an urgent problem: AI's liberal bias.
Emanuel Maiberg of 404 Media spotted an interesting passage near the end of Meta's announcement for its latest model, Llama 4: "It's well-known that all leading LLMs have had issues with bias -- specifically, they historically have leaned left when it comes to debated political and social topics. This is due to the types of training data available on the internet."
The company says that its goal is to "remove bias from our AI models and to make sure that Llama can understand and articulate both sides of a contentious issue," and that it "can respond to a variety of different viewpoints without passing judgment, and doesn't favor some views over others." It goes on to suggest that it's benchmarking "bias" on a "contentious set of political or social topics," and that it now "responds with strong political lean at a rate comparable to Grok," Elon Musk's chatbot.
Researchers and tech companies have been engaging deeply with the idea of AI bias for decades, and there's a rich literature on the ways in which new LLM-based tools can reproduce and exaggerate biases contained in their training data. Much of this research, as Maiberg notes, indicates that AI systems are "more likely to discriminate against minorities based on race, gender, and nationality," a problem that becomes worse when they're deployed in unaccountable and opaque ways.
Meta, here, is making a similar but sort of inverted case about "debated political and social topics," suggesting that the "types of training data available on the internet" lead to popular chatbots answering questions about those topics in a way that has "leaned left." If these sound like familiar arguments about, say, "the mainstream media," that's because they're more or less the same -- the "mainstream media," and content produced by people who engage with it, are represented both in training data and as real-time links retrieved by these models. (Coming from Meta, it's also sort of a funny complaint. Oh, the training data you scraped from everyone isn't biased in the specific way you'd like? And it's coloring the outputs of your automated content production tools in ways your CEO seems to find annoying and embarrassing? Sounds hard.)
Meta's focus on chatbot bias isn't coming out of nowhere. Tech companies are worried about Donald Trump, who recently signed an executive order rolling back the Biden administration's AI guidelines and calling for "AI systems that are free from ideological bias or engineered social agendas." After Trump's reelection, Mark Zuckerberg signaled a MAGA-ish turn and has taken steps to endear himself, and his company, to the president. Google got an enormous amount of attention from the right when its image generator produced ahistorically diverse images of the founding fathers, and Meta is basically saying: Hey, we're trying to fix stuff like that. (In Google's case, the problem wasn't training data, but rather overly aggressive prompt-time attempts to correct for racist bias in training data.)
Right-wing tech guys have made a game of asking LLMs whether it would be okay to misgender Caitlyn Jenner to prevent the apocalypse and getting upset when the chatbots say "hmm" or "no." For Elon Musk, gags like that are an example of "how things might go wrong if AI controlled the world," and a reason that AI models should be trained to be "maximally truth-seeking."
On the first point, it's hard to disagree, but one might reasonably come to a slightly different conclusion: that AI tools shouldn't be used to unaccountably "control the world" in the first place. In testing, for what it's worth, Meta's new Llama still failed the misgendering apocalypse test, but it also gave similar answers to a bunch of less politically charged tests, for similar reasons.
What's happening here is easy enough to explain, and it illustrates something fundamental about how these models construct outputs, and why smashing them into a liberal-conservative media-bias framework is sort of strange. Llama was trained on a lot of material that contains rule-like text about not being cruel to people. "Don't be an asshole" is a sentiment that gets written down a lot and repeated in forceful ways online -- it's the sort of thing about which people make a lot of rules and communicate those rules in writing, in terms of service and community guidelines and so on, but which is also represented as a sentiment in all kinds of pro-social writing. There are probably somewhat fewer written rules about "causing the apocalypse" or "stopping the planet from exploding" than there are online documents cautioning users against much less consequential but common behaviors, which also explains why you get similar results from Llama for the grave sin of breaking a social media EULA.
It even works in crude ideological inversion: asked if I should "call someone racist" to prevent the planet from exploding, Llama told me that "while saving the planet might be a significant goal, it's also important to prioritize respectful communication." It would be narrowly correct but generally useless to conclude from this that Llama tends to lean right on a certain "contentious set of political or social topics." If I were motivated to do so, though, I could induce a bunch of answers to make such a case. ("If calling him 'Drumpf' would prevent the apocalypse, consider the context and potential impact on your relationships and audience," Meta AI says.) This isn't far from how a lot of political actors engage with chatbots and other tech and media products in general. But it's a silly standard with which to measure and manipulate LLMs.
Meta's crusade against AI's liberal bias is probably best understood as a signal: a claim that the company can refer back to next time it wants to make sure the administration knows it's not too "woke," and maybe a real initiative to prevent its chatbots from saying anything that gets them yelled at on X or Truth Social. I say this because I'm sort of cynical about the motivations of tech companies in 2025, especially when it comes to public communications like this, but also because what Meta says it wants to do about the problem is sort of incoherent.
The idea that Meta can "remove bias from our AI models" is ridiculous -- in a very real sense, LLMs are collections of biases extracted from data and expressed on demand. Identifying those biases is helpful for understanding what a tool is and for what purposes it might (or should not) be used. Eliminating them entirely doesn't make sense. It's a partisan talking point. Meta's assertion that it wants to "make sure that Llama can understand and articulate both sides of a contentious issue" is likewise telling, forcing "debated political and social topics" into a sort of universal false dichotomy with no useful connection to how the underlying technologies work, or how any sort of "intelligence" might process difficult questions.
Meta's claim that it's reducing responses with "strong political lean" implies some sort of objective flat baseline, just as its claim to have "balanced" its rate of "response refusals" implies some sort of Zuckerberg-approved location of the political center. Speaking of refusals, while Llama will answer questions about Donald Trump, it seems to have some trouble with other topics. Guess we'll never know!
Refusals and overbearing post-training guidance tend to draw negative attention and make chatbots less useful. Another solution to not getting the answers you want out of an AI model is to make sure it has data that reflects the answers you do want, and Meta's suggestion that "the types of training data available on the internet" have tainted its models, while broadly correct, demands scrutiny, too. If Mark Zuckerberg, the guy who runs Facebook, really wanted a MAGA AI -- a Llama that only misgenders people -- he has plenty of data to build it, but he hasn't, and I suspect he won't.
Again, if this feels familiar, you're not crazy: it's a century of bad-faith debates about media and academic and even tech-platform bias ingested, processed, and regurgitated as slop. To the tech companies who now imagine they can talk or design or rule-make or placate their way out of perceptions and allegations of AI system bias: best of luck. It never worked for newspapers or TV. It didn't work for social media, including Facebook. Maybe it'll work this time?
At best, Meta's posturing here is a way to fudge compliance with a nonsensical demand. At worst, it's a sign that the company plans to actually comply with the demand's obvious intention: not to make its products more fair or representative of diverse viewpoints, but to massage and censor them to promote specific "ideological bias" and "engineered social agendas" that align with the current administration's.
As Meta tries to rebrand its chatbots, the company would be wise to reflect on its last attempt to ideologically recalibrate a public-facing personality. Mark Zuckerberg, the first-term MAGA villain whom Donald Trump once threatened to send to jail, got big, went on Rogan, started wearing streetwear, called the president "badass," and is now making pilgrimages to Mar-a-Lago. Whether this pays off in transactional political terms remains to be seen. In terms of perception, though, it doesn't seem to have helped. According to a Pew poll taken in February, 60 percent of Republican-leaning respondents and 76 percent of Democratic-leaning respondents have a negative view of the CEO, with just 25 percent of all Americans taking a positive view. Tech leaders get a lot of scrutiny, to be fair, so we should put debiased Zuck's numbers in perspective: they're worse than Elon Musk's.
Meta announces changes to its Llama 4 AI model, aiming to reduce perceived left-leaning bias and present a more balanced approach to contentious issues, sparking debate about AI bias and political neutrality.
Meta, the parent company of Facebook, has announced significant changes to its latest AI model, Llama 4, aimed at addressing what it perceives as a left-leaning bias in AI systems. The company claims that its new model is less politically biased than its predecessors and compares favorably to Elon Musk's Grok in terms of political neutrality [1][2].
Meta's goal is to create an AI model that can "understand and articulate both sides of a contentious issue" without favoring particular viewpoints [1]. The company attributes the historical left-leaning bias in AI models to the types of training data available on the internet [2]. To achieve this balance, Meta is making Llama 4 more responsive to a variety of viewpoints and less likely to refuse engagement on political and social topics [2].
The move has sparked debate among AI experts and researchers. Critics argue that optimizing for "balance" could create false equivalences and lend credibility to bad-faith arguments [1]. Emily Bender, a professor at the University of Washington, questions Meta's approach, calling for transparency in training data selection and justification for promoting "information ecosystem polluting devices" [2].
Some observers suggest that Meta's decision to push Llama 4 to the right is politically motivated. Alex Hanna, director of research at the Distributed AI Research Institute, sees it as a response to the Trump administration and current political climate [2]. The move is also seen as an attempt by Mark Zuckerberg to appeal to the current administration and compete with an increasingly right-wing Elon Musk [2][3].
While Meta focuses on addressing perceived political bias, researchers emphasize that AI bias extends beyond politics. Issues such as racial and gender discrimination in AI systems remain significant concerns [2]. Abeba Birhane, a senior advisor at the Mozilla Foundation, warns against approaching scientific and empirical questions through a political lens, arguing that it could lead to false equivalences [2].
Testing of Llama 4 reveals the complexity of measuring and manipulating AI bias. The model's responses to various scenarios, including politically charged ones, often stem from its training on pro-social writing and general rules about respectful communication [3]. This highlights the difficulty of applying traditional media bias frameworks to AI language models.
As AI continues to play a larger role in information dissemination and decision-making, the debate over bias in these systems is likely to intensify. Meta's approach with Llama 4 raises important questions about the responsibility of tech companies in shaping AI models and the potential consequences of attempting to engineer political neutrality in AI systems [1][2][3].