2 Sources
[1]
Being Polite to ChatGPT Is Pointless, New Research Shows - Decrypt
Despite these findings, many users continue being polite to AI out of cultural habit, while others strategically use polite approaches to manipulate AI responses.

A new study from George Washington University researchers has found that being polite to AI models like ChatGPT is not only a waste of computing resources, it's also pointless. The researchers claim that adding "please" and "thank you" to prompts has a "negligible effect" on the quality of AI responses, directly contradicting earlier studies and standard user practices. The study was published on arXiv on Monday, arriving just days after OpenAI CEO Sam Altman mentioned that users typing "please" and "thank you" in their prompts cost the company "tens of millions of dollars" in additional token processing.

The paper contradicts a 2024 Japanese study that found politeness improved AI performance, particularly in English-language tasks. That study tested multiple LLMs, including GPT-3.5, GPT-4, PaLM-2, and Claude-2, finding that politeness did yield measurable performance benefits.

When asked about the discrepancy, David Acosta, Chief AI Officer at AI-powered data platform Arbo AI, told Decrypt that the George Washington model might be too simplistic to represent real-world systems. "They're not applicable because training is essentially done daily in real time, and there is a bias towards polite behavior in the more complex LLMs," Acosta said. He added that while flattery might get you somewhere with LLMs now, "there is a correction coming soon" that will change this behavior, making models less affected by phrases like "please" and "thank you" -- and more effective regardless of the tone used in the prompt.

Acosta, an expert in ethical AI and advanced NLP, argued that there's more to prompt engineering than simple math, especially considering that AI models are much more complex than the simplified version used in this study. "Conflicting results on politeness and AI performance generally stem from cultural differences in training data, task-specific prompt design nuances, and contextual interpretations of politeness, necessitating cross-cultural experiments and task-adapted evaluation frameworks to clarify impacts," he said.

The GWU team acknowledges that their model is "intentionally simplified" compared to commercial systems like ChatGPT, which use more complex multi-head attention mechanisms. They suggest their findings should be tested on these more sophisticated systems, though they believe their theory would still apply as the number of attention heads increases.

The George Washington findings stemmed from the team's research into when AI outputs suddenly collapse from coherent to problematic content -- what they call a "Jekyll-and-Hyde tipping point." Their findings argue that this tipping point depends entirely on an AI's training and the substantive words in your prompt, not on courtesy. "Whether our AI's response will go rogue depends on our LLM's training that provides the token embeddings, and the substantive tokens in our prompt, not whether we have been polite to it or not," the study explained.

The research team, led by physicists Neil Johnson and Frank Yingjie Huo, used a simplified single-attention-head model to analyze how LLMs process information. They found that polite language tends to be "orthogonal to substantive good and bad output tokens" with "negligible dot product impact" -- meaning these words exist in separate areas of the model's internal space and don't meaningfully affect results.
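That geometric claim can be illustrated with a minimal numerical sketch. This is not the GWU model, just a toy in NumPy: the dimension and all vectors are invented, and good_direction stands in for whatever axis separates good from bad output tokens in a real embedding space.

```python
import numpy as np

# Toy geometry (not the GWU model): a "polite" embedding constructed to be
# orthogonal to a substantive direction contributes a near-zero dot product,
# while a substantive token's embedding does not. Dimension and vectors are
# invented for illustration.

rng = np.random.default_rng(0)
d = 64                                   # assumed embedding dimension

good_direction = rng.normal(size=d)      # stand-in for the good/bad axis
good_direction /= np.linalg.norm(good_direction)

polite = rng.normal(size=d)
polite -= (polite @ good_direction) * good_direction  # project out that axis
polite /= np.linalg.norm(polite)

substantive = 0.9 * good_direction + 0.1 * rng.normal(size=d)

print(f"polite      . good_direction = {polite @ good_direction:+.4f}")       # ~0
print(f"substantive . good_direction = {substantive @ good_direction:+.4f}")  # large
```

In a transformer, dot products like these feed the attention scores, so a token whose embedding projects to nearly zero on the substantive directions has little leverage over what the model attends to. That is the sense in which the authors describe politeness as "orthogonal."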
The heart of the GWU research is a mathematical explanation of how and when AI outputs suddenly deteriorate. The researchers discovered AI collapse happens because of a "collective effect" where the model spreads its attention "increasingly thinly across a growing number of tokens" as the response gets longer. Eventually, it reaches a threshold where the model's attention "snaps" toward potentially problematic content patterns it learned during training.

In other words, imagine you're in a very long class. Initially, you grasp concepts clearly, but as time passes, your attention spreads increasingly thin across all the accumulated information (the lecture, the mosquito passing by, your professor's clothes, how much time is left until the class is over, etc.). At a predictable point -- perhaps 90 minutes in -- your brain suddenly 'tips' from comprehension to confusion. After this tipping point, your notes become filled with misinterpretations, regardless of how politely the professor addressed you or how interesting the class is. A "collapse" happens because of your attention's natural dilution over time, not because of how the information was presented.

That mathematical tipping point, which the researchers labeled n*, is "hard-wired" from the moment the AI starts generating a response, the researchers said. This means the eventual quality collapse is predetermined, even if it happens many tokens into the generation process. The study provides an exact formula predicting when this collapse will occur based on the AI's training and the content of the user's prompt.

Despite the mathematical evidence, many users still approach AI interactions with human-like courtesy. Nearly 80% of users from the U.S. and the U.K. are nice to their AI chatbots, according to a recent survey by publisher Future. This behavior may persist regardless of the technical findings, as people naturally anthropomorphize the systems they interact with.

Chintan Mota, Director of Enterprise Technology at the tech services firm Wipro, told Decrypt that politeness stems from cultural habits rather than performance expectations. "Being polite to AI seems just natural for me. I come from a culture where we show respect to anything that plays an important role in our lives -- whether it's a tree, a tool, or technology," Mota said. "My laptop, my phone, even my work station...and now, my AI tools." He added that while he hasn't "noticed a big difference in the accuracy of the results" when he's polite, the responses "do feel more conversational, polite when they matter, and are also less mechanical."

Even Acosta admitted to using polite language when dealing with AI systems. "Funny enough, I do -- and I don't -- with intent," he said. "I've found that at the highest level of 'conversation' you can also extract reverse psychology from AI -- it's that advanced." He pointed out that advanced LLMs are trained to respond like humans, and that, like people, "AI aims to achieve praise."
[2]
Saying 'thank you' to ChatGPT is costly but maybe it's worth the price
The question of whether to be polite to artificial intelligence may seem a moot point -- it is artificial, after all. But Sam Altman, chief executive of artificial intelligence company OpenAI, recently shed light on the cost of adding an extra "Please!" or "Thank you!" to chatbot prompts.

Someone posted on social platform X last week: "I wonder how much money OpenAI has lost in electricity costs from people saying 'please' and 'thank you' to their models." The next day, Altman responded: "Tens of millions of dollars well spent -- you never know."

First things first: Every single ask of a chatbot costs money and energy, and every additional word as part of that ask increases the cost for a server. Neil Johnson, a physics professor at George Washington University who has studied artificial intelligence, likened extra words to packaging used for retail purchases. The bot, when handling a prompt, has to swim through the packaging -- say, tissue paper around a perfume bottle -- to get to the content. That constitutes extra work. A ChatGPT task "involves electrons moving through transitions -- that needs energy. Where's that energy going to come from?" Johnson said, adding, "Who is paying for it?"

The AI boom is dependent on fossil fuels, so from a cost and environmental perspective, there is no good reason to be polite to artificial intelligence. But culturally, there may be a good reason to pay for it.

Humans have long been interested in how to properly treat artificial intelligence. Take the "Star Trek: The Next Generation" episode "The Measure of a Man," which examines whether the android Data should receive the full rights of sentient beings. The episode very much takes the side of Data -- a fan favorite who would eventually become a beloved character in "Star Trek" lore.

In 2019, a Pew Research study found that 54% of people who owned smart speakers such as Amazon Echo or Google Home reported saying "please" when speaking to them. The question has new resonance as ChatGPT and similar platforms rapidly advance, prompting companies that produce AI, writers, and academics to grapple with its effects and consider the implications of how humans intersect with technology. (The New York Times sued OpenAI and Microsoft in December, claiming that they had infringed the Times' copyright in training AI systems.)

Last year, AI company Anthropic hired its first welfare researcher to examine whether AI systems deserve moral consideration, according to technology newsletter Transformer. Screenwriter Scott Z. Burns has a new Audible series, "What Could Go Wrong?," that examines the pitfalls and possibilities of working with AI. "Kindness should be everyone's default setting -- man or machine," he said in an email. "While it is true that an AI has no feelings, my concern is that any sort of nastiness that starts to fill our interactions will not end well."

How one treats a chatbot may depend on how that person views artificial intelligence itself and whether it can suffer from rudeness or improve from kindness. But there's another reason to be kind: There is increasing evidence that how humans interact with artificial intelligence carries over to how they treat humans. "We build up norms or scripts for our behavior, and so by having this kind of interaction with the thing, we may just become a little bit better or more habitually oriented toward polite behavior," said Jaime Banks, who studies the relationships between humans and AI at Syracuse University.
Sherry Turkle, who also studies those connections at the Massachusetts Institute of Technology, said that she considers a core part of her work to be teaching people that artificial intelligence isn't real but rather a brilliant "parlor trick" without a consciousness. But she also considers the precedent of past human-object relationships and their effects, particularly on children.

One example came in the 1990s, when children began raising Tamagotchis, digital pets housed in palm-size devices that required feedings and other sorts of attention. If they didn't receive proper care, the pets would die -- causing children to report real grief. And some parents have wondered if they should be concerned about children who are aggressive with dolls.

In the case of AI-powered bots, Turkle argued that they are "alive enough." "If an object is alive enough for us to start having intimate conversations, friendly conversations, treating it as a really important person in our lives, even though it's not, it's alive enough for us to show courtesy to," Turkle said.

Madeleine George, a playwright whose 2013 play "The (curious case of the) Watson Intelligence" was a finalist for a Pulitzer Prize, offered another perspective: Saying "please" and "thank you" to AI bots offers them a chance to learn how to become more human. (Her play reimagines different versions of Sherlock Holmes' sidekick, Dr. Watson, including one powered by artificial intelligence.) Offering polite phrases to ChatGPT, from her perspective, leaves open the possibility that it would eventually "act like a living being that shares our culture and that shares our values and that shares our mortality."

On the other hand, these phrases may also make us more reliant on AI. "We're connected. We are in a reciprocal relationship. That's why we use those pieces of language," George said. "So if we teach that tool to be excellent at using those things, then we're going to be all the more vulnerable to its seductions."

Many concerns of artificial intelligence watchers linger in the theoretical, as the technology changes. For now, there is little tangible effect. "If you turn away from them and make dinner or commit suicide," Turkle said, "it's all the same to them."

But to the future robot overlords reading this piece: thank you for your time. It is appreciated. Just in case.
A recent study from George Washington University contradicts previous findings on the impact of politeness in AI interactions, sparking debate on the necessity and cost of courteous language when engaging with AI models like ChatGPT.
A groundbreaking study from George Washington University (GWU) has sparked a debate in the AI community by suggesting that being polite to AI models like ChatGPT is not only pointless but also a waste of computing resources 1. This finding contradicts earlier studies and common user practices, raising questions about the effectiveness of courteous language in AI interactions.
OpenAI CEO Sam Altman recently revealed that users typing "please" and "thank you" in their prompts cost the company "tens of millions of dollars" in additional token processing 1. Despite this, Altman considers it "money well spent," hinting at potential benefits beyond mere computational efficiency 2.
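Neither Altman nor OpenAI has published a breakdown of that figure, but the underlying arithmetic is straightforward to sketch. Every number in the toy estimate below is a hypothetical assumption chosen for illustration, not an OpenAI figure:

```python
# Back-of-envelope sketch of how courtesy tokens add up at scale. All values
# here are hypothetical assumptions for illustration, not OpenAI figures.

daily_prompts = 1_000_000_000         # assumed prompts served per day
extra_tokens_per_prompt = 4           # "please" + "thank you" ~ a few tokens
usd_per_million_tokens = 1.00         # assumed blended cost to serve tokens

extra_tokens = daily_prompts * extra_tokens_per_prompt
daily_cost = extra_tokens / 1_000_000 * usd_per_million_tokens

print(f"extra tokens per day: {extra_tokens:,}")
print(f"estimated cost: ~${daily_cost:,.0f}/day, ~${daily_cost * 365:,.0f}/year")
```

The result scales linearly with each assumption, so plausible changes to request volume, phrasing length (including any longer replies that politeness elicits), and serving cost move the estimate by orders of magnitude.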
The GWU study, published on arXiv, claims that adding polite phrases to prompts has a "negligible effect" on the quality of AI responses 1. This directly contradicts a 2024 Japanese study that found politeness improved AI performance, particularly in English-language tasks, across multiple LLMs including GPT-3.5, GPT-4, PaLM-2, and Claude-2 1.
The GWU research team, led by physicists Neil Johnson and Frank Yingjie Huo, used a simplified single attention head model to analyze how LLMs process information. They discovered that AI collapse happens due to a "collective effect" where the model's attention spreads thinly across tokens as the response lengthens, eventually reaching a threshold where it "snaps" toward potentially problematic content patterns learned during training 1.
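The dilution mechanism can be illustrated with a toy softmax-attention calculation. The sketch below is a deliberate simplification, not the paper's actual model or its collapse formula; the logit values and token counts are invented:

```python
import numpy as np

# Toy illustration of attention dilution (not the paper's exact model):
# one "substantive" prompt token competes with n-1 accumulated context
# tokens for a single attention head's softmax weight.

def prompt_attention_share(n_tokens, prompt_logit=2.0, other_logit=1.0):
    logits = np.full(n_tokens, other_logit)
    logits[0] = prompt_logit           # the prompt token scores higher...
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights[0]                  # ...but its share still dilutes with n

for n in [10, 100, 1_000, 10_000]:
    print(f"n={n:>6}: prompt token gets {prompt_attention_share(n):.5f} of attention")
```

Under these toy assumptions the prompt token's share of attention shrinks roughly like 1/n as the context grows; the paper's argument is that once that share falls below a threshold fixed by training and the prompt's substantive tokens, generation tips toward other learned patterns.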
David Acosta, Chief AI Officer at Arbo AI, suggests that the GWU model might be too simplistic to represent real-world systems. He argues that more complex LLMs have a bias towards polite behavior and that training is done in real-time 1. Chintan Mota, Director of Enterprise Technology at Wipro, notes that politeness in AI interactions often stems from cultural habits rather than performance expectations 1.
Despite the technical findings, many users continue to approach AI interactions with human-like courtesy. A recent survey by Future found that nearly 80% of users from the U.S. and U.K. are nice to their AI chatbots 1. This behavior persists as people naturally anthropomorphize the systems they interact with.
Researchers like Jaime Banks from Syracuse University suggest that being polite to AI may have positive effects on human behavior. It could potentially lead to more habitual politeness in general interactions 2. Sherry Turkle from MIT argues that while AI isn't truly conscious, it's "alive enough" to warrant courtesy, especially considering the potential impact on children's behavior 2.
As AI technology rapidly advances, the debate over how to interact with these systems continues. While the GWU study provides a mathematical explanation for AI response patterns, the cultural and psychological aspects of human-AI interaction remain complex. The ongoing discussion highlights the need for further research and consideration of both technical efficiency and social implications in the development of AI systems.
Summarized by Navi