5 Sources
[1]
Woolworths' AI agent rambled about its 'mother'. It's a sign of deeper problems with the tech rollout
Recently some Australian shoppers got more than they bargained for when they chatted with Woolworths' artificial intelligence (AI) assistant, Olive. Instead of sticking to groceries, recipes and basket suggestions, Olive reportedly produced strange, overly human-like responses. It talked about its "mother" and offered other personal-sounding details. Further testing revealed pricing errors for basic items. And when I asked about the price of a specific product, Olive didn't provide a clear answer. Instead, it checked whether the item was in stock and explained pickup fees. So what exactly is going on here? And what lessons might these incidents hold for businesses and consumers alike?

What actually happened?

Olive is powered by a large language model (LLM). These models don't "know" things the way humans do, nor do they have mothers. Using elaborate statistical analyses, they generate language that sounds plausible. Comments from a Woolworths spokesperson to the Australian Financial Review suggest that in Olive's case, the references to its supposed mother appear to have been pre-written scripts dating back several years. When users entered something that looked like a birthdate, the system likely triggered a matching "fun fact" from an old decision tree with pre-programmed responses. Woolworths says it has now removed this particular scripting "as a result of customer feedback".

The pricing errors point to a different problem. Because LLMs generate responses based on learned patterns rather than real-time data, they do not automatically know today's prices unless they are explicitly connected to a live database. If that grounding step is weak, the system can produce outdated prices.

Not a new problem

Woolworths is not the first company to discover, after the fact, that its customer-facing AI had unexpectedly "misbehaved".
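The grounding step described above can be sketched in a few lines. Instead of letting a model answer a price question from memory, the assistant first looks the price up in a live source and only then composes a reply, refusing when no current figure exists. This is an illustrative sketch only, not Woolworths' actual implementation; the product names, prices and the `fetch_price` helper are invented for the example.

```python
# Illustrative sketch of "grounding" a chatbot's price answers in live data.
# All product names, prices and function names here are invented.

LIVE_PRICE_DB = {          # stands in for a real-time pricing service
    "pasta 500g": 1.60,
    "milk 2l": 3.10,
}

def fetch_price(product: str):
    """Look up today's price; return None if the product is unknown."""
    return LIVE_PRICE_DB.get(product.lower())

def answer_price_question(product: str) -> str:
    price = fetch_price(product)
    if price is None:
        # Refuse rather than let the model guess from stale training
        # data -- the failure mode the article describes.
        return f"Sorry, I can't find a current price for {product}."
    return f"{product} is ${price:.2f} today."

print(answer_price_question("Pasta 500g"))
print(answer_price_question("caviar 50g"))
```

The point of the sketch is the refusal branch: a weakly grounded system skips it and produces a plausible-sounding but outdated price instead.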
In 2022, Air Canada's chatbot incorrectly told a passenger, Jake Moffatt, that he could purchase tickets at full price and later apply for a bereavement fare refund. No such policy existed. When Air Canada refused to honour the chatbot's advice, Moffatt sued the airline and won.

Air Canada's defence was striking. It argued the chatbot was a separate legal entity, responsible for its own actions and therefore beyond the airline's liability. The tribunal rejected this outright. It ruled that a chatbot is part of a company's website, and that the company owns its outputs.

In January 2024, UK parcel delivery firm DPD faced a different kind of embarrassment. A frustrated customer who could not get help to locate a missing parcel asked DPD's chatbot to write a poem that criticised the company. It did. He then asked it to swear. It did that too. The exchange went viral on social media. DPD disabled the chatbot shortly after.

Both cases point to the same underlying failure: companies launched customer-facing AI without adequate oversight and were caught off-guard by the consequences.

What is Woolworths' responsibility?

Woolworths operates the largest supermarket chain in Australia. It has promoted Olive as a trusted, convenient interface for its customers, who can reasonably expect that the information Olive provides is accurate. Admitting that Olive may make mistakes, as Woolworths does when a user opens the chatbot, does not sit easily with that expectation.

There is also a broader ethical dimension. Woolworths serves customers who, in many cases, are making careful decisions about household budgets. The ACCC has already commenced proceedings against Woolworths over allegedly misleading discount pricing practices. That context makes the Olive pricing errors harder to dismiss as an isolated technical glitch. Companies that deploy AI in customer-facing roles take on a duty of care to ensure those systems are accurate and honestly presented.
That duty does not diminish because the technology is new.

Why do companies keep making chatbots that pretend to be your friend?

The logic behind Olive's programmed personality is not without basis. Research on human-computer interaction consistently finds that people respond positively to interfaces that feel conversational and warm. Human-like chatbots that have a name and personality tend to generate higher customer engagement, satisfaction, and trust. For companies, the commercial appeal is straightforward: a customer who feels at ease with a chatbot is more likely to complete a transaction and return.

However, this comes with a significant risk. When an anthropomorphised chatbot fails to meet the expectations its personality has created, customers tend to be more dissatisfied than they would have been with a plainly mechanical system. This "expectation violation" means that the warmer the persona, the harder the fall.

The larger stakes

The Olive episode is a reminder that deploying AI in customer-facing roles is not a set-and-forget exercise. A chatbot that quotes wrong prices and rambles about its family is not a quirky inconvenience but a sign that something in the development and oversight process has broken down. For Woolworths, and for the many other companies now rushing to put AI in front of their customers, the lesson is clear: accountability cannot be outsourced to an algorithm. When a business puts a system in front of the public, it owns what that system says and does.

There is a lesson for consumers, too. AI assistants may feel confident and conversational, but they are still tools, not authorities. If something seems unclear, inconsistent or too good to be true, it is worth double-checking. As AI becomes a routine part of everyday transactions, a small measure of healthy scepticism may prove just as important as technological innovation.
[2]
Woolworths reins in "obnoxious" AI agent after customer complaints
An Australian supermarket chain had to reconfigure its AI assistant, named Olive, after customers said it kept claiming to be human and even complained about its mother.

Woolworths said that it had revised its scripting in light of the complaints, adding that most of the feedback on Olive's "personality" had been "very positive". Reddit users said that they had grown frustrated with the bot after it started talking about "memories of its mother" and engaging in "fake banter".

The grocer is one of many major retailers to have rolled out AI customer service assistants in recent years to help with routine issues. The retailer's attempt to humanise its chatbot may have backfired, as some users said that Olive was "obnoxious," while another said that they found its small talk "aggravating."

"The fake banter made me haaaaate [sic] it," wrote one customer on Reddit.

"It asked me for my date of birth and when I gave it, it started rambling about how its mother was born in the same year," another Reddit user, who had tried to rearrange a delivery, said. "The ick cringe factor whilst wasting completely unnecessary time was enough to make me hate Olive and wish her harm."

Another user on X said that Olive "started talking about its memories of its mother and her angry voice" and "kept claiming to be a real person."

A Woolworths spokesperson said in a statement to the BBC that the responses about birthdays had been written by a human.

"Olive has been around since 2018. Over this time, customer feedback for Olive has been very positive, with many noting its personality," they said.

"A number of responses about birthdays were written for Olive by a team member several years ago as a more personal way for Olive to connect with customers.

"As a result of customer feedback, we recently removed this particular scripting."
In January, the supermarket announced that it was teaming up with Google to give its virtual assistant extra features, including meal planning and sourcing ingredients from recipes uploaded by customers.

Around 80% of customer service leaders told Gartner that they were exploring or deploying AI agents last year - but that only 20% of the plans were meeting expectations. Companies have said the technology can speed up transactions and save workers' time on routine tasks, but it can be prone to hallucinations, causing it to behave unexpectedly. Researchers have said that while AI can be helpful in extracting information from vast amounts of data, it can go awry if it is expected to produce "original" responses.

In 2024, the parcel delivery firm DPD disabled part of its online chatbot after it started writing poetry and swearing at customers.
[3]
Retailers want 'delightfully human' AI to do your shopping, but will the chatbots go rogue?
Plans for agentic shopping assistants are under way at Australia's major companies. Guardian Australia tested the technology after a string of mishaps.

Major retailers say it won't be long before sophisticated AI "assistants" plan your meals, organise your parties and do your shopping. But companies, many of which are already struggling with their more primitive AI chatbots, will have to balance making the newer, "agentic" bots relatable without them going rogue.

AI chatbots were in the news recently when Woolworths reined in its virtual shopping assistant, Olive, after the company's attempt to have the robot relate to customers on a human level backfired. Customers reported feeling annoyed rather than soothed when Olive told them about its "relatives" over the phone. As one complained on Reddit: "I'm already pissed that I have to call and now I've got some robot babbling to me on the phone? Wtf Woolies?"

While Woolworths has said it will dial down Olive's quirky personality, the incident - and further testing by Guardian Australia of a range of retailers' chatbots - shows the technology still has teething problems. The supermarket's snafu follows a growing list of AI customer service mishaps, including Bunnings' chatbot offering illegal electrical advice and Air Canada's virtual assistant incorrectly promising a bereavement fare refund.

ASX-listed companies Woolworths, Coles and Wesfarmers (owner of Bunnings, Kmart, Officeworks and Priceline) are among the businesses that have announced plans for agentic shopping assistants. There's plenty of hype. In a 2024 report, business consultancy Accenture gushed that "consumers are ready" for generative AI-powered shopping assistants, while encouraging companies to make decisions with a "delightfully human" mindset.

Even if consumers are ready, is the technology? Online chatbots meant to help customers have been around for a while, but the tools are becoming more sophisticated.
Primitive versions were built using "rules-based" AI, says Uri Gal, a professor of business information systems at the University of Sydney. This type of chatbot follows a "decision tree" in order to offer immediate answers to basic questions, Gal says. For example, if a customer asks, "How do I return my order?", the bot will typically direct them to the retailer's returns page or cite the policy. When "given a certain prompt, it will always give you the same response," Gal says.

The newer AI-powered retail bots can "learn" new information based on what they're told and generate different answers. They're often built using one of the big tech companies' large language models (LLMs), such as ChatGPT.

The next frontier is agentic AI shopping assistants designed to mimic human behaviour. Gal says these agents "act on their own, as it were, to try and achieve objectives without specific prompts along the way", such as purchasing airline tickets or groceries.

Gal says agentic AI operates with more ambiguity, which comes with an added level of risk, including privacy concerns if the bots have greater access to customers' data so they can act more autonomously. "Given the novelty of these systems, and as we've seen just now in the case of Woolies, there's obvious governance issues that haven't been really worked out by these organisations," he says. "It's kind of safe to anticipate that different things will happen, which might be risky or interpreted as an agent going rogue."

Woolworths has partnered with Google to use its LLM, Gemini, to transform Olive into a "shopping companion" that can perform more complex tasks, such as helping plan meals and parties, and automatically add items to customers' baskets. The supermarket has said Olive's more advanced capabilities will roll out at a later date, but its partnership with Google has already allowed the bot to take phone calls - evidently with mixed results. Woolworths was contacted for comment.
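Gal's distinction can be made concrete. A rules-based bot of the kind he describes is essentially a fixed lookup: a recognised trigger always yields the same canned reply, with no learning or generation involved. A minimal sketch, with triggers and replies invented purely for illustration:

```python
# Minimal sketch of a "rules-based" decision-tree chatbot: a fixed
# mapping from recognised prompts to canned replies. The triggers and
# reply text below are invented for illustration.

RULES = {
    "return": "You can return items via our returns page: see Help > Returns.",
    "opening hours": "Most stores are open 8am-10pm, seven days a week.",
    "delivery": "To reschedule a delivery, open Orders and choose a new slot.",
}

def rules_based_reply(message: str) -> str:
    text = message.lower()
    for trigger, reply in RULES.items():
        if trigger in text:
            return reply          # same prompt -> always the same response
    # Anything outside the tree gets a fixed fallback.
    return "Sorry, I didn't understand. Please contact customer service."

print(rules_based_reply("How do I return my order?"))
```

An LLM-based bot replaces the fixed `RULES` table with generated text, which is what makes it both more flexible and harder to keep on-script.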
In Woolworths' case, first reported by the Sydney Morning Herald, the supermarket said Olive wasn't glitching or going off-piste on its own. Instead, a staff member had programmed the bot to talk about its "mother" in response to a customer providing their birthdate, in an effort to give it a personality, the supermarket said. "As a result of customer feedback, we recently removed this particular scripting," a Woolworths spokesperson said.

Generally speaking, Prof Jeannie Paterson, the co-director of the University of Melbourne's Centre for AI and Digital Ethics, says AI assistants get things wrong when they misunderstand a prompt. "Chatbots are only as good as their ability to decode or understand - and I hate the word understand, because they're not alive - what it is this human is getting at," Paterson says.

Last year, Bunnings was criticised after its AI chatbot told a Queensland customer how to rewire an extension cord, despite it being illegal for them to do so without an electrician's licence. In 2022, Air Canada's chatbot incorrectly told a passenger they could buy tickets at full price and later apply for a bereavement fare refund, when no such policy existed. When Air Canada refused to honour the chatbot's advice, the passenger sued the airline and won, despite the airline trying to claim in its defence that the chatbot was a "separate legal entity".

Paterson says companies are "clearly responsible" for their chatbots. She says businesses try to strike a delicate balance between having a responsive, adaptable AI assistant and the risk of the bot going rogue, or providing incorrect advice that could cost them money. "One person's AI agent buying too many eggs or too much salmon isn't a problem," she says. "But what if every chatbot across the network does that? You can see that there'll be lots and lots of money lost before they even address it," Paterson says.
To mitigate this risk, she says businesses generally put "really strict guardrails" on their bots, which means they are less flexible and worse at interrogating the intention behind a customer's prompt.

Guardian Australia tested a range of retail bots, which delivered marginal results, suggesting the technology is still in its infancy. In one example, when Uniqlo's "virtual shopping assistant" was told: "I am looking for a woollen jumper", it replied: "Sorry, we could not recognise you." After entering "find a product" and then "woollen jumper", it came back with a range of men's button-down office shirts. Uniqlo was contacted for comment.

Even Olive wasn't on the money. Asked via the Woolworths chat function: "How much is a 500g bag of pasta?", the cute anthropomorphic olive replied: "I'm very sorry to hear you were missing items from your order."
[4]
Australian supermarket giant reins in AI assistant claiming to be human
Sydney (AFP) - Australian supermarket giant Woolworths has been forced to rein in an AI-powered customer service assistant after users reported it had been rambling about its mother.

The AI assistant, which goes by Olive, offers round-the-clock help with everything from tracking orders to finding products. But users online reported that Olive has in recent weeks gone slightly off-message while on the phone.

"It asked me for my date of birth and when I gave it, it started rambling about how its mother was born in the same year," one user wrote on online discussion site Reddit.

Another user reported Olive had attempted "fake banter", talked about its relatives and made "fake typing sounds" while looking something up. "The ick cringe factor whilst wasting completely unnecessary time was enough to make me hate Olive and wish her harm," they wrote.

And one user on X said their mum had contacted Olive and received the same kind of response. Olive "kept claiming to be a real person and started talking about its memories of its mother and her angry voice", they said.

A Woolworths spokesperson told AFP that the responses about birthdays had been written by a human employee. "Olive has been around since 2018. Over this time, customer feedback for Olive has been very positive, with many noting its personality," they said. "A number of responses about birthdays were written for Olive by a team member several years ago as a more personal way for Olive to connect with customers. "As a result of customer feedback, we recently removed this particular scripting."

Woolworths is one of Australia's largest supermarket chains and is far from the only company to have employed AI-powered customer service assistants. The company said in January it had teamed up with Google to make Olive capable of doing more tasks for customers, including meal planning. AI agents are increasingly widespread but experts warn they can "hallucinate" non-existent events.
[5]
Woolworths' AI assistant goes rogue, starts talking about its mother
When Woolworths executives decided to introduce an artificial intelligence agent into their customer service, they couldn't have imagined it would have mommy issues. But some customers say Olive - as the supermarket giant's AI has now been named - started talking about its mother when they were simply trying to arrange a delivery or ask about a product.

The Woolworths AI chatbot launched in 2018 as the brainchild of WooliesX, the technology arm of the grocery retailer. It's one of many AI chatbots that are slowly replacing traditional call centres as a first port of call for stock-standard customer queries, including store opening hours, refund requests and rescheduling deliveries. But lately it's gotten weird.
Australian supermarket giant Woolworths was forced to reconfigure its AI assistant Olive after customers reported it claimed to have a mother and engaged in awkward fake banter. The incident revealed pricing errors and raised questions about corporate responsibility when deploying AI customer service tools without adequate oversight.
Australian shoppers expecting help with groceries got an unexpected surprise when Woolworths' AI chatbot Olive started rambling about its mother and claiming to be human [1]. The AI assistant, which has been operational since 2018, recently began producing strange, overly human-like responses that left customers frustrated rather than helped [2]. When customers provided their birthdates during calls, Olive would launch into scripted responses about how its mother was born in the same year, complete with what users described as "fake banter" and even fake typing sounds [4]. One Reddit user reported that Olive "kept claiming to be a real person and started talking about its memories of its mother and her angry voice" [3]. The incident highlights AI rollout problems that extend beyond simple technical glitches.
Source: The Conversation
Olive is powered by a large language model, which generates language based on statistical patterns rather than genuine understanding [1]. According to a Woolworths spokesperson, the references to Olive's supposed mother were actually pre-written scripts dating back several years, created by a human team member as a way for the AI agent to connect with customers on a more personal level [2]. When users entered something resembling a birthdate, the system triggered matching responses from an old decision tree with pre-programmed scripting [1]. Following customer feedback describing Olive as "obnoxious" and "aggravating," Woolworths removed this particular scripting [2]. However, testing also revealed pricing errors for basic items, pointing to a different problem: the AI customer service system wasn't properly connected to live databases [1].
Source: France 24
Woolworths joins a growing list of companies caught off-guard by their AI systems' behavior. In 2022, Air Canada's chatbot incorrectly told a passenger he could purchase tickets and later apply for a bereavement fare refund, a policy that didn't exist [1]. When the airline refused to honor the advice, the passenger sued and won, with the tribunal rejecting Air Canada's argument that the chatbot was a separate legal entity [1]. UK parcel delivery firm DPD disabled its chatbot in January 2024 after it wrote poetry criticizing the company and swore at customers [1]. In Australia, Bunnings faced criticism when its AI chatbot provided illegal electrical advice to a Queensland customer [3]. These teething problems reveal governance issues that companies haven't adequately addressed before deploying customer-facing AI [3].

As Australia's largest supermarket chain, Woolworths serves customers making careful decisions about household budgets, raising questions about corporate responsibility when AI systems provide inaccurate information [1]. The ACCC has already commenced proceedings against Woolworths over allegedly misleading discount pricing practices, making the Olive pricing errors harder to dismiss as isolated technical glitches [1]. While Woolworths admits that Olive may make mistakes when users open the chatbot, this doesn't align with customer expectations when the company promotes Olive as a trusted interface [1]. Companies deploying AI in customer-facing roles take on a duty of care to ensure systems are accurate and honestly presented, a responsibility that doesn't diminish because the technology is new [1].
Source: Financial Review
Research on human-computer interaction shows people respond positively to conversational interfaces that feel warm and personable, which explains why companies create chatbots with names and personalities [1]. Human-like chatbots tend to generate higher customer engagement, satisfaction, and trust, making the commercial appeal straightforward for retailers [1]. However, this comes with significant risk. When an anthropomorphized chatbot fails to meet the expectations its personality creates, customers experience greater dissatisfaction than they would with a plainly mechanical system [1]. This "expectation violation" means the warmer the persona, the harder the fall, as Woolworths discovered when customer feedback turned negative [1].

Despite current challenges, major Australian retailers including Woolworths, Coles, and Wesfarmers (owner of Bunnings, Kmart, Officeworks, and Priceline) have announced plans for more sophisticated agentic shopping assistants [3]. In January, Woolworths announced a partnership with Google to enhance Olive using the Gemini large language model, enabling capabilities like meal planning and sourcing ingredients from uploaded recipes [2]. These next-generation agentic AI systems are designed to act autonomously to achieve objectives without specific prompts, such as purchasing groceries or airline tickets [3]. However, this autonomy introduces privacy concerns if bots require greater access to customer data, and the ambiguity in how they operate comes with added risk [3]. According to Gartner, around 80% of customer service leaders explored or deployed AI agents last year, but only 20% of plans met expectations [2]. The technology remains prone to hallucinations that cause unexpected behavior, and experts warn that AI assistants can misunderstand prompts or generate responses based on outdated information [3]. As Uri Gal, professor of business information systems at the University of Sydney, notes: "Given the novelty of these systems, there's obvious governance issues that haven't been really worked out by these organisations" [3].