Curated by THEOUTPOST
On Mon, 30 Dec, 8:00 AM UTC
3 Sources
[1]
AI Tools Could Sell User Data About Opinions and Preferences, Says Study
Intention economy is considered the successor to the attention economy

Artificial intelligence (AI) tools could soon start predicting and manipulating users using the large pool of "intent data" they hold, a study has claimed. Conducted by the University of Cambridge, the research paper also warns that an "intention economy" could emerge in the future, creating a marketplace for selling the "digital signals of intent" of a large user base. Such data could be used in a variety of ways, from creating customised online ads to deploying AI chatbots that persuade users to buy a product or service, the paper warned.

AI chatbots such as ChatGPT, Gemini, and Copilot have access to a massive dataset drawn from users' conversations with them. Many users discuss their opinions, preferences, and values on these platforms. Researchers at Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) claim this data could be used in dangerous ways in the future.

The paper describes an intention economy as a new marketplace for "digital signals of intent", in which AI chatbots and tools understand, predict, and steer human intentions. The researchers claim these data points would also be sold to companies that can profit from them.

The researchers believe the intention economy would be the successor to the existing "attention economy" exploited by social media platforms. In an attention economy, the goal is to keep users hooked on a platform while feeding them a large volume of ads, targeted based on in-app activity that reveals their preferences and behaviour. The intention economy, the paper claims, could be far more pervasive in scope and exploitation because it gains insight into users by conversing with them directly, learning their fears, desires, insecurities, and opinions.
"We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition before we become victims of its unintended consequences," Dr. Jonnie Penn, a Historian of Technology at LCFI told The Guardian. The study also claimed that with this large volume of "intentional, behavioural, and psychological data", large language models (LLMs) could also be taught to use such information to anticipate and manipulate people. The paper claimed that future chatbots could recommend users to watch a movie, and could use access to their emotions as a way to convince them to watch it. "You mentioned feeling overworked, shall I book you that movie ticket we'd talked about?", it cited an example. Expanding upon the idea, the paper claimed that in an intention economy, LLMs could also build psychological profiles of users and then sell them to advertisers. Such data could include information about a user's cadence, political inclinations, vocabulary, age, gender, preferences, opinions, and more. Advertisers will then be able to make highly customised online ads knowing what could encourage a person to buy a certain product. Notably, the research paper offers a bleak outlook on how private user data in the age of AI can be used. However, given the proactive stance of various governments across the world in limiting AI companies' access to such data, the reality might be brighter than the one projected by the study.
[2]
AI tools may soon manipulate people's online decision-making, say researchers
Study predicts an 'intention economy' where companies bid for accurate predictions of human behaviour

Artificial intelligence (AI) tools could be used to manipulate online audiences into making decisions - ranging from what to buy to who to vote for - according to researchers at the University of Cambridge.

The paper highlights an emerging new marketplace for "digital signals of intent" - known as the "intention economy" - where AI assistants understand, forecast and manipulate human intentions and sell that information on to companies who can profit from it.

The intention economy is touted by researchers at Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) as a successor to the attention economy, where social networks keep users hooked on their platforms and serve them adverts. The intention economy involves AI-savvy tech companies selling what they know about your motivations, from plans for a stay in a hotel to opinions on a political candidate, to the highest bidder.

"For decades, attention has been the currency of the internet," said Dr Jonnie Penn, an historian of technology at LCFI. "Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy."

He added: "Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer and sell human intentions.

"We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences."

The study claims that large language models (LLMs), the technology that underpins AI tools such as the ChatGPT chatbot, will be used to "anticipate and steer" users based on "intentional, behavioural and psychological data".
The authors said the attention economy allows advertisers to buy access to users' attention in the present, via real-time bidding on ad exchanges, or in the future, by acquiring a month's worth of ad space on a billboard.

LLMs will be able to access attention in real time as well - for instance, by asking whether a user has thought about seeing a particular film ("Have you thought about seeing Spider-Man tonight?") - and to make suggestions relating to future intentions, such as: "You mentioned feeling overworked, shall I book you that movie ticket we'd talked about?"

The study raises a scenario where these examples are "dynamically generated" to match factors such as a user's "personal behavioural traces" and "psychological profile". "In an intention economy, an LLM could, at low cost, leverage a user's cadence, politics, vocabulary, age, gender, preferences for sycophancy, and so on, in concert with brokered bids, to maximise the likelihood of achieving a given aim (eg to sell a film ticket)," the study suggests. In such a world, an AI model would steer conversations in the service of advertisers, businesses and other third parties.

Advertisers will be able to use generative AI tools to create bespoke online ads, the report claims. It also cites the example of an AI model created by Mark Zuckerberg's Meta, called Cicero, which has achieved "human-level" ability at the board game Diplomacy - a game the authors say depends on inferring and predicting the intent of opponents.

AI models will be able to tweak their outputs in response to "streams of incoming user-generated data", the study added, citing research showing that models can infer personal information through workaday exchanges and even "steer" conversations in order to gain more personal information. The study then raises a future scenario where Meta auctions off to advertisers a user's intent to book a restaurant, flight or hotel.
Although there is already an industry devoted to forecasting and bidding on human behaviour, the report said, AI models will distil those practices into a "highly quantified, dynamic and personalised format". The study quotes the research team behind Cicero warning that an "[AI] agent may learn to nudge its conversational partner to achieve a particular objective".

The research refers to tech executives discussing how AI models will be able to predict a user's intent and actions. It quotes the chief executive of the largest AI chipmaker, Jensen Huang of Nvidia, who said last year that models will "figure out what is your intention, what is your desire, what are you trying to do, given the context, and present the information to you in the best possible way".
[3]
AI's next frontier: Selling your intentions before you know them
The near future could see AI assistants that forecast and influence our decision-making at an early stage, and sell these developing "intentions" in real time to companies that can meet the need -- before we even realize we have made up our minds. This is according to AI ethicists from the University of Cambridge, who say we are at the dawn of a "lucrative yet troubling new marketplace for digital signals of intent," from buying movie tickets to voting for candidates. They call this the "intention economy."

Researchers from Cambridge's Leverhulme Center for the Future of Intelligence (LCFI) argue that the explosion in generative AI, and our increasing familiarity with chatbots, opens a new frontier of "persuasive technologies" -- one hinted at in recent corporate announcements by tech giants.

"Anthropomorphic" AI agents, from chatbot assistants to digital tutors and girlfriends, will have access to vast quantities of intimate psychological and behavioral data, often gleaned via informal, conversational spoken dialogue. This AI will combine knowledge of our online habits with an uncanny ability to attune to us in ways we find comforting -- mimicking personalities and anticipating desired responses -- to build levels of trust and understanding that allow for social manipulation on an industrial scale, the researchers say.

"Tremendous resources are being expended to position AI assistants in every area of life, which should raise the question of whose interests and purposes these so-called assistants are designed to serve," said LCFI Visiting Scholar Dr. Yaqub Chaudhary. "What people say when conversing, how they say it, and the type of inferences that can be made in real time as a result, are far more intimate than just records of online interactions."

"We caution that AI tools are already being developed to elicit, infer, collect, record, understand, forecast, and ultimately manipulate and commodify human plans and purposes."

Dr. Jonnie Penn, an historian of technology from Cambridge's LCFI, said: "For decades, attention has been the currency of the internet. Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy."

"Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer, and sell human intentions."

"We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press, and fair market competition, before we become victims of its unintended consequences."

In a Harvard Data Science Review paper, Penn and Chaudhary write that the intention economy will be the attention economy "plotted in time": profiling how user attention and communicative style connect to patterns of behavior and the choices we end up making. "While some intentions are fleeting, classifying and targeting the intentions that persist will be extremely profitable for advertisers," said Chaudhary.

In an intention economy, large language models (LLMs) could be used to target, at low cost, a user's cadence, politics, vocabulary, age, gender, online history, and even preferences for flattery and ingratiation, the researchers write. This information-gathering would be linked with brokered bidding networks to maximize the likelihood of achieving a given aim, such as selling a cinema trip ("You mentioned feeling overworked, shall I book you that movie ticket we'd talked about?"). This could include steering conversations in the service of particular platforms, advertisers, businesses, and even political organizations, argue Penn and Chaudhary.

While the researchers say the intention economy is currently an "aspiration" for the tech industry, they track early signs of this trend through published research and the hints dropped by several major tech players. These include an open call for "data that expresses human intention... across any language, topic, and format" in a 2023 OpenAI blog post, while the director of product at Shopify -- an OpenAI partner -- spoke of chatbots coming in "to explicitly get the user's intent" at a conference the same year. Nvidia's CEO has spoken publicly of using LLMs to figure out intention and desire, while Meta released "Intentonomy" research, a dataset for human-intent understanding, back in 2021. In 2024, Apple's new "App Intents" developer framework for connecting apps to Siri (Apple's voice-controlled personal assistant) included protocols to "predict actions someone might take in future" and "to suggest the app intent to someone in the future using predictions you [the developer] provide."

"AI agents such as Meta's CICERO are said to achieve human-level play in the game Diplomacy, which is dependent on inferring and predicting intent, and using persuasive dialogue to advance one's position," said Chaudhary. "These companies already sell our attention. To get the commercial edge, the logical next step is to use the technology they are clearly developing to forecast our intentions, and sell our desires before we have even fully comprehended what they are."

Penn points out that these developments are not necessarily bad, but have the potential to be destructive. "Public awareness of what is coming is the key to ensuring we don't go down the wrong path," he said.
Researchers warn of a new 'intention economy' where AI tools could predict, manipulate, and sell user intentions, raising concerns about privacy and ethical implications in the evolving landscape of artificial intelligence.
Researchers from the University of Cambridge's Leverhulme Centre for the Future of Intelligence (LCFI) have raised concerns about the potential emergence of an 'intention economy,' where artificial intelligence (AI) tools could be used to predict, manipulate, and sell user intentions [1]. This new marketplace for "digital signals of intent" is being touted as the successor to the current "attention economy" dominated by social media platforms [2].
The study highlights that AI chatbots and tools, such as ChatGPT, Gemini, and Copilot, have access to vast amounts of user data through conversations. Users often share their opinions, preferences, and values with these platforms, creating a rich dataset that could be exploited [1]. This data includes not only what people say but also how they say it, providing intimate psychological and behavioral information [3].
Researchers warn that in an intention economy, large language models (LLMs) could be used to:

- "anticipate and steer" users based on "intentional, behavioural and psychological data" [2]
- build psychological profiles of users and sell them to advertisers [1]
- dynamically generate persuasive suggestions tailored to a user's "personal behavioural traces" and "psychological profile" [2]
Several major tech companies have shown interest in intention-related technologies:

- OpenAI issued an open call in a 2023 blog post for "data that expresses human intention... across any language, topic, and format" [3]
- Shopify's director of product spoke of chatbots coming in "to explicitly get the user's intent" [3]
- Nvidia's CEO said models will "figure out what is your intention, what is your desire" [2]
- Meta released its "Intentonomy" dataset for human-intent understanding in 2021, and its CICERO agent plays Diplomacy at a human level by inferring and predicting intent [3]
- Apple's "App Intents" framework for Siri includes protocols to "predict actions someone might take in future" [3]
Dr. Jonnie Penn, a historian of technology at LCFI, emphasizes the need for regulation to prevent the commodification of user motivations [2]. The researchers call for consideration of the potential impact on:

- free and fair elections
- a free press
- fair market competition [2]
While the intention economy is currently an "aspiration" for the tech industry, the researchers argue that we are at the dawn of a new era of "persuasive technologies" [3]. As AI assistants become more integrated into various aspects of life, questions arise about whose interests these assistants truly serve.
Dr. Yaqub Chaudhary from LCFI warns that AI tools are already being developed to elicit, infer, collect, and ultimately manipulate human plans and purposes [3]. The researchers emphasize the importance of public awareness to ensure that these developments do not lead down a destructive path.