3 Sources
[1]
Do Customers Perceive AI-Written Communications as Less Authentic? | Newswise
OLD WESTBURY, N.Y. -- From Nike and Google to Coca-Cola and McDonald's, major brands are incorporating artificial intelligence (AI) into their advertising campaigns. But how do consumers feel about robots generating emotionally charged marketing content? That's the question a New York Institute of Technology professor raises in a new Journal of Business Research study.

Whereas predictive AI allows marketers to forecast consumer behavior, generative AI enables them to produce novel content, including text, images, videos, or audio. For example, a recent AI-generated Toys "R" Us commercial featured video content of the company's founder as a boy alongside its brand mascot Geoffrey the Giraffe. While many brands have trumpeted their AI-driven campaigns as a mark of innovation, others may fail to disclose AI use, leading to ethical concerns and calls for government regulation. However, even transparent brands receive backlash, as Google experienced when viewers were offended by its "Dear Sydney" ad, in which a father uses AI to help his daughter draft a fan letter to her favorite Olympic athlete.

"AI is a new territory for brand marketers, but what we do know is that consumers highly value authentic interactions with brands," says the study's lead author Colleen Kirk, D.P.S., professor of marketing and management at New York Institute of Technology. "Although more companies are now using AI-generated content to strengthen brand engagement and attachment, no study has explored how consumers view the authenticity of textual content that was created by a robot."

Kirk and her study co-author, Julian Givi, Ph.D., a marketing researcher and faculty member at West Virginia University, completed various experiments to see how consumers react when emotional messages are written by AI. They hypothesized that consumers would view emotionally charged AI-generated content less favorably, impacting their perception of the brand and desire to interact with it.

In one scenario, participants imagined receiving a heartfelt message from a fitness salesperson who helped them buy a new set of weights. The message stated that he was inspired by the consumer's purchase, with some participants believing that it was AI-generated and others believing that the salesman drafted it himself (control group). While the members of the control group responded favorably, those in the AI group felt that the note violated their moral principles (moral disgust). As a result, they were also unlikely to recommend the store to others and more likely to switch brands when making future purchases. Many even gave the store poor ratings on a simulated reviews site. Other scenarios also revealed key findings in support of the researchers' hypothesis.

In short, the findings suggest that companies must carefully consider whether and how to disclose AI-authored communications, always prioritizing authenticity in their interactions with consumers. As governments seek to increasingly regulate AI disclosure, making consumers more aware of how brands craft their messages, Kirk says marketers will want to pay close attention to the study's findings.

"Consumers are becoming ever more skeptical of the human origin of marketing communications. Our research provides much-needed insight into how using AI to generate emotional content could negatively impact brands' perceptions and, in turn, the consumer relationships that support their bottom lines," she says. "While AI tools offer marketers a new frontier, these professionals should bear in mind a time-tested principle: authenticity is always best."

New York Institute of Technology's six schools and colleges offer undergraduate, graduate, doctoral, and other professional degree programs in in-demand disciplines including computer science, data science, and cybersecurity; biology, health professions, and medicine; architecture and design; engineering; IT and digital technologies; management; and energy and sustainability. A nonprofit, independent, private, and nonsectarian institute of higher education founded in 1955, it welcomes nearly 8,000 students worldwide. The university has campuses in New York City and Long Island, New York; Jonesboro, Arkansas; and Vancouver, British Columbia, as well as programs around the world. More than 116,000 alumni are part of an engaged network of physicians, architects, scientists, engineers, business leaders, digital artists, and healthcare professionals. Together, the university's community of doers, makers, healers, and innovators empowers graduates to change the world, solve 21st-century challenges, and reinvent the future. For more information, visit nyit.edu.
[2]
Do Customers Perceive AI-Written Communications as Less Authentic? | Newswise
[3]
Do customers perceive AI-written communications as less authentic?
A new study led by a New York Institute of Technology researcher finds that consumers view AI-generated emotional marketing content as less authentic, potentially harming brand perception and customer relationships.
As artificial intelligence (AI) rapidly works its way into every corner of business, major brands like Nike, Google, Coca-Cola, and McDonald's are incorporating it into their advertising campaigns. A recent study published in the Journal of Business Research, however, raises important questions about how consumers perceive AI-generated emotional content in marketing [1].
Led by Colleen Kirk, D.P.S., professor of marketing and management at New York Institute of Technology, and Julian Givi, Ph.D., from West Virginia University, the research explored how consumers react to emotionally charged messages written by AI [2]. The study hypothesized that consumers would view such content less favorably, impacting their perception of the brand and willingness to engage with it.
The researchers conducted various experiments to test their hypothesis. In one scenario, participants imagined receiving a heartfelt message from a fitness salesperson after purchasing weights. Those who believed the message was AI-generated reported feelings of moral disgust, were less likely to recommend the store, and were more inclined to switch brands for future purchases [3].
The study's findings suggest that using AI to generate emotional content could negatively impact brand perceptions and consumer relationships. Participants who believed the content was AI-generated often felt it violated their moral principles, leading to decreased brand loyalty and negative reviews.
While some brands proudly showcase their AI-driven campaigns as innovative, others may not disclose AI use, raising ethical concerns and calls for government regulation. However, even transparent brands can face backlash, as exemplified by Google's "Dear Sydney" ad, which received criticism for depicting a father using AI to help his daughter write a fan letter [1].
As AI technology advances, marketers must carefully consider how to implement and disclose its use in customer communications. The study emphasizes the importance of prioritizing authenticity in brand interactions, especially as consumers become increasingly skeptical of the human origin of marketing messages.
While AI tools offer exciting possibilities for marketers, the research underscores a crucial principle: authenticity remains paramount in building and maintaining consumer relationships. As Professor Kirk notes, "While AI tools offer marketers a new frontier, these professionals should bear in mind a time-tested principle: authenticity is always best" [2].