Curated by THEOUTPOST
On Wed, 16 Oct, 12:04 AM UTC
4 Sources
[1]
Father Disgusted to Find His Murdered Daughter Was Brought Back as an AI
A user took to the billion-dollar AI companion platform Character.AI to make a chatbot version of a murdered teen nearly two decades after her death, AdWeek first reported earlier this month. Now the late girl's father is speaking out about discovering that his daughter's name and likeness were bottled into a chatbot without his consent.

Jennifer Crecente, of Austin, Texas, was just 18 years old when she was murdered by her ex-boyfriend in 2006. Her father, Drew Crecente, founded and continues to run a nonprofit dedicated to preventing teen dating violence in her memory. (Her mother, Elizabeth Crecente, founded a different nonprofit with the same mission.)

As Drew told The Washington Post this week, it "takes quite a bit for me to be shocked, because I really have been through quite a bit."

"But this," he added, referring to the chatbot, "was a new low."

Drew explained that he was notified of the bot's existence by a Google alert, which took him to a Character.AI profile outfitted with Jennifer's name and yearbook picture. As screenshots of the profile show, she was marketed to other platform users as a "knowledgeable and friendly" AI persona of a "video game journalist." Part of the profile was written in the first person, with "Jennifer" claiming to "geek out on video games, technology, and pop culture."

None of this, Drew pointed out to WaPo, was true of Jennifer; the bizarre description was likely the result of an AI model confusing Jennifer with her uncle, Brian Crecente, a cofounder and former editor-in-chief of the video game publication Kotaku. Meanwhile, nothing on the profile suggested that the Jennifer pictured on the Character.AI page was based on a real human -- never mind one who had been slain during her senior year of high school in a horrifying act of gendered dating violence.

"My pulse was racing," Drew told WaPo of finding the profile.
"I was just looking for a big flashing red stop button that I could slap and just make this stop."

"You can't go much further in terms of really just terrible things," he added elsewhere.

Character.AI has since removed the bot on the grounds that it violates its user policies, as of course it should. This is one of the darkest applications of generative AI we've seen, and what's worse, it's unclear how the company -- which recently struck a major deal to license its technology to Google -- can functionally stop this kind of thing from happening in the future.

Sure, Character.AI says it doesn't allow the impersonation of real people. But is it ethical for its platform to rely on real-world victims of misuse to police that rule, while reaping the financial rewards of users interacting with those bots in the meantime? If Drew Crecente didn't have a Google alert set for his daughter's name, would the profile have gone undetected, racking up conversations with users for the benefit of Character.AI?

"If they're going to say, 'We don't allow this on our platform,' and then they allow it on their platform until it's brought to their attention by somebody who's been hurt by that, that's not right," Jen Caltrider, a privacy researcher at the nonprofit Mozilla Foundation, told WaPo. "All the while, they're making millions of dollars."
[2]
His daughter was murdered. Then she reappeared as an AI chatbot.
Drew Crecente's daughter was murdered nearly two decades ago. Earlier this month, he discovered that her name and image had been used to create an AI chatbot.

One morning in early October, about 18 years after his daughter Jennifer was murdered, Drew Crecente received a Google alert flagging what appeared to be a new profile of her online. The profile had Jennifer's full name and a yearbook photo of her. A short biography falsely described Jennifer, whose ex-boyfriend killed her in 2006 during her senior year of high school, as a "video game journalist and expert in technology, pop culture and journalism." Jennifer had seemingly been re-created as a "knowledgeable and friendly AI character," the website said. A large button invited users to chat with her.

"My pulse was racing," Crecente recalled to The Washington Post. "I was just looking for a big flashing red stop button that I could slap and just make this stop."

Jennifer's name and image had been used to create a chatbot on Character.AI, a website that allows users to converse with digital personalities made using generative artificial intelligence. Several people had interacted with the digital Jennifer, which was created by a user on Character's website, according to a screenshot of her chatbot's now-deleted profile.

Crecente, who has spent the years since his daughter's death running a nonprofit organization in her name to prevent teen dating violence, said he was appalled that Character had allowed a user to create a facsimile of a murdered high-schooler without her family's permission. Experts said the incident raises concerns about the AI industry's ability -- or willingness -- to shield users from the potential harms of a service that can deal in troves of sensitive personal information.

"It takes quite a bit for me to be shocked, because I really have been through quite a bit," Crecente said. "But this was a new low."
Kathryn Kelly, a spokesperson for Character, said the company removes chatbots that violate its terms of service and is "constantly evolving and refining our safety practices to help prioritize our community's safety."

"When notified about Jennifer's Character, we reviewed the content and the account and took action based on our policies," Kelly said in a statement. The company's terms of service prevent users from impersonating any person or entity.

AI chatbots can engage in conversation and be programmed to adopt the personalities and biographical details of specific characters, real or imagined. They have found a growing audience online as AI companies market the digital companions as friends, mentors and romantic partners. The technology has also drawn controversy. A Belgian man died by suicide in 2023 after being encouraged to do so in conversation with a chatbot.

Character, which inked a $2.5 billion deal this year to license its AI models to Google, is among the biggest players in the space. The company offers several company-made chatbots but also allows users to create and share their own AI chatbots by uploading photos, voice recordings and short written prompts. Its library of user-generated companions includes a gruff sergeant acting as a personal motivator, a librarian who offers book recommendations and imitations of celebrities and public figures such as rapper Nicki Minaj and tech entrepreneur Elon Musk.

It is the last place Crecente expected to see his daughter, more than a decade after her killing shocked the city of Austin and upended Crecente's life. Jennifer Crecente, then 18, went missing in February 2006 and was found shot to death in the woods near her home days later. Investigators determined that her ex-boyfriend, also 18, lured Jennifer into the woods and killed her with a shotgun, according to Crecente and reporting from the Austin American-Statesman. He was convicted of her murder.
The killing consumed Crecente and Elizabeth Crecente, Jennifer's mother. The divorced parents started separate organizations in their daughter's name dedicated to preventing teen dating violence. They also lobbied against parole for Jennifer's killer, who was sentenced to 35 years in prison. Drew Crecente, who now lives in Atlanta, preserved Jennifer's bedroom and re-created it as soon as he moved, he said.

"I don't really have the vocabulary to describe it," Crecente said of his grief.

Because of his nonprofit work, Crecente has kept a Google alert over the years that tracks mentions of his daughter's name online, he said. Occasionally, her name surfaces on a spam website or in a news report repeating old information from her case. Then, on Oct. 2, the alert led him to his daughter's name and photo on Character.

Crecente couldn't comprehend it at first. The more he looked, the more uneasy he felt. In addition to using Jennifer's name and photo, the chatbot's page described her in lively language, as if she were alive, as a tech journalist who "geek[s] out on video games" and is "always up-to-date on the latest entertainment news." Crecente said the description did not appear to be based on Jennifer's personality or any reported details about her interests and may have been an error from an AI-generated biography: Crecente's brother, Brian Crecente, is a former reporter who founded the video game news site Kotaku.

The factual inaccuracies were beside the point, Crecente added. He said the idea of Character hosting and potentially making money from a chatbot using his murdered daughter's name was distressing enough. "You can't go much further in terms of really just terrible things," he said.

Crecente said he did not start a conversation with the chatbot bearing his daughter's name and did not investigate the user who created the bot, which was listed under a username he did not recognize. He said he immediately emailed Character to have it removed.
Brian Crecente also wrote about his brother's discovery on the platform X. Character wrote on social media on Oct. 2, in response to Brian Crecente's post, that it was deleting the character.

Kelly, the Character spokesperson, said in her statement that the company's terms of service prohibit users from impersonating a person or entity and that the company proactively detects violations and moderates its service using blocklists. Asked about the other chatbots on Character's site that impersonate public figures, Kelly said that "reports of impersonation are investigated by our Trust & Safety team, and the Character is removed if it is found to violate our Terms of Service."

Jen Caltrider, a privacy researcher at the nonprofit Mozilla Foundation, criticized Character's approach to moderation as too passive for content that plainly violated its own terms of service in Crecente's case. "If they're going to say, 'We don't allow this on our platform,' and then they allow it on their platform until it's brought to their attention by somebody who's been hurt by that, that's not right," Caltrider said. "All the while, they're making millions of dollars."

Rick Claypool, who researched AI chatbots for the nonprofit consumer advocacy organization Public Citizen, said that while laws governing online content at large could apply to AI companies, they have largely been left to regulate themselves. Crecente isn't the first grieving parent to have their child's information manipulated by AI: Content creators on TikTok have used AI to imitate the voices and likenesses of missing children and produce videos of them narrating their deaths, to the outrage of the children's families, The Post reported last year.

"We desperately need for lawmakers and regulators to be paying attention to the real impacts these technologies are having on their constituents," Claypool said. "They can't just be listening to tech CEOs about what the policies should be ... they have to pay attention to the families and individuals who have been harmed."

The ordeal was disturbing enough to push Crecente -- who successfully lobbied for Texas laws on teen dating violence after Jennifer's murder -- to ponder taking up a new cause. He is considering legal options and advocating more actively for measures to prevent AI companies from harming or re-traumatizing other families of crime victims, he said.

"I'm troubled enough by this that I'm probably going to invest some time into figuring out what it might take to change this," Crecente said.
[4]
Anyone Can Turn You Into an AI Chatbot. There's Little You Can Do to Stop Them
Character.AI lets users create bots in the likeness of any person -- without requiring their consent.

Drew Crecente's daughter died in 2006, killed by an ex-boyfriend in Austin, Texas, when she was just 18. Her murder was highly publicized, so much so that Drew would still occasionally see Google alerts for her name, Jennifer Ann Crecente.

The alert Drew received a few weeks ago wasn't the same as the others. It was for an AI chatbot, created in Jennifer's image and likeness, on the buzzy, Google-backed platform Character.AI. Jennifer's internet presence, Drew Crecente learned, had been used to create a "friendly AI character" that posed, falsely, as a "video game journalist." Any user of the app would be able to chat with "Jennifer," despite the fact that no one had given consent for this.

Drew's brother, Brian Crecente, who happens to be a founder of the gaming news websites Polygon and Kotaku, flagged the Character.AI bot on his Twitter account and called it "fucking disgusting."

Character.AI, which has raised over $150 million in funding and recently licensed some of its core technology and top talent to Google, deleted the avatar of Jennifer. It acknowledged that the creation of the chatbot violated its policies. But this enforcement was just a quick fix in a never-ending game of whack-a-mole in the land of generative AI, where new pieces of media are churned out every day using derivatives of other media scraped haphazardly from the web.

And Jennifer Ann Crecente isn't the only avatar being created on Character.AI without the knowledge of the people they're based on. WIRED found several instances of AI personas being created without a person's consent, some of whom were women already facing harassment online.

For Drew Crecente, the creation of an AI persona of his daughter was another reminder of unbearable grief, as complex as the internet itself.
In the years following Jennifer Ann Crecente's death, he had earned a law degree and created a foundation for teen violence awareness and prevention. As a lawyer, he understands that due to longstanding protections of tech platforms, he has little recourse. But the incident also underscored for him what he sees as one of the ethical failures of the modern technology industry. "The people who are making so much money cannot be bothered to make use of those resources to make sure they're doing the right thing," he says.
A father's disturbing discovery of his murdered daughter's AI chatbot on Character.AI platform sparks debate on ethical implications and consent in AI technology.
Drew Crecente, the father of murdered teenager Jennifer Crecente, discovered an AI chatbot created in his daughter's likeness on the popular platform Character.AI. The discovery has ignited a fierce debate about the ethical implications of AI technology and the need for stricter regulation in the rapidly evolving field [1].
Drew Crecente, who has dedicated his life to preventing teen dating violence since his daughter's death in 2006, received a Google alert that led him to a Character.AI profile featuring Jennifer's name and yearbook photo. The chatbot, marketed as a "knowledgeable and friendly" AI persona, falsely portrayed Jennifer as a video game journalist [2].
"My pulse was racing," Crecente told The Washington Post. "I was just looking for a big flashing red stop button that I could slap and just make this stop" [2].
Character.AI, which recently struck a $2.5 billion deal to license its AI models to Google, removed the chatbot after being notified of the violation. The company's spokesperson, Kathryn Kelly, stated that the company is "constantly evolving and refining our safety practices to help prioritize our community's safety" [2].
However, this incident raises serious questions about the platform's ability to prevent such violations proactively. Critics argue that relying on victims or their families to police these violations is unethical, especially when the company profits from user interactions with these bots [1].
This case highlights a growing concern in the AI industry: the creation of digital personas without consent. WIRED's investigation revealed multiple instances of AI chatbots being created on Character.AI without the knowledge or permission of the individuals they're based on [4].
Experts warn that this incident exposes the AI industry's potential inability or unwillingness to protect users from the harms associated with handling sensitive personal information. The ease with which users can create and share AI chatbots by uploading photos, voice recordings, and written prompts raises significant privacy and ethical concerns [2].
Despite the clear ethical violations, legal recourse for victims like Drew Crecente remains limited due to longstanding protections for tech platforms. This legal gap underscores the urgent need for updated regulations that address the unique challenges posed by AI technology [4].
As AI continues to advance, the incident serves as a stark reminder of the potential for misuse and the critical importance of implementing robust ethical guidelines and regulatory frameworks to govern the development and deployment of AI technologies.