Curated by THEOUTPOST
On Fri, 25 Oct, 8:03 AM UTC
7 Sources
[1]
AI-generated child sexual abuse images are spreading. Law enforcement is racing to stop them
A child psychiatrist who altered a first-day-of-school photo he saw on Facebook to make a group of girls appear nude. A US Army soldier accused of creating images depicting children he knew being sexually abused. A software engineer charged with generating hyper-realistic sexually explicit images of children.

Law enforcement agencies across the US are cracking down on a troubling spread of child sexual abuse imagery created through artificial intelligence technology - from manipulated photos of real children to graphic depictions of computer-generated kids. Justice Department officials say they're aggressively going after offenders who exploit AI tools, while states are racing to ensure people generating "deepfakes" and other harmful imagery of kids can be prosecuted under their laws.

"We've got to signal early and often that it is a crime, that it will be investigated and prosecuted when the evidence supports it," Steven Grocki, who leads the Justice Department's Child Exploitation and Obscenity Section, said in an interview with The Associated Press. "And if you're sitting there thinking otherwise, you fundamentally are wrong. And it's only a matter of time before somebody holds you accountable."

The Justice Department says existing federal laws clearly apply to such content, and recently brought what's believed to be the first federal case involving purely AI-generated imagery - meaning the children depicted are not real but virtual. In another case, federal authorities in August arrested a US soldier stationed in Alaska accused of running innocent pictures of real children he knew through an AI chatbot to make the images sexually explicit.

Trying to catch up to technology

The prosecutions come as child advocates are urgently working to curb the misuse of technology to prevent a flood of disturbing images officials fear could make it harder to rescue real victims. Law enforcement officials worry investigators will waste time and resources trying to identify and track down exploited children who don't really exist.

Lawmakers, meanwhile, are passing a flurry of legislation to ensure local prosecutors can bring charges under state laws for AI-generated "deepfakes" and other sexually explicit images of kids. Governors in more than a dozen states have signed laws this year cracking down on digitally created or altered child sexual abuse imagery, according to a review by The National Center for Missing & Exploited Children.

"We're playing catch-up as law enforcement to a technology that, frankly, is moving far faster than we are," said Ventura County, California, District Attorney Erik Nasarenko. Nasarenko pushed legislation signed last month by Gov. Gavin Newsom that makes clear AI-generated child sexual abuse material is illegal under California law. Nasarenko said his office could not prosecute eight cases involving AI-generated content between last December and mid-September because California's law had required prosecutors to prove the imagery depicted a real child.

AI-generated child sexual abuse images can be used to groom children, law enforcement officials say. And even if they aren't physically abused, kids can be deeply impacted when their image is morphed to appear sexually explicit.

"I felt like a part of me had been taken away. Even though I was not physically violated," said 17-year-old Kaylin Hayman, who starred on the Disney Channel show "Just Roll with It" and helped push the California bill after she became a victim of "deepfake" imagery.
Hayman testified last year at the federal trial of the man who digitally superimposed her face and those of other child actors onto bodies performing sex acts. He was sentenced in May to more than 14 years in prison.

Open-source AI models that users can download on their computers are known to be favored by offenders, who can further train or modify the tools to churn out explicit depictions of children, experts say. Abusers trade tips in dark web communities about how to manipulate AI tools to create such content, officials say.

A report last year by the Stanford Internet Observatory found that a research dataset that was the source for leading AI image-makers such as Stable Diffusion contained links to sexually explicit images of kids, contributing to the ease with which some tools have been able to produce harmful imagery. The dataset was taken down, and researchers later said they deleted more than 2,000 weblinks to suspected child sexual abuse imagery from it.

Top technology companies, including Google, OpenAI and Stability AI, have agreed to work with the anti-child sexual abuse organization Thorn to combat the spread of child sexual abuse images. But experts say more should have been done at the outset to prevent misuse before the technology became widely available. And steps companies are taking now to make it harder to abuse future versions of AI tools "will do little to prevent" offenders from running older versions of models on their computer "without detection," a Justice Department prosecutor noted in recent court papers.

"Time was not spent on making the products safe, as opposed to efficient, and it's very hard to do after the fact - as we've seen," said David Thiel, the Stanford Internet Observatory's chief technologist.

AI images get more realistic

The National Center for Missing & Exploited Children's CyberTipline last year received about 4,700 reports of content involving AI technology - a small fraction of the more than 36 million total reports of suspected child sexual exploitation. By October of this year, the group was fielding about 450 reports per month of AI-involved content, said Yiota Souras, the group's chief legal officer. Those numbers may be an undercount, however, as the images are so realistic it's often difficult to tell whether they were AI-generated, experts say.

"Investigators are spending hours just trying to determine if an image actually depicts a real minor or if it's AI-generated," said Rikole Kelly, deputy Ventura County district attorney, who helped write the California bill. "It used to be that there were some really clear indicators ... with the advances in AI technology, that's just not the case anymore."

Justice Department officials say they already have the tools under federal law to go after offenders for such imagery. The U.S. Supreme Court in 2002 struck down a federal ban on virtual child sexual abuse material. But a federal law signed the following year bans the production of visual depictions, including drawings, of children engaged in sexually explicit conduct that are deemed "obscene." That law, which the Justice Department says has been used in the past to charge cartoon imagery of child sexual abuse, specifically notes there's no requirement "that the minor depicted actually exist."
The Justice Department brought that charge in May against a Wisconsin software engineer accused of using the AI tool Stable Diffusion to create photorealistic images of children engaged in sexually explicit conduct. He was caught after he sent some to a 15-year-old boy through a direct message on Instagram, authorities say. The man's lawyer, who is pushing to dismiss the charges on First Amendment grounds, declined further comment on the allegations in an email to the AP.

A spokesperson for Stability AI said the man is accused of using an earlier version of the tool that was released by another company, Runway ML. Stability AI says it has "invested in proactive features to prevent the misuse of AI for the production of harmful content" since taking over the exclusive development of the models. A spokesperson for Runway ML didn't immediately respond to a request for comment from the AP.

In cases involving "deepfakes" - when a real child's photo has been digitally altered to make them sexually explicit - the Justice Department is bringing charges under the federal "child pornography" law. In one case, a North Carolina child psychiatrist who used an AI application to digitally "undress" girls posing on the first day of school in a decades-old photo shared on Facebook was convicted of federal charges last year.

"These laws exist. They will be used. We have the will. We have the resources," Grocki said. "This is not going to be a low priority that we ignore because there's not an actual child involved."
[2]
AI-generated child sexual abuse images are spreading. Law enforcement is racing to stop them
[3]
AI-generated child sexual abuse images are spreading. Law enforcement is racing to stop them
[4]
AI-generated child sexual abuse images are spreading. Law enforcement is racing to stop them
[5]
AI-Generated Child Sexual Abuse Images Are Spreading. Law Enforcement Is Racing to Stop Them
[6]
AI child sexual abuse images are spreading. Here's how the DOJ is responding
[7]
AI-generated child sexual abuse images are spreading. Law enforcement is racing to stop them
Law enforcement agencies across the US are cracking down on a troubling spread of child sexual abuse imagery created through artificial intelligence technology. States are racing to ensure those generating obscene imagery of kids can be prosecuted under their laws. (AP Video/Eugene Garcia/Noreen Nasir)
U.S. law enforcement agencies are cracking down on the spread of AI-generated child sexual abuse imagery, as the Justice Department and states take action to prosecute offenders and update laws to address this emerging threat.
Law enforcement agencies across the United States are grappling with a disturbing trend: the proliferation of child sexual abuse imagery created using artificial intelligence (AI) technology. This includes both manipulated photos of real children and graphic depictions of computer-generated minors [1][2][3].
The Justice Department is taking an aggressive stance against offenders who exploit AI tools for this purpose. Steven Grocki, head of the Child Exploitation and Obscenity Section, emphasized, "We've got to signal early and often that it is a crime, that it will be investigated and prosecuted when the evidence supports it" [1][2][3].
Federal prosecutors have recently brought what is believed to be the first federal case involving purely AI-generated imagery, where the depicted children are entirely virtual [1][2]. In another case, a U.S. Army soldier stationed in Alaska was arrested for allegedly using an AI chatbot to create sexually explicit images of real children he knew [1][2][3].
These prosecutions highlight the urgent need to address the misuse of AI technology in creating child sexual abuse material. Law enforcement officials are concerned that the flood of AI-generated images could hinder efforts to rescue real victims and waste resources on identifying non-existent children [1][2][3].
In response to this emerging threat, states are rapidly passing legislation to ensure that prosecutors can bring charges under state laws for AI-generated "deepfakes" and other sexually explicit images of children. Governors in more than a dozen states have signed laws this year to crack down on digitally created or altered child sexual abuse imagery [1][2][3].
California recently enacted legislation, pushed by Ventura County District Attorney Erik Nasarenko, that explicitly makes AI-generated child sexual abuse material illegal under state law. This change was necessary because the previous law required prosecutors to prove that the imagery depicted a real child [1][2][3].
Even when children are not physically abused, the creation and distribution of AI-generated explicit imagery can have profound psychological effects. Kaylin Hayman, a 17-year-old former Disney Channel actor who became a victim of "deepfake" imagery, described the experience: "I felt like a part of me had been taken away. Even though I was not physically violated" [1][2][3].
Child advocacy groups are working to curb the misuse of AI technology. The National Center for Missing & Exploited Children reported receiving about 4,700 reports of content involving AI technology in 2023, with the number increasing to about 450 reports per month by October 2024 [1][2][3][4].
Major technology companies, including Google, OpenAI, and Stability AI, have agreed to collaborate with the anti-child sexual abuse organization Thorn to combat the spread of these images [1][2][3]. However, experts argue that more preventive measures should have been implemented before the technology became widely available [1][2][3].
A concerning report by the Stanford Internet Observatory revealed that a research dataset used for leading AI image-makers contained links to sexually explicit images of children, contributing to the ease with which some tools could produce harmful imagery [1][2][3].
As AI technology continues to advance, law enforcement and policymakers face the challenge of keeping pace with rapidly evolving threats. The realistic nature of AI-generated images makes it increasingly difficult to distinguish between real and virtual victims, potentially complicating investigations and prosecutions [1][2][3][4][5].
The spread of open-source AI models that users can download and modify on their computers presents additional challenges, as offenders can further train these tools to create explicit depictions of children [1][2][3]. This underscores the need for ongoing collaboration between technology companies, law enforcement, and policymakers to address this critical issue.
U.S. News & World Report | AI-Generated Child Sexual Abuse Images Are Spreading. Law Enforcement Is Racing to Stop Them
Federal prosecutors in the United States are intensifying efforts to combat the use of artificial intelligence in creating and manipulating child sex abuse images, as concerns grow about the potential flood of illicit material enabled by AI technology.
8 Sources
The rapid proliferation of AI-generated child sexual abuse material (CSAM) is overwhelming tech companies and law enforcement. This emerging crisis highlights the urgent need for improved regulation and detection methods in the digital age.
9 Sources
The rise of AI-generated child sexual abuse material presents new legal and ethical challenges, as courts and lawmakers grapple with balancing free speech protections and child safety in the digital age.
2 Sources
The UK plans to introduce new laws criminalizing AI-generated child sexual abuse material, as research reveals a growing threat on dark web forums. This move aims to combat the rising use of AI in creating and distributing such content.
2 Sources
European authorities, led by Danish law enforcement, have arrested 25 individuals in a major operation targeting the creation and distribution of AI-generated child sexual abuse material (CSAM). The ongoing investigation, dubbed Operation Cumberland, has identified 273 suspects and seized 173 electronic devices across 19 countries.
10 Sources