A bipartisan group of state attorneys general urged Congress to broaden its review of artificial intelligence (AI) to specifically include its use in creating deepfake child sexual abuse images.
Tennessee Attorney General Jonathan Skrmetti joined colleagues from 54 states and territories in a Tuesday letter asking federal officials to examine AI’s use in making child sexual abuse material (CSAM). The letter gave an example of how the process works.
“AI tools can rapidly and easily create ‘deepfakes’ by studying real photographs of abused children to generate new images showing those children in sexual positions,” the letter reads. “This involves overlaying the face of one person on the body of another. Deepfakes can also be generated by overlaying photographs of otherwise unvictimized children on the internet with photographs of abused children to create new CSAM involving the previously unharmed children.”
The group said AI can also be used to create sexualized images and videos of children who “do not exist.”
“AI can combine data from photographs of both abused and non-abused children to animate new and realistic sexualized images of children who do not exist, but who may resemble actual children,” reads the letter. “Creating these images is easier than ever, as anyone can download the AI tools to their computer and create images by simply typing in a short description of what the user wants to see. And because many of these AI tools are ‘open-source,’ the tools can be run in an unrestricted and un-policed way.”
The attorneys general want Congress to form a special commission to study specifically how AI can be used to exploit children. They also want federal lawmakers to expand existing restrictions on CSAM to explicitly cover AI-generated material.