UNICEF calls on governments to criminalize AI-generated child abuse material

UNITED NATIONS, Feb 05 (APP): The United Nations children’s agency, UNICEF, has issued an urgent call for governments to criminalize AI-generated child sexual abuse material, citing alarming evidence that at least 1.2 million children worldwide had their images manipulated into sexually explicit deepfakes in the past year.
“The harm from deepfake abuse is real and urgent,” the UN agency said in a statement. “Children cannot wait for the law to catch up.”
At least 1.2 million young people disclosed having had their images manipulated into sexually explicit deepfakes in the past year, according to a new study across 11 countries conducted by the UN agency, the international police agency INTERPOL, and ECPAT, a global network working to end the sexual exploitation of children worldwide.
In some countries, this represents one in 25 children, or the equivalent of one child in a typical classroom, the study found.
Deepfakes – images, videos, or audio generated or manipulated with AI and designed to look real – are increasingly being used to produce sexualized content involving children, including through so-called “nudification”, where AI tools are used to strip or alter clothing in photos to create fabricated nude or sexualized images.
“When a child’s image or identity is used, that child is directly victimized. Even without an identifiable victim, AI-generated child sexual abuse material normalizes the sexual exploitation of children, fuels demand for abusive content and presents significant challenges for law enforcement in identifying and protecting children that need help,” UNICEF said.
“Deepfake abuse is abuse, and there is nothing fake about the harm it causes.”
The UN agency said it strongly welcomed the efforts of those AI developers who are implementing “safety-by-design” approaches and robust guardrails to prevent misuse of their systems.
However, the response so far is patchy, and too many AI models are not being developed with adequate safeguards, it said.
The risks, the agency added, can be compounded when generative AI tools are embedded directly into social media platforms where manipulated images spread rapidly.
“Children themselves are deeply aware of this risk,” UNICEF said, adding that in some of the study countries, up to two-thirds of young people said they worry that AI could be used to create fake sexual images or videos.
“Levels of concern vary widely between countries, underscoring the urgent need for stronger awareness, prevention and protection measures.”
To address this fast-growing threat, the UN agency issued Guidance on AI and Children 3.0 in December with recommendations for policies and systems that uphold child rights.
Right now, UNICEF is calling for immediate action to confront the escalating threat:
— Governments need to expand definitions of child sexual abuse material to include AI-generated content and criminalize its creation, procurement, possession and distribution;
— AI developers should implement safety-by-design approaches and robust guardrails to prevent misuse of AI models; and,
— Digital companies should prevent the circulation of AI-generated child sexual abuse material, not merely remove it, and strengthen content moderation with investment in detection technologies.