EU proposes criminalizing AI-generated child sexual abuse and deepfakes

AI-generated imagery and other forms of deepfakes depicting child sexual abuse (CSA) could be criminalized in the European Union under plans to update existing legislation to keep pace with technology developments, the Commission announced today.

It’s also proposing to create a new criminal offence of livestreaming child sexual abuse. The possession and exchange of so-called “pedophile manuals” would also be criminalized under the plan, which is part of a wider package of measures the EU says is intended to boost prevention of CSA, including by increasing awareness of online risks, and to make it easier for victims to report crimes and obtain support (including granting them a right to financial compensation).

The proposal to update the EU’s current rules in this area, which date back to 2011, also includes changes around mandatory reporting of offences.

Back in May 2022, the Commission presented a separate piece of CSA-related draft legislation, aiming to establish a framework which could make it obligatory for digital services to use automated technologies to detect and report existing or new child sexual abuse material (CSAM) circulating on their platforms, and identify and report grooming activity targeting kids.

The CSAM-scanning plan has proven to be highly controversial — and it continues to split lawmakers in the parliament and the Council, as well as kicking up suspicions over the Commission’s links with child safety tech lobbyists and raising other awkward questions for the EU’s executive, over a legally questionable foray into microtargeted ads to promote the proposal.

The Commission’s decision to prioritize the targeting of digital messaging platforms to tackle CSA has attracted a lot of criticism that the bloc’s lawmakers are focusing on the wrong area for combatting a complex societal problem — which may have generated some pressure for it to come up with follow-on proposals. (Not that the Commission is saying that, of course; it describes today’s package as “complementary” to its earlier CSAM-scanning proposal.)

That said, even in the less than two years since the controversial private-message-scanning plan was presented, there’s been a massive uptick in attention to the risks around deepfakes and AI-generated imagery, including concerns the tech is being abused to produce CSAM, and worries this synthetic content could make it even more challenging for law enforcement authorities to identify genuine victims. So the viral boom in generative AI has given lawmakers a clear incentive to revisit the rules.

“Both increased online presence of children and the technological developments create new possibilities for abuse,” the Commission suggests in a press release today. It also says the proposal aims to “reduce the pervasive impunity of online child sexual abuse and exploitation”.

An impact assessment the Commission conducted ahead of presenting the proposal identified the increased online presence of kids and the “latest technological developments” as areas that are creating new opportunities for CSA to happen. It also said it’s concerned about differences in Member States’ legal frameworks holding back action to combat abuse; and wants to improve the current “limited” efforts to prevent CSA and assist victims.

“Fast evolving technologies are creating new possibilities for child sexual abuse online, and raises challenges for law enforcement to investigate this extremely serious and wide spread crime,” added Ylva Johansson, commissioner for home affairs, in a supporting statement. “A strong criminal law is essential and today we are taking a key step to ensure that we have effective legal tools to rescue children and bring perpetrators to justice. We are delivering on our commitments made in the EU Strategy for a more effective fight against Child sexual abuse presented in July 2020.”

On online safety risks for kids, the Commission’s proposal aims to encourage Member States to step up their investment in “awareness raising”.

As with the CSAM-scanning plan, it will be up to the EU’s co-legislators, in the Parliament and Council, to determine the final shape of the proposals. And there’s limited time for talks ahead of parliamentary elections and a rebooting of the college of commissioners later this year — albeit today’s CSA-combatting proposals may prove rather less divisive than the message-scanning plan. So there could be a chance of the new package being adopted while the other remains stalled.

If/when there’s agreement on how to amend the current Directive on combatting CSA, the updated law would enter into force 20 days after its publication in the Official Journal of the EU, per the Commission.
