AU Police solicit "happy child" snaps to train CSAM AI

A bit lacking in detail as to what happens to the images when the research project is over - and who has access to all the data during and after the project.

Also, perhaps the project could be compromised if pedos submitted images of distressed children. That would be a ballsy attack, though. (In other words, the problem they cite with scraping images from the open internet could also be an issue with sourcing images this way.)

1 Like

And there’s always the danger that poorly executed AI could lead to overzealous service providers flagging their users’ innocent images, which might result in annoyances (at best) or legal battles (at worst).

1 Like

Yes, as a research project that is not a problem, but presumably the AFP (Australian Federal Police) are interested because they intend to use it in real life if the project is a success - and then you have to worry about false positives.

As implied in the article (which is by no means definitive), the AFP’s intended use case is:

  • The AFP has raided a property / infiltrated an online server and has access to a cache of thousands of images.
  • They want to examine every image to see whether it contains evidence of CSA.
  • Right now the process involves a human being examining the images, which is time-consuming at best and potentially distressing as well. (There are other technological approaches in use, e.g. a database of ‘signatures’ of known CSAM images - sketched just below this list - but that doesn’t help with any new material.)
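
For what it’s worth, the ‘signatures’ approach in that last bullet is essentially a lookup against a database of hashes of already-known images. A minimal sketch of the idea (the hash set is a placeholder, and real systems use perceptual hashes such as PhotoDNA rather than SHA-256 so re-encoded copies still match):

```python
import hashlib
from pathlib import Path

# Placeholder set of hashes of already-known images. Real systems use
# perceptual hashes rather than SHA-256; SHA-256 just keeps the sketch simple.
KNOWN_SIGNATURES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}


def sha256_of(path: Path) -> str:
    """Hex SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


def triage(image_dir: Path) -> tuple[list[Path], list[Path]]:
    """Split a seized cache into 'matches known material' and 'still needs review'."""
    known, unknown = [], []
    for path in sorted(p for p in image_dir.rglob("*") if p.is_file()):
        (known if sha256_of(path) in KNOWN_SIGNATURES else unknown).append(path)
    return known, unknown
```

Everything that lands in the second list - i.e. anything never catalogued before - still has to be looked at by a human (or, per the article, a classifier), which is the gap the AFP project is aimed at.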

So if it were limited to, say, the use case of “the AFP has raided your property” then I don’t think false positives are your worst problem. You already have a world of pain even if the AFP finds nothing suspicious at all.

I would guess also that if the AI flags an image, it would have to be confirmed by a human being. I can’t imagine that a court would accept a declaration by the AFP that “their AI flagged the image” as evidence. I mean, I can imagine it as a dystopian future - the AFP’s AI talks to the court’s AI and you are convicted, end of story - but for the time being …

However, you don’t have to be a genius to combine Apple’s scanning of the images on your phone (looking for CSAM) with the technology being discussed in this topic to see where it could go … with most online service providers and most operating system providers automatically examining every image of every customer, looking for a range of forbidden material.

One thing I am unclear on is why there is an interest in detecting images of exploited, unsafe, unhappy children. Surely it would be easier for AI to detect, you know, actual CSAM? Kids might be unhappy (throwing a tanty) X% of the time anyway.
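
To put some numbers on the false-positive worry - every figure below is invented purely for illustration - even a fairly accurate classifier gets swamped by innocent images when the thing it is looking for is rare and ordinary “unhappy-looking” photos are common:

```python
# Toy base-rate calculation; every number here is invented for illustration.
prevalence = 0.001           # fraction of images in a cache that actually depict abuse
sensitivity = 0.95           # P(flagged | abusive image)
false_positive_rate = 0.05   # P(flagged | innocent image), e.g. an ordinary tantrum photo

p_flagged = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
p_abuse_given_flag = (sensitivity * prevalence) / p_flagged

print(f"P(flagged image is actually abusive) = {p_abuse_given_flag:.1%}")
# With these made-up numbers, roughly 98% of flagged images are innocent,
# which is exactly why a human reviewer behind every flag matters.
```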

1 Like

Well, you see, that would involve having a data set of that kind of material to train the neural network with, and what kind of horrible person would keep that around?
Oh wait, that would be the Australian Federal Police.

I think their objective with this is to be far more overbearing on everyone. See, they’re training the system to detect “not abused”, so a “positive” is anything the system DOESN’T match (a toy sketch of that logic follows below). Last I checked, there’s a lot of ground between “always happy all the time” and “abused”: abused children aren’t always going to look unhappy, and happy children aren’t always going to look perfectly content.
Ultimately, if the AFP gets everything they want, I’d expect them to use this as probable cause to investigate just about anyone they don’t like for child abuse, on top of whatever else those people were being investigated for. If you say something that turns out to be against the utterly maligned Online Safety Act, you get investigated if you’ve ever uploaded a single photo of your son who was a little upset that day because he wanted to play video games instead of going on a hike.
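
Here is the promised toy sketch of “flag whatever doesn’t match the happy snaps” - random vectors stand in for whatever image embeddings a real system would use, and nothing here reflects how the AFP’s actual model works:

```python
# Toy version of "train only on 'safe' examples, flag whatever doesn't match".
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Pretend these are embeddings of the solicited "happy child" snaps.
happy_embeddings = rng.normal(loc=0.0, scale=1.0, size=(1000, 32))

# One-class detector: it only ever sees the "safe" distribution.
detector = IsolationForest(contamination=0.01, random_state=0).fit(happy_embeddings)

# An ordinary photo of a kid mid-tantrum sits a bit outside that distribution -
# not abuse, just not a "happy snap".
tantrum_embedding = rng.normal(loc=1.5, scale=1.0, size=(1, 32))

# predict() returns -1 for "doesn't match the training data", i.e. flagged.
print(detector.predict(tantrum_embedding))   # very likely [-1]
```

The point being: “doesn’t look like the training set” and “evidence of abuse” are very different things, and the gap between them is where the false positives live.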

As far as I’m concerned, there’s no reason for this to exist and you really should regard it as a sign of a police state, as if the last two years weren’t enough evidence of that.

2 Likes