A bit lacking in detail as to what happens to the images when the research project is over - and who has access to all the data during and after the project.
Also perhaps the project could be compromised if pedos submit images of distressed children. That would be a ballsy attack though. (In other words, what they say is a problem with scraping images from the open internet could also be an issue with sourcing images this way.)
And there's always the danger that poorly executed AI could lead to overzealous service providers flagging their users' innocent images, which might result in annoyances (at best) or legal battles (at worst).
Yes, as a research project that is not a problem but presumably the AFP (Australian Federal Police) are interested in this project because they intend to use it in real life if the project is a success - and then you have to worry about false positives.
As implied in the article (which is by no means definitive) the intended use case by the AFP is:
- The AFP has raided a property / infiltrated an online server and has access to a cache of thousands of images.
- They want to examine every image to see whether it contains evidence of CSA.
- Right now the process involves a human being examining images, which is at least time-consuming but also potentially distressing. (There are other technological approaches being used e.g. database of "signatures" of known CSAM images - but that doesn't help with any new material.)
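The signature-database approach mentioned above is, at its core, hash matching: compute a fingerprint of each image and look it up in a set of known fingerprints. A minimal sketch (everything here is invented for illustration; real systems use perceptual hashes such as PhotoDNA so that resized or re-encoded copies still match, not plain cryptographic hashes):

```python
import hashlib

# Hypothetical database of signatures of known images.
# (The entry below is just the SHA-256 of the bytes b"test".)
known_signatures = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known(image_bytes: bytes) -> bool:
    """Return True if this exact image's signature is in the database."""
    return hashlib.sha256(image_bytes).hexdigest() in known_signatures

print(is_known(b"test"))       # True: matches a known signature
print(is_known(b"new image"))  # False: genuinely new material never matches
```

The second call is the limitation the post points to: a signature database can only recognise material it has already seen, which is why there's interest in a classifier for new images.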
So if it were limited to, say, the use case of "the AFP has raided your property" then I don't think false positives are your worst problem. You already have a world of pain even if the AFP finds nothing suspicious at all.
I would guess also that if the AI flags an image, it would have to be confirmed by a human being. I can't imagine that a court would accept a declaration by the AFP that "their AI flagged the image" as evidence. I mean I can imagine it, as a dystopian future, that the AFP's AI talks to the court's AI and you are convicted, end-of-story - but for the time being …
However you don't have to be a genius to combine Apple's scanning of the images on your phone (looking for CSAM) with the technology being discussed in this topic to see where it could go … with most online service providers and most operating system providers automatically examining every image of every customer looking for a range of forbidden material.
One thing I am unclear on is why there is an interest in detecting images of exploited, unsafe, unhappy children. Surely it would be easier for AI to detect, you know, actual CSAM? Kids might be unhappy (throwing a tanty) X% of the time anyway.
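The "X% of the time anyway" point is really a base-rate argument, and it can be made concrete. With invented numbers (none of these come from the article): even a classifier that is right 95% of the time in both directions produces mostly false positives when the thing it is looking for is rare:

```python
# All numbers below are hypothetical, purely to illustrate the base-rate effect.
prevalence = 0.001   # fraction of images that actually depict abuse
sensitivity = 0.95   # P(flagged | abuse)
specificity = 0.95   # P(not flagged | no abuse)

true_pos = prevalence * sensitivity            # abusive images correctly flagged
false_pos = (1 - prevalence) * (1 - specificity)  # innocent images wrongly flagged

# Positive predictive value: chance a flagged image is genuinely abusive.
ppv = true_pos / (true_pos + false_pos)
print(f"{ppv:.1%}")  # 1.9% - roughly 98 in every 100 flags are false alarms
```

Under these assumptions almost every flag a reviewer (or a court) sees would be an innocent image, which is why the false-positive worry scales so badly once the tool is pointed at the general population rather than a seized cache.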
Well, you see, that would involve having a data set of that kind of material to train the neural network with, and what kind of horrible person would keep that around?
Oh wait, that would be the Australian Federal Police.
I think their objective with this is to be far more overbearing on everyone. See, they're training the system to detect "not abused". So a "positive" is anything the system DOESN'T match. Last I checked, there's a lot of ground between "always happy all the time" and "abused". Abused children aren't always going to look unhappy, happy children aren't always going to look perfectly content.
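If the system really does work by matching "not abused" and treating every non-match as a positive, the decision rule is an anomaly-style threshold. A hypothetical sketch (the scores and threshold are made up) of why that widens the net:

```python
# Hypothetical confidence scores from a model trained only on "safe/content" images.
# Anything that doesn't look enough like the training data gets flagged -
# whether it's actual abuse, a tantrum, or just an unusual photo.
def flag(safeness_score: float, threshold: float = 0.8) -> bool:
    """Flag the image unless the model confidently matches 'not abused'."""
    return safeness_score < threshold

images = {"content_kid": 0.95, "tantrum": 0.55, "odd_lighting": 0.70}
flagged = [name for name, score in images.items() if flag(score)]
print(flagged)  # ['tantrum', 'odd_lighting'] - neither involves abuse
```

Everything in the gap between "perfectly content" and the threshold becomes a positive, which is exactly the concern above.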
Ultimately, if the AFP gets everything they want, I'd expect them to use this as probable cause to investigate just about anyone the AFP doesn't like for child abuse in addition to whatever they're investigating people for. If you say something that turns out to be against the utterly maligned Online Safety Act, you get investigated if you've ever uploaded a single photo of your son who was a little upset that day because he wanted to play video games instead of go on a hike.
As far as I'm concerned, there's no reason for this to exist and you really should regard it as a sign of a police state, as if the last two years weren't enough evidence of that.