Hacker News

rbanffy
Child safety org launches AI model trained on real child sex abuse images arstechnica.com

chockablocker 5 hours ago

Now that such a model exists, I think we can expect lobbying to make its use mandatory. For the children...

Once adopted, this will lead to an increase in randomly locked-out accounts due to model false positives: when you screen for something very rare, most of the flagged items will be false positives (Bayes' theorem, base rates, etc.).
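
To make the base-rate point concrete, here's a rough back-of-the-envelope calculation; the prevalence and accuracy figures below are made up for illustration, not taken from the article:

    prevalence = 1e-5           # assumed fraction of scanned images that are actually CSAM
    sensitivity = 0.99          # assumed true-positive rate of the model
    false_positive_rate = 0.01  # assumed fraction of innocent images flagged anyway

    # Bayes' theorem: P(actually CSAM | flagged)
    p_flag = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    p_csam_given_flag = sensitivity * prevalence / p_flag

    print(f"P(actual CSAM | flagged) = {p_csam_given_flag:.4f}")
    # ~0.001, i.e. roughly 99.9% of flagged items would be false positives

Even with a very accurate model, the rarity of the target means almost every flag lands on an innocent account.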

This blog has a few interesting articles on the limitations of these technologies: https://www.hackerfactor.com/blog/index.php?/archives/971-FB... While it covers the currently used hash-based approaches, a classification model will have similar problems.
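
For a rough intuition of why hash-based matching faces the same trade-off, here's a minimal sketch of how a perceptual-hash match decision typically works; the 64-bit hashes and the threshold are purely illustrative, not any vendor's actual system:

    def hamming_distance(a: int, b: int) -> int:
        """Number of differing bits between two 64-bit perceptual hashes."""
        return bin(a ^ b).count("1")

    def is_match(image_hash: int, known_hash: int, threshold: int = 10) -> bool:
        # Loosening the threshold catches more altered copies of known images,
        # but also flags more unrelated ones; the same precision/recall
        # trade-off a classifier faces when its score cutoff is lowered.
        return hamming_distance(image_hash, known_hash) <= threshold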

bell-cot 8 hours ago

> Once suspected CSAM is flagged, a human reviewer remains in the loop to ensure oversight.

Obviously necessary... but I can't imagine a less desirable job for 99.9% of decent human beings.

noufalibrahim 8 hours ago

I remember reading somewhere that the folks who work for the companies Facebook outsources its moderation work to suffer from serious psychological problems.

V-eHGsd_ 7 hours ago

Back when Orkut was a thing, Google did this one weekend with internal employees. Some co-workers participated; unsurprisingly, they all said it was _very_ disturbing.

noufalibrahim 7 hours ago

Here's one report detailing some of the stuff that happened. https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...

Daviey 7 hours ago

It's obviously the right thing to say it's an undesirable job, but with the right support, I actually think it could be a good job. You'd be able to go home each day knowing that you played a part in reducing the harm to these children and other children, bringing the perpetrators to justice, and getting everyone the help they need.

soneil 7 hours ago

I think that'd be the huge differentiator: you need the feedback loop showing that there are actual results and outcomes from your work.

If it were just sitting 8 hours a day effectively solving the world's most gruesome captcha, I wouldn't last long.

Spivak 7 hours ago

I'm honestly not sure why we can't have that 0.01% actually do the job. Like, I get the optics are terrible, but I think, were I standing at the gallows, I would prefer my hangman enjoy his job; less total suffering in the world is created that way.

I think it's a holdover from a puritan mindset: work that needs doing but is unsavory (slaughterhouse workers, exterminators, executioners, and, well... this) is only okay if the person doing it feels bad the whole time.

sshine 7 hours ago

> I would prefer my hangman enjoy his job

I don't want him looking away at the last minute.

> I think it's a holdover from a puritan mindset that [...] only okay if the person doing it feels bad the whole time.

The Dexter TV series did popularise the idea of an outwardly moral psychopathic mass murderer.

Or rather, if you happen to be a person who just wants to kill, at least channel your energy.

bell-cot 7 hours ago

For starters - how do you find & verify the 0.01% (or whatever) of decent people who do not find the "CSAM Verification" job horrible?

Given how easy so-called AIs are to maliciously mis-train, there are major issues with having "non-decent" people doing this job. Whereas the homicide-enjoying hangman is not making judgement calls on society's behalf.

sshine 7 hours ago

> how do you find & verify [...] decent people who do not find the "CSAM Verification" job horrible?

I think the distinction is:

Some people are okay with the job because they're pedophiles.

Others are okay with the job because they're insensitive to violence.

"Decent" is slightly moralistic and blurs the picture. The people can be absolute assholes, and that's okay, as long as they're not personally motivated to collect and spread CSAM. So we're really looking for a bunch of psychopaths (in the most well-intentioned sense of the word) who are indifferent to children. I think it's possible to make a qualifying test for both criteria.

nostrebored 7 hours ago

Thorn and Hive trained a classifier for CSAM detection. The rest of the article is completely irrelevant.
