Copied from miku-chan03?
Here’s a dramatic reading of some of miku’s posts: https://www.youtube.com/watch?v=BDqik-Y27Uc
The text from the OP is the first one read in the video.
Finally. I haven’t seen a single positive use of these detectors yet because of their poor accuracy. They’re only slightly better than professors or lawyers asking ChatGPT itself whether something was written by ChatGPT.
Direct link to the (short) report this article refers to:
https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf
https://purl.stanford.edu/vb515nd6874
After reading it, I’m still unsure exactly what they consider to be CSAM and how much of each category they found. Here’s what they count as CSAM categories, as far as I can tell. There’s no indication of how much the categories overlap, and therefore no way to tell how many images beyond the 112 PhotoDNA matches depict actual children.
Personally, I’m not sure what the take-away is supposed to be from this. It’s impossible to moderate all user-generated content quickly. This is not a Fediverse-specific issue; the same is true for Mastodon, Twitter, Reddit, and every other large site with user-generated content. It’s a hard problem to solve. Known CSAM being deleted within hours is already pretty good, imho.
Meta-discussion in particular is hard to police. Based on the report, it seems that most of this material by volume is traded via other services (chat rooms).
For me, there’s a huge difference between actual children being directly exploited and virtual depictions of fictional children. Personally, I put the latter in the same category as any other fetish imagery that would be illegal if it involved actual humans (guro/vore/bestiality/rape, etc.).
Perhaps this ASMR-ish reading of Java class exceptions might calm you down? https://www.youtube.com/watch?v=CCCTCVBFt6E