
Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Chris <ithinkiam@gmail.com>
Newsgroups: misc.phone.mobile.iphone
Subject: Re: Apple accused of underreporting suspected CSAM on its platforms
Date: Mon, 29 Jul 2024 17:28:08 -0000 (UTC)
Organization: A noiseless patient Spider
Lines: 145
Message-ID: <v88jf8$j1j9$1@dont-email.me>
References: <v7mup4$7vpf$1@solani.org>
 <lg8ea1Fa94U1@mid.individual.net>
 <xn0oonlp4azqw16000@reader443.eternal-september.org>
 <lga2k1F7uk8U1@mid.individual.net>
 <xn0oonrftb7hazk002@reader443.eternal-september.org>
 <v7olut$19iie$1@dont-email.me>
 <lga8vfF8qq0U3@mid.individual.net>
 <v7q9vj$1l9co$1@dont-email.me>
 <v7qn3b$2hg0$1@nnrp.usenet.blueworldhosting.com>
 <v7rclq$1r24r$2@dont-email.me>
 <lgdac6F3c6aU4@mid.individual.net>
 <v80bju$2s7ns$2@dont-email.me>
 <lgi05eFq6vhU2@mid.individual.net>
 <v80j93$2nqsm$5@dont-email.me>
 <lgildlFtal2U1@mid.individual.net>
 <v85r6s$mgr$1@dont-email.me>
 <lgni5gFl7shU3@mid.individual.net>
 <v87gf2$d1eu$1@dont-email.me>
 <lgpqjfF17veU1@mid.individual.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 29 Jul 2024 19:28:09 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="fd18570cb1b26bb7482a6614bc9bfd23";
	logging-data="624233"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX19+KauUJQRgwRTfjhpJnYIlrVFGAEPhfac="
User-Agent: NewsTap/5.5 (iPhone/iPod Touch)
Cancel-Lock: sha1:dirQOanSTIhYvr+ZiujcYAe0HnM=
	sha1:uRcox30DEy1qtaJWj83tpZPBRss=
Bytes: 7920

Jolly Roger <jollyroger@pobox.com> wrote:
> On 2024-07-29, Chris <ithinkiam@gmail.com> wrote:
>> Jolly Roger <jollyroger@pobox.com> wrote:
>>> On 2024-07-28, Chris <ithinkiam@gmail.com> wrote:
>>> 
>>>> No-one is going to be charged for a dubious photo of their own
>>>> child. There are much bigger fish to fry and get into jail. 
>>> 
>>> You're wrong. It has already happened:
>>> 
>>> A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged
>>> Him as a Criminal <https://archive.is/78Pla#selection-563.0-1075.217>
>> 
>> I explicitly said "charged". No-one got charged. The law is working
>> just fine. It's the tech, as I've been arguing all along, that's the
>> problem. 
> 
> So it's okay that these parents and their child had their privacy
> violated, their child's naked photos added to the CSAM database, and
> their accounts (along with years of emails, photos, and so on)
> revoked/deleted because they were officially charged?

Nope. Not ok. I didn't say otherwise. 

I'm explicitly saying the tech has to be used proportionately and
effectively. The current systems are neither. 

>>> Read the whole article to get a glimpse of what innocent people go
>>> through who fall victim to this invasive scanning. 
>>> 
>>> Do you think these parents and their child consider their privacy to
>>> be violated? How would you feel if your intimate photos were added to
>>> the PhotoDNA CSAM database because they were incorrectly flagged?
>> 
>> This wasn't PhotoDNA, which is what Apple's system resembled. It was
>> Google's AI method, designed to "recognize never-before-seen
>> exploitative images of children", which is where the real danger sits.
>> 
>> It is designed to identify new abuse images based only on the pixel
>> data, so all hits will be massively enriched for things that look
>> like abuse. A human won't have the ability to accurately identify the
>> (likely innocent) motivation for taking the photo and, "to be safe",
>> will pass it on to someone else to make the decision, i.e. law
>> enforcement. Law enforcement will have access to much more
>> information and will see it's an obvious mistake, as seen in your
>> article.
> 
> Actually, a human being does review it with Google's system:

I was unclear. I'm not saying a human doesn't review; I'm saying that,
given the dozens or hundreds of suspected abuse images they review each
day, they won't have the ability to make informed decisions.
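
To put rough numbers on the reviewer's position (every figure below is
my own illustrative assumption, not a published Google or Apple
number), a back-of-the-envelope sketch in Python shows why a queue of
AI-flagged, never-before-seen images ends up dominated by innocent
photos even when the classifier is very accurate:

uploads_per_day  = 1_000_000_000  # photos scanned per day (assumed)
abusive_fraction = 1e-6           # assumed prevalence of new CSAM
sensitivity      = 0.99           # fraction of real CSAM flagged
false_pos_rate   = 1e-4           # fraction of innocent photos flagged

true_hits  = uploads_per_day * abusive_fraction * sensitivity
false_hits = uploads_per_day * (1 - abusive_fraction) * false_pos_rate
print(true_hits, false_hits, true_hits / (true_hits + false_hits))
# ~990 true flags vs ~100,000 false flags: roughly 1% precision

With numbers anywhere in that region, the safe option for a moderator
is always to pass it on.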

> --- 
> A human content moderator for Google would have reviewed the photos
> after they were flagged by the artificial intelligence to confirm they
> met the federal definition of child sexual abuse material. When Google
> makes such a discovery, it locks the user’s account, searches for other
> exploitative material and, as required by federal law, makes a report to
> the CyberTipline at the National Center for Missing and Exploited
> Children.  
> ---
> 
>> Apple's system was more like hashing the image data and comparing
>> hashes, where false positives are due to algorithmic randomness. The
>> pixel data, when viewed by a human, won't look anything like CSAM, so
>> an easy decision can be made.
>> 
>> What's crucial here is that Google are looking for new stuff - which
>> is always problematic - whereas Apple's was not. The search space when
>> looking for existing images is much tinier and the impact of false
>> positives much, much smaller. 
> 
> Yes, but even in Apple's case, there's a small chance of a false
> positive match. And were that to happen, there is a danger of an
> innocent person's privacy being violated.

In every case there's a chance of false positives. Apple would have had
a lower false-positive rate (FPR) than *the current* system.
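
To make the difference concrete, here is a minimal sketch of
hash-database matching. It's my own illustration using an ordinary
cryptographic hash, not Apple's NeuralHash (which is perceptual and
deliberately tolerant of resizing and re-encoding), and the database
entries are placeholders:

import hashlib

KNOWN_CSAM_HASHES = {"placeholder-1", "placeholder-2"}  # fixed, finite DB

def fingerprint(image_bytes):
    # Stand-in for a perceptual hash such as NeuralHash or PhotoDNA.
    return hashlib.sha256(image_bytes).hexdigest()

def flagged(image_bytes):
    # A false positive here is an accidental collision with the hash of
    # an already-known image; the photo itself need not look like abuse
    # at all, so a human reviewer can dismiss it at a glance.
    return fingerprint(image_bytes) in KNOWN_CSAM_HASHES

A classifier hunting for never-before-seen material has no such fixed
reference set: its false positives are, by construction, the innocent
photos that most resemble abuse, which is exactly what makes the
reviewer's job hard.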

Given the choice, I'm in favour of the better, evidence-based method.
You're in favour of the worse system that is actually being used and
then complain that it's worse.

>>>> How many children are you prepared to be abused to protect YOUR
>>>> privacy? 
>>> 
>>> Now you're being absurd. My right to privacy doesn't cause any
>>> children to be abused. 
>> 
>> That's what you'd like to think, yet the reality is that not only is
>> it harder to identify perpetrators, but also, ironically, your
>> position ensures more people get erroneously labelled. 
> 
> Nope. I don't support any scanning of private content. 

Yet it's already happening, so why not support the better method?

>>>>> Apple was wise to shelve this proposal. And I am happy to see that
>>>>> they embraced more private features such as the Safe Communication
>>>>> feature which is done without violating customers' privacy. 
>>>> 
>>>> It wasn't violating anyone's privacy. For the umpteenth time. It
>>>> actually preserved people's privacy by design. 
>>> 
>>> While it went further than the rest to protect people's privacy,
>>> there was still room for error and for innocent people to be harmed.
>>> That's the truth you seem to want to disregard, and that's why Apple
>>> shelved it.
>> 
>> My truth is that the Apple method was a significant improvement.
> 
> I agree - still not good enough for me though.

"Perfect is the enemy of the good"

By seeking perfection you and others are allowing and enabling child abuse.


>> Plus, if people didn't like it - unreasonably - they didn't have to
>> use iCloud.
> 
> True. 
> 
>> Apple only shelved it for PR reasons, which is a real shame. 
> 
> You don't know all of Apple's motivations. What we know is Apple shelved
> it after gathering feedback from industry experts. And many of those
> experts were of the opinion that even with Apple's precautions, the risk
> of violating people's privacy was too great.

That wasn't the consensus. The noisy tin-foil brigade drowned out any
possible discussion. Apple should have simply implemented it, as Google
have done - albeit badly.

>> What you're disregarding is that the alternative hasn't been more
>> privacy for people, it's been less privacy and more errors. A
>> lose-lose situation. 
>> 
>> You want to live in a world that doesn't exist. 
> 
> Nah. I simply choose not to engage with those alternatives for things I
> consider private. 

Head-in-sand mentality.