Path: ...!2.eu.feeder.erje.net!feeder.erje.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Chris <ithinkiam@gmail.com>
Newsgroups: misc.phone.mobile.iphone
Subject: Re: Apple accused of underreporting suspected CSAM on its platforms
Date: Mon, 29 Jul 2024 07:30:42 -0000 (UTC)
Organization: A noiseless patient Spider
Lines: 166
Message-ID: <v87gf2$d1eu$1@dont-email.me>
References: <v7mup4$7vpf$1@solani.org>
 <lg8ea1Fa94U1@mid.individual.net>
 <xn0oonlp4azqw16000@reader443.eternal-september.org>
 <lga2k1F7uk8U1@mid.individual.net>
 <xn0oonrftb7hazk002@reader443.eternal-september.org>
 <v7olut$19iie$1@dont-email.me>
 <lga8vfF8qq0U3@mid.individual.net>
 <v7q9vj$1l9co$1@dont-email.me>
 <v7qn3b$2hg0$1@nnrp.usenet.blueworldhosting.com>
 <v7rclq$1r24r$2@dont-email.me>
 <lgdac6F3c6aU4@mid.individual.net>
 <v80bju$2s7ns$2@dont-email.me>
 <lgi05eFq6vhU2@mid.individual.net>
 <v80j93$2nqsm$5@dont-email.me>
 <lgildlFtal2U1@mid.individual.net>
 <v85r6s$mgr$1@dont-email.me>
 <lgni5gFl7shU3@mid.individual.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 29 Jul 2024 09:30:43 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="fd18570cb1b26bb7482a6614bc9bfd23";
	logging-data="427486"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/eSI1JQwzfXSyiQwmgGQishtd5SpQmc9g="
User-Agent: NewsTap/5.5 (iPhone/iPod Touch)
Cancel-Lock: sha1:xL+hXP/sGI1SQB7/7GZkEEAnUs8=
	sha1:WD3OExxuf77I6VxqISJD7YBnwLM=
Bytes: 8879

Jolly Roger <jollyroger@pobox.com> wrote:
> On 2024-07-28, Chris <ithinkiam@gmail.com> wrote:
>> Jolly Roger <jollyroger@pobox.com> wrote:
>>> On 2024-07-26, Alan <nuh-uh@nope.com> wrote:
>>>> On 2024-07-26 09:11, Jolly Roger wrote:
>>>>> On 2024-07-26, Chris <ithinkiam@gmail.com> wrote:
>>>>>> On 24/07/2024 22:35, Jolly Roger wrote:
>>>>>>> On 2024-07-24, Chris <ithinkiam@gmail.com> wrote:
>>>>>>>> Andrew <andrew@spam.net> wrote:
>>>>>>>>> Chris wrote on Wed, 24 Jul 2024 07:20:19 -0000 (UTC) :
>>>>>>>>> 
>>>>>>>>>> The NSPCC should really be complaining at how ineffectual the
>>>>>>>>>> tech companies are rather than complain at Apple for not
>>>>>>>>>> sending millions of photos to already overwhelmed authorities.
>>>>>>>>> 
>>>>>>>>> For all that is in the news stories, it could be ZERO
>>>>>>>>> convictions resulted.
>>>>>>>>> 
>>>>>>>>> Think about that.
>>>>>>>>> 
>>>>>>>>> Is it worth everyone's loss of privacy for maybe zero gain in
>>>>>>>>> child safety?
>>>>>>>> 
>>>>>>>> Apple's solution wouldn't have resulted in any additional loss
>>>>>>>> of privacy
>>>>>>> 
>>>>>>> Actually, Apple could not guarantee that, and there was a
>>>>>>> non-zero chance that false positive matches would result in
>>>>>>> privacy violations.
>>>>>> 
>>>>>> True. The balance of risk was proportionate, however. Much moreso
>>>>>> than the current system.
>>>>> 
>>>>> Absolutely. I'm just of the opinion if one innocent person is
>>>>> harmed, that's one too many. Would you want to be that unlucky
>>>>> innocent person who has to deal with charges, a potential criminal
>>>>> sexual violation on your record, and all that comes with it? I
>>>>> certainly wouldn't.
>>>> 
>>>> Except that Apple's system wouldn't automatically trigger charges.
>>>> 
>>>> An actual human would review the images in question...
>>> 
>>> And at that point, someone's privacy may be violated. 
>> 
>> You're entering into Confucius territory. If nothing is triggered, is
>> anyone's privacy infringed? 
> 
> You're claiming innocent photos would never match, but there is a
> possibility of false matches inherent in the algorithm, no matter how
> small.
> 
>>> Do you want a stranger looking at photos of your sick child? 
>> 
>> That wouldn't happen with Apple's method. 
> 
> It would. If a sufficient number of images matched Apple's algorithms
> (which are not perfect and allow for false matches), a human being would
> be looking at those photos of your naked sick child. How else do you
> think Apple would determine whether the images in question are or are
> not CSAM? And what happens when that stranger decides "You know what? I
> think these photos are inappropriate even if they don't match known CSAM"?
> 
>>> What if that stranger came to the conclusion that those photos are
>>> somehow classifiable as sexual or abusive in some way? Would you want
>>> to have to argue your case in court because of it?
>> 
>> That's a lot of ifs and steps. 
> 
> Yes, but it's possible. 
> 
>> No-one is going to be charged for a dubious
>> photo of their own child. There are much bigger fish to fry and get into
>> jail. 
> 
> You're wrong. It has already happened:
>
> A Dad Took Photos of His Naked Toddler for the Doctor. Google Flagged
> Him as a Criminal
> <https://archive.is/78Pla#selection-563.0-1075.217>

I explicitly said "charged". No-one got charged. The law is working just
fine. It's the tech, as I've been arguing all along, that's the problem. 

> Read the whole article to get a glimpse of what innocent people go
> through who fall victim to this invasive scanning. 
> 
> Do you think these parents and their child consider their privacy to be
> violated? How would you feel if your intimate photos were added to the
> PhotoDNA CSAM database because they were incorrectly flagged?

This wasn't PhotoDNA, which is what Apple's approach resembled. It was
Google's AI method, designed to "recognize never-before-seen exploitative
images of children", which is where the real danger sits. 

It is designed to identify new abuse images from the pixel data alone, so
all hits will be massively enriched for things that merely look like abuse.
A human reviewer won't have the ability to accurately identify the (likely
innocent) motivation for taking the photo and, "to be safe", will pass it
on to someone else to make the decision, i.e. law enforcement. Law
enforcement will have access to much more information and will see it's an
obvious mistake, as happened in your article. 
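
To make the distinction concrete, here's a rough Python sketch of how a
"looks like abuse" classifier pipeline behaves. It's purely illustrative:
classifier_score() is a stand-in for a trained model, and the threshold
is made up, not Google's actual figure.

def classifier_score(pixels):
    # Stand-in for a trained image classifier; a real system would run
    # a neural network over the pixels and return a probability.
    return 0.97  # pretend the model thinks this "looks like" abuse

def triage(pixels, review_threshold=0.9):
    # Anything above the threshold gets escalated. By construction the
    # reviewer only ever sees images the model already thinks look like
    # abuse, with no context about why the photo was actually taken.
    if classifier_score(pixels) >= review_threshold:
        return "escalate to human review"
    return "ignore"

print(triage([0] * 64))  # -> escalate to human review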

Apple's system was more like hashing the image data and comparing hashes
against a database of known images, where false positives are due to
algorithmic randomness. The pixel data, when viewed by a human, won't look
anything like CSAM, so the decision is easy. 
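
In other words, it's closer to a near-duplicate lookup against a fixed
list. A minimal Python sketch of that idea (the hash values and the
distance threshold are made up; Apple's NeuralHash and PhotoDNA differ in
the details):

# Fixed database of hashes of *known* images (values are stand-ins).
KNOWN_HASHES = {0x9F3A5C7E12B4D608, 0x0123456789ABCDEF}

def hamming(a, b):
    # Number of differing bits between two 64-bit hashes.
    return bin(a ^ b).count("1")

def matches_known(image_hash, max_distance=4):
    # A false positive needs an unrelated image to land within a few
    # bits of a known hash by chance, which is why the mismatching
    # photo itself won't look anything like CSAM to a reviewer.
    return any(hamming(image_hash, h) <= max_distance for h in KNOWN_HASHES)

print(matches_known(0x9F3A5C7E12B4D609))  # True: one bit from a known hash
print(matches_known(0x0000000000000000))  # False: not near any known hash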

What's crucial here is that Google are looking for new material - which is
always problematic - whereas Apple's system was not. The search space when
looking only for existing, known images is far smaller, and the impact of
false positives much, much smaller. 
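
Some back-of-envelope arithmetic makes the point (both rates below are
assumptions picked for illustration, not anyone's published figures):

photos = 1_000_000
classifier_fp_rate = 1e-3  # assumed rate for a model scanning for new material
hash_fp_rate = 1e-6        # assumed rate for matches against a fixed known list

print(photos * classifier_fp_rate)  # ~1000 innocent photos sent to a reviewer
print(photos * hash_fp_rate)        # ~1, and it won't even resemble the match

On top of that, Apple's published design required multiple matches per
account before any human review happened at all, shrinking the chance of a
wrongly flagged account further still.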

 
> --- 
> In 2021, the CyberTipline reported that it had alerted authorities
> to “over 4,260 potential new child victims.” The sons of Mark and Cassio
> were counted among them.
> ---
> 
> A lot of really bad things can happen to good people:
> 
> --- 
> “This would be problematic if it were just a case of content
> moderation and censorship,” Ms. Klonick said. “But this is doubly
> dangerous in that it also results in someone being reported to law
> enforcement.” It could have been worse, she said, with a parent
> potentially losing custody of a child. “You could imagine how this might
> escalate,” Ms. Klonick said.
> ---
> 
>>>> ...AND since they were comparing images against KNOWN CSAM, false
>>>> positives would naturally be very few to begin with.
>>> 
>>> Yes, but one is one too many in my book.
>> 
>> How many children are you prepared to be abused to protect YOUR
>> privacy? 
> 
> Now you're being absurd. My right to privacy doesn't cause any children
> to be abused. 

That's what you'd like to think, yet the reality is that your position not
only makes it harder to identify perpetrators but also, ironically, ensures
more people get erroneously labelled. 

>>> Apple was wise to shelve this proposal. And I am happy to see that
>>> they embraced more private features such as the Safe Communication
>>> feature which is done without violating customers' privacy. 
>> 
>> It wasn't violating anyone's privacy. For the umpteenth time. It
>> actually preserved people's privacy by design. 
> 
> While it went further than the rest to protect people's privacy, there
> was still room for error and for innocent people to be harmed. That's
> the truth you seem to want to disregard, and that's why Apple shelved
> it.

My truth is that Apple's method was a significant improvement. Plus, if
people didn't like it - unreasonably - they didn't have to use iCloud.
Apple only shelved it for PR reasons, which is a real shame. 

What you're disregarding is that the alternative hasn't been more privacy
for people; it's been less privacy and more errors. A lose-lose situation. 

========== REMAINDER OF ARTICLE TRUNCATED ==========