Path: news.eternal-september.org!eternal-september.org!.POSTED!not-for-mail
From: moviePig <nobody@nowhere.com>
Newsgroups: rec.arts.tv
Subject: Re: [OT] Why governments must limit AI violations of copyright
Date: Tue, 27 May 2025 22:44:50 -0400
Organization: A noiseless patient Spider
Lines: 105
Message-ID: <1015tb3$324da$1@dont-email.me>
References: <1014nu3$25lr8$2@dont-email.me> <1015a2u$2qi8f$1@dont-email.me>
 <1015bmu$2r3qe$1@dont-email.me> <1015do6$2rc6i$1@dont-email.me>
 <1015efl$2ro50$1@dont-email.me>
Reply-To: nobody@nowhere.com
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Wed, 28 May 2025 04:44:52 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="9c0345d905c198fd88f0e62d743fce2a";
	logging-data="3215786"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/pmzN1g8MHSWkQy1rZn8ZUv2dCZKxgZFs="
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:85bB50XKuGoUUa+NUeK9elFuUCY=
Content-Language: en-US
In-Reply-To: <1015efl$2ro50$1@dont-email.me>

On 5/27/2025 6:31 PM, BTR1701 wrote:
> On May 27, 2025 at 3:18:45 PM PDT, "moviePig" <nobody@nowhere.com> wrote:
> 
>> On 5/27/2025 5:43 PM, BTR1701 wrote:
>>>   On May 27, 2025 at 2:16:14 PM PDT, "moviePig" <nobody@nowhere.com>
>>>   wrote:
>>>   
>>>>   On 5/27/2025 3:20 PM, Rhino wrote:
>>>>>   On 2025-05-27 2:17 PM, BTR1701 wrote:
>>>>>>   On May 27, 2025 at 9:06:34 AM PDT, "Rhino"
>>>>>>   <no_offline_contact@example.com> wrote:
>>>>>>   
>>>>>>>   Mary Spender presents a relatively brief but, I think,
>>>>>>>   compelling argument for why governments need to reject the
>>>>>>>   tech firms' claims that using existing works to train AIs is
>>>>>>>   fair use and does not need to be paid for.
>>>>>>>   
>>>>>>>   https://www.youtube.com/watch?v=R5pFE85oAnA [8 minutes]
>>>>>>>   
>>>>>>>   The tech bros are wallowing in almost unimaginable wealth:
>>>>>>>   they can definitely afford to compensate copyright holders
>>>>>>>   for using their work as training data. Alternatively, they
>>>>>>>   can let copyright holders exclude their works from use in
>>>>>>>   training data and compensate them for what they have used
>>>>>>>   without permission.
>>>>>>>   
>>>>>>>   I don't believe the tech companies have some kind of natural
>>>>>>>   right to generate new works that are closely modelled on
>>>>>>>   existing works without paying for their use of those works.
>>>>>>   
>>>>>>   If you can show that the AI produces a copy of the work it was
>>>>>>   trained on, or one substantially similar enough as to be
>>>>>>   confusing to the reasonable man, then yes, I agree.
>>>>>>   
>>>>>>   E.g., if you ask it to generate a story about a young girl who
>>>>>>   finds herself lost in a fantasy world and it spits out the
>>>>>>   plot to Alice in Wonderland.
>>>>>>   
>>>>>>   But if you ask it that same question and it produces a totally
>>>>>>   different story that isn't Alice in Wonderland in any
>>>>>>   recognizable way but it learned how to do that from 'reading'
>>>>>>   Alice in Wonderland, then I don't see how you have a copyright
>>>>>>   violation under existing law or even under the philosophical
>>>>>>   framework on which existing law has been built. At that point,
>>>>>>   it's no different from a human reading Alice in Wonderland and
>>>>>>   figuring out how to use the elements and techniques employed
>>>>>>   by Carroll in his story to produce a different story of his
>>>>>>   own. No one would suggest copyright violation if a human did
>>>>>>   it, so how can it suddenly be one if a computer algorithm does
>>>>>>   it?
>>>>>>   
>>>>>>>   The new works generated by humans are already pretty
>>>>>>>   derivative in too many cases: we don't need AIs generating
>>>>>>>   still more of the same.
>>>>>>   
>>>>>>   Well therein lies the rub. At least in America. We call it the
>>>>>>   Bill of Rights, not the Bill of Needs, for a reason.
>>>>>>   
>>>>>>>   There's a wealth of art (whether music, visual art, or
>>>>>>>   literature) freely available in the public domain. Let them
>>>>>>>   use that if they need large quantities of art to train their
>>>>>>>   models.
>>>>>>   
>>>>>>   
>>>>>>   
>>>>>   Your points are well taken. Yes, if the AI-generated material
>>>>>   isn't recognizable to someone familiar with Alice in Wonderland,
>>>>>   it's hard to make a case for copyright infringement. And yes,
>>>>>   even if *I* don't see a need for yet more derivative works, it's
>>>>>   not illegal, even if it is annoying.
>>>>>   
>>>>>   The challenge is going to come with deciding if an AI-generated
>>>>>   work is "too similar" to something it trained on. I expect that
>>>>>   similarity, like beauty, is in the eye (or ear) of the beholder.
>>>>>   Maybe a committee will have to do the deciding, and only if a
>>>>>   majority of its members thinks the similarity is too close will
>>>>>   the AI work be labelled a copyright infringement. Of course,
>>>>>   selection of this committee will be challenging, since the tech
>>>>>   companies are going to favour people who never see similarities
>>>>>   even between identical things, and the human creators will tend
>>>>>   to see similarity in everything because it's in their financial
>>>>>   interest to find it.
>>>>   
>>>>   Two ancillary thoughts:  Afaics, we're already within reach of
>>>>   such a pilfering AI-agent that can be dialed to a desired degree
>>>>   of "distance" from the original work it's copying.  Meanwhile,
>>>>   whenever a claim of infringement is brought, adjudicating that
>>>>   "distance" sounds like a proper and plausible task for a
>>>>   magistrate that is itself an AI.
>>>   
>>>   We're at that point with humans, too, and long have been.
>>
>> An answer might lie in my second thought (restored above).  A finding by
>> an AI that could detect similarity between a work and its alleged copy
>> might be sufficient proof of infringement.  Even though such a detector
>> would almost certainly be somewhat imprecise, that shouldn't concern any
>> truly original author.
> 
> Again, you'd have to come up with a coherent, legally acceptable reason why
> de minimis similarity in an AI-produced work would constitute a violation
> but the same similarity in a human-produced work would not.

I assume that any claim of infringement would be lodged against the 
copy's publisher, irrespective of whether its source was human or AI.