Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon <richard@damon-family.org>
Newsgroups: comp.theory
Subject: Re: Verified facts regarding the software engineering of DDD, HHH,
and HHH1 --- TYPO
Date: Fri, 25 Oct 2024 18:17:28 -0400
Organization: i2pn2 (i2pn.org)
Message-ID: <c72aa667027121011042e8b4413d343f3c61bdd1@i2pn2.org>
References: <vf3eu5$fbb3$2@dont-email.me> <vf6mt7$136ja$2@dont-email.me>
<ad43f56a12181e10f59b8a1e6220ed7989b6c973@i2pn2.org>
<vf74oh$1a8oo$1@dont-email.me>
<525ed75662589a150afa1ea268b199a166a7b98b@i2pn2.org>
<vf8ads$1gkf5$1@dont-email.me>
<13583474d25855e665daa98d91605e958f5cf472@i2pn2.org>
<vf8i1g$1h5mj$4@dont-email.me>
<45ea7a6da46453c9da62c1149fa1cf7739218c5f@i2pn2.org>
<vf9qai$1scol$1@dont-email.me>
<2a210ab064b3a8c3397600b4fe87aa390868bb12@i2pn2.org>
<vf9sk6$1sfva$2@dont-email.me>
<4c67570b4898e14665bde2dfdf473130b89b7dd4@i2pn2.org>
<vfaqe7$21k64$1@dont-email.me>
<f789d3ef27e3000f04feb3df4fc561c5da02381f@i2pn2.org>
<vfcbl5$2b6h0$2@dont-email.me>
<b707850664ad22bb1172006f4e24a27633ff1a4d@i2pn2.org>
<vfe344$2o992$1@dont-email.me>
<94449dae60f42358ae29bb710ca9bc3b18c60ad7@i2pn2.org>
<vfeqqo$2ruhp$1@dont-email.me>
<0553e6ab73fa9a21f062de4d645549ae48fd0a64@i2pn2.org>
<vfg6us$36im7$2@dont-email.me>
<da2d4f48cb3b9ac2e44b6f9c9ab28adb3022acb1@i2pn2.org>
<vfh428$3bkkv$2@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Fri, 25 Oct 2024 22:17:29 -0000 (UTC)
Injection-Info: i2pn2.org;
logging-data="3665457"; mail-complaints-to="usenet@i2pn2.org";
posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
Content-Language: en-US
In-Reply-To: <vfh428$3bkkv$2@dont-email.me>
Bytes: 16844
Lines: 316
On 10/25/24 5:54 PM, olcott wrote:
> On 10/25/2024 10:45 AM, Richard Damon wrote:
>> On 10/25/24 9:37 AM, olcott wrote:
>>> On 10/25/2024 7:27 AM, Richard Damon wrote:
>>>> On 10/24/24 9:04 PM, olcott wrote:
>>>>> On 10/24/2024 6:23 PM, Richard Damon wrote:
>>>>>> On 10/24/24 2:19 PM, olcott wrote:
>>>>>>> On 10/23/2024 9:48 PM, Richard Damon wrote:
>>>>>>>> On 10/23/24 10:33 PM, olcott wrote:
>>>>>>>>> On 10/23/2024 6:16 PM, Richard Damon wrote:
>>>>>>>>>> On 10/23/24 8:33 AM, olcott wrote:
>>>>>>>>>>> On 10/23/2024 6:12 AM, Richard Damon wrote:
>>>>>>>>>>>> On 10/23/24 12:04 AM, olcott wrote:
>>>>>>>>>>>>> On 10/22/2024 10:47 PM, Richard Damon wrote:
>>>>>>>>>>>>>> On 10/22/24 11:25 PM, olcott wrote:
>>>>>>>>>>>>>>> On 10/22/2024 10:02 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>> On 10/22/24 11:57 AM, olcott wrote:
>>>>>>>>>>>>>>>>> On 10/22/2024 10:18 AM, joes wrote:
>>>>>>>>>>>>>>>>>> Am Tue, 22 Oct 2024 08:47:39 -0500 schrieb olcott:
>>>>>>>>>>>>>>>>>>> On 10/22/2024 4:50 AM, joes wrote:
>>>>>>>>>>>>>>>>>>>> Am Mon, 21 Oct 2024 22:04:49 -0500 schrieb olcott:
>>>>>>>>>>>>>>>>>>>>> On 10/21/2024 9:42 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>>>>> On 10/21/24 7:08 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>>>>> On 10/21/2024 6:05 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>>>>>>> On 10/21/24 6:48 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>>>>>>> On 10/21/2024 5:34 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>> On 10/21/24 12:29 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>> On 10/21/2024 10:17 AM, joes wrote:
>>>>>>>>>>>>>>>>>>>>>>>>>>>> Am Mon, 21 Oct 2024 08:41:11 -0500 schrieb
>>>>>>>>>>>>>>>>>>>>>>>>>>>> olcott:
>>>>>>>>>>>>>>>>>>>>>>>>>>>>> On 10/21/2024 3:39 AM, joes wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>> Did ChatGPT generate that?
>>>>>>>>>>>>>>>>>>>>>>>>>>> If it did then I need *ALL the input that
>>>>>>>>>>>>>>>>>>>>>>>>>>> caused it to generate
>>>>>>>>>>>>>>>>>>>>>>>>>>> that*
>>>>>>>>>>>>>>>>>>>> It's not like it will deterministically regenerate
>>>>>>>>>>>>>>>>>>>> the same output.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>> No, someone using some REAL INTELLIGENCE, as
>>>>>>>>>>>>>>>>>>>>>>>>>> opposed to a program
>>>>>>>>>>>>>>>>>>>>>>>>>> using "artificial intelligence" that had been
>>>>>>>>>>>>>>>>>>>>>>>>>> loaded with false
>>>>>>>>>>>>>>>>>>>>>>>>>> premises and other lies.
>>>>>>>>>>>>>>>>>>>>>>>>> I specifically asked it to verify that its key
>>>>>>>>>>>>>>>>>>>>>>>>> assumption is
>>>>>>>>>>>>>>>>>>>>>>>>> correct and it did.
>>>>>>>>>>>>>>>>>>>>>>>> No, it said that given what you told it (which
>>>>>>>>>>>>>>>>>>>>>>>> was a lie)
>>>>>>>>>>>>>>>>>>>>>>> I asked it if what it was told was a lie and it
>>>>>>>>>>>>>>>>>>>>>>> explained how what
>>>>>>>>>>>>>>>>>>>>>>> it was told is correct.
>>>>>>>>>>>>>>>>>>>> "naw, I wasn't lied to, they said they were saying
>>>>>>>>>>>>>>>>>>>> the truth" sure
>>>>>>>>>>>>>>>>>>>> buddy.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Because Chat GPT doesn't care about lying.
>>>>>>>>>>>>>>>>>>>>> ChatGPT computes the truth and you can't actually
>>>>>>>>>>>>>>>>>>>>> show otherwise.
>>>>>>>>>>>>>>>>>>>> HAHAHAHAHA there isn't anything about truth in
>>>>>>>>>>>>>>>>>>>> there, prove me wrong
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Because what you are asking for is nonsense.
>>>>>>>>>>>>>>>>>>>>>> Of course an AI that has been programmed with lies
>>>>>>>>>>>>>>>>>>>>>> might repeat the
>>>>>>>>>>>>>>>>>>>>>> lies.
>>>>>>>>>>>>>>>>>>>>>> When it is told the actual definition, after being
>>>>>>>>>>>>>>>>>>>>>> told your lies,
>>>>>>>>>>>>>>>>>>>>>> and asked if your conclusion could be right, it
>>>>>>>>>>>>>>>>>>>>>> said No.
>>>>>>>>>>>>>>>>>>>>>> Thus, it seems by your logic, you have to admit
>>>>>>>>>>>>>>>>>>>>>> defeat, as the AI,
>>>>>>>>>>>>>>>>>>>>>> after being told your lies, still was able to come
>>>>>>>>>>>>>>>>>>>>>> up with the
>>>>>>>>>>>>>>>>>>>>>> correct answer, that DDD will halt, and that HHH
>>>>>>>>>>>>>>>>>>>>>> is just incorrect to
>>>>>>>>>>>>>>>>>>>>>> say it doesn't.
>>>>>>>>>>>>>>>>>>>>> I believe that the "output" Joes provided was fake
>>>>>>>>>>>>>>>>>>>>> on the basis that
>>>>>>>>>>>>>>>>>>>>> she did not provide the input to derive that output
>>>>>>>>>>>>>>>>>>>>> and did not use
>>>>>>>>>>>>>>>>>>>>> the required basis that was on the link.
>>>>>>>>>>>>>>>>>>>> I definitely typed something out in the style of an
>>>>>>>>>>>>>>>>>>>> LLM instead of my
>>>>>>>>>>>>>>>>>>>> own words /s
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> If you want me to pay more attention to what you
>>>>>>>>>>>>>>>>>>>>>> say, you first need
>>>>>>>>>>>>>>>>>>>>>> to return the favor, and at least TRY to find an
>>>>>>>>>>>>>>>>>>>>>> error in what I say,
>>>>>>>>>>>>>>>>>>>>>> and base it on more than just your belief that
>>>>>>>>>>>>>>>>>>>>>> it can't be right.
>>>>>>>>>>>>>>>>>>>>>> But you can't do that, as you don't actually know
>>>>>>>>>>>>>>>>>>>>>> any facts about the field for which you can point
>>>>>>>>>>>>>>>>>>>>>> to qualified references.
>>>>>>>>>>>>>>>>>>>>> You cannot show that my premises are actually false.
>>>>>>>>>>>>>>>>>>>>> To show that they are false would at least require
>>>>>>>>>>>>>>>>>>>>> showing that they
>>>>>>>>>>>>>>>>>>>>> contradict each other.
>>>>>>>>>>>>>>>>>>>> Accepting your premises makes the problem
>>>>>>>>>>>>>>>>>>>> uninteresting.
>>>>>>>>>>>>>>>>>>> That seems to indicate that you are admitting that
>>>>>>>>>>>>>>>>>>> you cheated when you
>>>>>>>>>>>>>>>>>>> discussed this with ChatGPT. You gave it a faulty
>>>>>>>>>>>>>>>>>>> basis and then argued
>>>>>>>>>>>>>>>>>>> against that.
>>>>>>>>>>>>>>>>>> Just no. Do you believe that I didn't write this
>>>>>>>>>>>>>>>>>> myself after all?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> They are also conventional within the context of
>>>>>>>>>>>>>>>>>>> software engineering. That software engineering
>>>>>>>>>>>>>>>>>>> conventions seem incompatible with computer science
>>>>>>>>>>>>>>>>>>> conventions may refute the latter.
>>>>>>>>>>>>>>>>>> lol
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> That a halt decider must report on the behavior
>>>>>>>>>>>>>>>>>>> that it itself is contained within seems to be an
>>>>>>>>>>>>>>>>>>> incorrect convention.
>>>>>>>>>>>>>>>>>> Just because you don't like the undecidability of the
>>>>>>>>>>>>>>>>>> halting problem?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> u32 HHH1(ptr P) // line 721
>>>>>>>>>>>>>>>>>>> u32 HHH(ptr P) // line 801
>>>>>>>>>>>>>>>>>>> The above two functions have identical C code except
>>>>>>>>>>>>>>>>>>> for their name.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The input to HHH1(DDD) halts. The input to HHH(DDD)
>>>>>>>>>>>>>>>>>>> does not halt. This
>>>>>>>>>>>>>>>>>>> conclusively proves that the pathological
>>>>>>>>>>>>>>>>>>> relationship between DDD and
>>>>>>>>>>>>>>>>>>> HHH makes a difference in the behavior of DDD.
>>>>>>>>>>>>>>>>>> That makes no sense. DDD halts or doesn't either way.
>>>>>>>>>>>>>>>>>> HHH and HHH1 may
>>>>>>>>>>>>>>>>>> give different answers, but then exactly one of them
>>>>>>>>>>>>>>>>>> must be wrong.
>>>>>>>>>>>>>>>>>> Do they both call HHH? How does their execution differ?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> void DDD()
>>>>>>>>>>>>>>>>> {
>>>>>>>>>>>>>>>>> HHH(DDD);
>>>>>>>>>>>>>>>>> return;
>>>>>>>>>>>>>>>>> }
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> *It is a verified fact that*
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> (a) Both HHH1 and HHH emulate DDD according to the
>>>>>>>>>>>>>>>>> semantics of the x86 language.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> But HHH only does so INCOMPLETELY.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> (b) HHH and HHH1 have verbatim identical C source
>>>>>>>>>>>>>>>>> code, except for their differing names.
========== REMAINDER OF ARTICLE TRUNCATED ==========