
Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon <richard@damon-family.org>
Newsgroups: comp.theory
Subject: Re: Even Google AI Overview understands me now --- different
 execution traces have different behavior !!!
Date: Sat, 5 Oct 2024 06:58:25 -0400
Organization: i2pn2 (i2pn.org)
Message-ID: <316810021a1addeb9ee56d7ee33f3d2f6cf6624d@i2pn2.org>
References: <vdgpbs$2nmcm$1@dont-email.me> <vdgqhn$2nmcm$2@dont-email.me>
 <7c6cede5237e3eafee262c74dd1a1c90c6b2ffbb@i2pn2.org>
 <vdhblt$2qm1j$2@dont-email.me>
 <cafee8d7a14edd7b1d76bb706c36eef06ae82896@i2pn2.org>
 <vdi0f8$2u1aq$1@dont-email.me>
 <53a60609211a04a123adafa525bac39b5cbc6959@i2pn2.org>
 <vdjlum$38t86$4@dont-email.me>
 <bf681f4404a7df8e3ffc2059dcd7c5c302aeeff1@i2pn2.org>
 <vdkud3$3ipp4$1@dont-email.me>
 <8b646269ba7736c125f0b05a1d764d73540f16e0@i2pn2.org>
 <vdn6sj$3thq0$1@dont-email.me>
 <1263d37668d0fd03df0ab5f9617387ca66ba4f0e@i2pn2.org>
 <vdnl13$3089$1@dont-email.me>
 <5f8d06fc4934a789b337fe9924553e34f9a45586@i2pn2.org>
 <vdp7q7$8eot$3@dont-email.me>
 <ab89296446c0b7827f304f138347765967baf478@i2pn2.org>
 <vdq66f$jf7k$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 5 Oct 2024 10:58:25 -0000 (UTC)
Injection-Info: i2pn2.org;
	logging-data="665763"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <vdq66f$jf7k$1@dont-email.me>
X-Spam-Checker-Version: SpamAssassin 4.0.0
Content-Language: en-US
Bytes: 14707
Lines: 304

On 10/4/24 9:53 PM, olcott wrote:
> On 10/4/2024 5:43 PM, Richard Damon wrote:
>> On 10/4/24 1:14 PM, olcott wrote:
>>> On 10/4/2024 5:50 AM, Richard Damon wrote:
>>>> On 10/3/24 10:48 PM, olcott wrote:
>>>>> On 10/3/2024 8:17 PM, Richard Damon wrote:
>>>>>> On 10/3/24 6:46 PM, olcott wrote:
>>>>>>> On 10/3/2024 6:16 AM, Richard Damon wrote:
>>>>>>>> On 10/2/24 10:09 PM, olcott wrote:
>>>>>>>>> On 10/2/2024 5:48 PM, Richard Damon wrote:
>>>>>>>>>> On 10/2/24 10:39 AM, olcott wrote:
>>>>>>>>>>> On 10/2/2024 6:08 AM, Richard Damon wrote:
>>>>>>>>>>>> On 10/1/24 7:26 PM, olcott wrote:
>>>>>>>>>>>>> On 10/1/2024 12:58 PM, joes wrote:
>>>>>>>>>>>>>> On Tue, 01 Oct 2024 12:31:41 -0500, olcott wrote:
>>>>>>>>>>>>>>> On 10/1/2024 8:09 AM, joes wrote:
>>>>>>>>>>>>>>>> On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
>>>>>>>>>>>>>>>>> On 10/1/2024 7:19 AM, olcott wrote:
>>>>>>>>>>>>>>>>>> https://www.google.com/search?q=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&sca_esv=889093c5cb21af9e&sca_upv=1&source=hp&ei=Muf7ZpOyMZHfwN4PwYL2gAc&iflsig=AL9hbdgAAAAAZvv1Qg04jNg2ze170z3a8BSGu8pA29Fj&ved=0ahUKEwiTk7zkk-2IAxWRL9AFHUGBHXAQ4dUDCBg&uact=5&oq=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&gs_lp=Egdnd3Mtd2l6IjxUZXJtaW5hdGlvbiBBbmFseXplciBIIGlzIE5vdCBGb29sZWQgYnkgUGF0aG9sb2dpY2FsIElucHV0IERIAFAAWABwAHgAkAEAmAEAoAEAqgEAuAEDyAEA-AEC-AEBmAIAoAIAmAMAkgcAoAcA&sclient=gws-wiz
>>>>>>>>>>>>>>>>> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
>>>>>>>>>>>>>>>> It sounds like it’s trained on your spam. LLMs don’t 
>>>>>>>>>>>>>>>> know anything
>>>>>>>>>>>>>>>> anyway.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> I asked: „What happens when HHH tries to simulate itself?”
>>>>>>>>>>>>>>>> ChatGPT: [my comments in brackets]
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ### 1. **Self-Invocation in HHH**
>>>>>>>>>>>>>>>> If `HHH` calls itself (i.e., `HHH(HHH)`), it creates an 
>>>>>>>>>>>>>>>> infinite loop
>>>>>>>>>>>>>>>> of calls unless there is some form of termination 
>>>>>>>>>>>>>>>> condition built into
>>>>>>>>>>>>>>>> `HHH`.
>>>>>>>>>>>>>>>> Without such a condition, `HHH` would never return, 
>>>>>>>>>>>>>>>> resulting in non-
>>>>>>>>>>>>>>>> termination.
>>>>>>>>>>>>>>>> [So not a decider.]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ### 2. **Halting Criteria**
>>>>>>>>>>>>>>>> In a well-designed termination analyzer, `HHH` would 
>>>>>>>>>>>>>>>> need to have
>>>>>>>>>>>>>>>> specific logic to prevent endless recursion:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> [Not present.]
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ### 3. **Practical Consequences**
>>>>>>>>>>>>>>>> If `HHH` lacks the ability to handle self-referential 
>>>>>>>>>>>>>>>> calls correctly:
>>>>>>>>>>>>>>>> - **Stack Overflow**: Continuous self-invocation will 
>>>>>>>>>>>>>>>> eventually
>>>>>>>>>>>>>>>> exhaust the stack memory, leading to a stack overflow 
>>>>>>>>>>>>>>>> error.
>>>>>>>>>>>>>>>> [Not considered in theory.]
>>>>>>>>>>>>>>>> - **Incorrect Results**: If not designed carefully, it 
>>>>>>>>>>>>>>>> may lead to an
>>>>>>>>>>>>>>>> incorrect conclusion about halting, misrepresenting 
>>>>>>>>>>>>>>>> itself as either
>>>>>>>>>>>>>>>> halting or non-halting.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ### 4. **Best Practices for Termination Analyzers**
>>>>>>>>>>>>>>>> To avoid these pitfalls, a termination analyzer should 
>>>>>>>>>>>>>>>> implement:
>>>>>>>>>>>>>>>> - **Guard Clauses**: Checks at the beginning of the 
>>>>>>>>>>>>>>>> function to prevent
>>>>>>>>>>>>>>>> unnecessary self-simulation.
>>>>>>>>>>>>>>>> - **State Management**: A way to track the functions 
>>>>>>>>>>>>>>>> being analyzed,
>>>>>>>>>>>>>>>> allowing `HHH` to recognize if it's being invoked with 
>>>>>>>>>>>>>>>> itself and act
>>>>>>>>>>>>>>>> accordingly.
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> ### Conclusion
>>>>>>>>>>>>>>>> In summary, if `HHH` tries to simulate
>>>>>>>>>>>>>>>> itself without
>>>>>>>>>>>>>>>> proper safeguards,
>>>>>>>>>>>>>>>> it risks falling into an infinite recursion or causing a 
>>>>>>>>>>>>>>>> stack
>>>>>>>>>>>>>>>> overflow. A robust design would include checks to prevent
>>>>>>>>>>>>>>>> self-invocation, ensuring the termination analyzer can 
>>>>>>>>>>>>>>>> handle all
>>>>>>>>>>>>>>>> cases, including its own function,
>>>>>>>>>>>>>>>> gracefully.
>>>>>>>>>>>>>>>>
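
[Editorial aside: ChatGPT's points 2 and 4 above (a termination condition
plus state that lets the analyzer recognize a self-referential request) can
be made concrete. The sketch below is NOT olcott's HHH and is not quoted
from anywhere in this thread; it is only a minimal, hypothetical C
illustration of what "guard clauses" plus "state management" could look
like in a simulating analyzer. All names are invented for illustration.]

/* Hypothetical sketch only -- none of these names come from this thread.
   The analyzer keeps a table of the routines it is currently simulating,
   so a request to simulate something already on that list is handled by
   a guard clause instead of by unbounded recursion. */
#include <stdbool.h>
#include <stddef.h>

#define MAX_ACTIVE 64

typedef void (*routine_t)(void);

static routine_t active[MAX_ACTIVE];   /* routines currently being simulated */
static size_t    active_count = 0;

static bool currently_simulating(routine_t r)
{
    for (size_t i = 0; i < active_count; i++)
        if (active[i] == r)
            return true;
    return false;
}

/* Returns true if 'r' is judged to terminate, false otherwise. */
bool Analyze(routine_t r)
{
    if (currently_simulating(r))     /* guard clause: self-referential call */
        return false;                /* handled here as "would not terminate" */

    active[active_count++] = r;
    /* ... step-by-step simulation of r would go here ... */
    active_count--;
    return true;                     /* placeholder verdict */
}

Whether returning a verdict at that guard is the *right* verdict is, of
course, exactly what the rest of this thread is arguing about.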
>>>>>>>>>>>>>>> *You can continue this conversation with ChatGPT*
>>>>>>>>>>>>>> I’ll leave that to you.
>>>>>>>>>>>>>>> Click on the link and see how it answers this question:
>>>>>>>>>>>>>>> Is H a Halt Decider for D?
>>>>>>>>>>>>>> You should feed it our objections.
>>>>>>>>>>>>>> If you believe in it only when you prompt it, it is not 
>>>>>>>>>>>>>> suited as an
>>>>>>>>>>>>>> authority (fallacious anyway).
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> You feed it your objections.
>>>>>>>>>>>>> It will tell you how and why you are wrong.
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> No, it will tell you something that matches the words you 
>>>>>>>>>>>> told it.
>>>>>>>>>>>>
>>>>>>>>>>>> You don't seem to understand what Large Language Models are.
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> You seem to forget that LLMs know nothing of the "truth",
>>>>>>>>>>>> only what matches their training data.
>>>>>>>>>>>>
>>>>>>>>>>>> They are known to be liars, just like you.
>>>>>>>>>>>
>>>>>>>>>>> In the case of their evaluation of my work they are correct.
>>>>>>>>>>>
>>>>>>>>>>> *Try this yourself*
>>>>>>>>>>>
>>>>>>>>>>> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
>>>>>>>>>>>
>>>>>>>>>>> Does HHH have to abort its emulation of DDD to prevent the 
>>>>>>>>>>> infinite execution of DDD?
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
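
[Editorial aside for readers without the earlier posts at hand: the
question above concerns a pathological call pattern in which the routine
being analyzed invokes its own analyzer. The fragment below is only an
assumed reconstruction of that pattern; the actual definitions of DDD and
HHH are not shown in this excerpt.]

/* Assumed shape of the pathological relationship under discussion.
   HHH is a simulating termination analyzer; DDD calls HHH on itself and
   then returns.  If HHH(DDD) simulates its input faithfully, each
   simulated DDD calls HHH(DDD) again, so HHH must either abort that
   simulation at some point or simulate forever. */
int HHH(void (*p)(void));    /* declaration only; definition not shown here */

void DDD(void)
{
    HHH(DDD);    /* DDD hands itself to its own analyzer ...          */
    return;      /* ... and halts, if and when that call ever returns */
}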
>>>>>>>>>> Try asking it
>>>>>>>>>>
>>>>>>>>>> Why is it ok for HHH to say that DDD doesn't halt when it will
>>>>>>>>>> halt when run?
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>> You guys try to pretend that the pathological relationship between
>>>>>>>>> HHH and DDD does not make any difference to the behavior of DDD,
>>>>>>>>> knowing full well that it does make a difference.
>>>>>>>>>
>>>>>>>>> When two execution traces differ and one expects the same behavior,
>>>>>>>>> that is analogous to the proverbial insanity of doing the exact same
>>>>>>>>> thing and expecting different results. It is merely the other side of
>>>>>>>>> that coin: doing an entirely different thing and expecting the same
>>>>>>>>> results is also quite crazy.
>>>>>>>>>
>>>>>>>>
>>>>>>>> No, YOU don't seem to understand that while the pathological
>>>>>>>> relationship DOES affect the behavior of DDD, it doesn't mean
>>>>>>>> that the "correct simulation" of DDD (by anybody) will differ
>>>>>>>> from the actual behavior of DDD.
>>>>>>>>
>>>>>>>
>>>>>>> If the emulator ignores rather than emulates this
>>>>>>> pathological relationship when the x86 code specifies
>>>>>>> this pathological relationship, then it is the same
>>>>>>> kind of damned liar that you are.
>>>>>>>
>>>>>>
>>>>>>
>>>>>> So, what did it actually EMULATE that differed?
>>>>>>
>>>>>
>>>>> The directly executed DDD() depends on HHH aborting what
>>>>> would otherwise be its own infinite recursive emulation.
>>>>
>>>> So? Since that is what the code of that HHH does, that is what DDD 
>>>> does.
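
[Editorial aside: the last two messages can be restated in code. If the
HHH that DDD actually calls aborts its simulation and returns, then the
directly executed DDD() makes that same call, gets that same return, and
then halts. A hypothetical sketch only, assuming a return value of 0 is
the convention for reporting "does not halt".]

/* Hypothetical sketch of the scenario described above.  The body of HHH
   is elided; all that matters here is that it eventually aborts its
   simulation and returns. */
int HHH(void (*p)(void))
{
    /* ... simulate p, detect what it takes to be recursion, abort ... */
    return 0;    /* assumed convention: report "does not halt" */
}

void DDD(void)
{
    HHH(DDD);    /* returns, because this HHH aborts its simulation */
    return;      /* so a directly executed DDD() reaches this point and halts */
}

int main(void)
{
    DDD();       /* terminates */
    return 0;
}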
========== REMAINDER OF ARTICLE TRUNCATED ==========