Path: ...!2.eu.feeder.erje.net!3.eu.feeder.erje.net!feeder.erje.net!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: Mikko <mikko.levanto@iki.fi>
Newsgroups: comp.theory
Subject: Re: Even Google AI Overview understands me now --- different execution traces have different behavior !!!
Date: Sat, 5 Oct 2024 11:20:03 +0300
Organization: -
Lines: 141
Message-ID: <vdqsrj$mmcu$1@dont-email.me>
References: <vdgpbs$2nmcm$1@dont-email.me> <vdgqhn$2nmcm$2@dont-email.me> <7c6cede5237e3eafee262c74dd1a1c90c6b2ffbb@i2pn2.org> <vdhblt$2qm1j$2@dont-email.me> <cafee8d7a14edd7b1d76bb706c36eef06ae82896@i2pn2.org> <vdi0f8$2u1aq$1@dont-email.me> <53a60609211a04a123adafa525bac39b5cbc6959@i2pn2.org> <vdjlum$38t86$4@dont-email.me> <bf681f4404a7df8e3ffc2059dcd7c5c302aeeff1@i2pn2.org> <vdkud3$3ipp4$1@dont-email.me> <vdm1tl$3npme$1@dont-email.me> <vdn0nv$3sa9k$1@dont-email.me> <vdob4p$5sfp$1@dont-email.me> <vdovie$8eot$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 05 Oct 2024 10:20:03 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="8eb8a6a500a5f7eec7b07c6ef4fc6bc8";
	logging-data="743838"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX18w8F2d27Xt3xEPf8x8M1VU"
User-Agent: Unison/2.2
Cancel-Lock: sha1:sevrHw2AY6haap5B7FuVD+6Lw4w=
Bytes: 7849

On 2024-10-04 14:54:06 +0000, olcott said:

> On 10/4/2024 4:05 AM, Mikko wrote:
>> On 2024-10-03 21:01:51 +0000, olcott said:
>> 
>>> On 10/3/2024 7:15 AM, Mikko wrote:
>>>> On 2024-10-03 02:09:39 +0000, olcott said:
>>>> 
>>>>> On 10/2/2024 5:48 PM, Richard Damon wrote:
>>>>>> On 10/2/24 10:39 AM, olcott wrote:
>>>>>>> On 10/2/2024 6:08 AM, Richard Damon wrote:
>>>>>>>> On 10/1/24 7:26 PM, olcott wrote:
>>>>>>>>> On 10/1/2024 12:58 PM, joes wrote:
>>>>>>>>>> Am Tue, 01 Oct 2024 12:31:41 -0500 schrieb olcott:
>>>>>>>>>>> On 10/1/2024 8:09 AM, joes wrote:
>>>>>>>>>>>> Am Tue, 01 Oct 2024 07:39:18 -0500 schrieb olcott:
>>>>>>>>>>>>> On 10/1/2024 7:19 AM, olcott wrote:
>>>>>>>>>>>>>> https://www.google.com/search?q=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&sca_esv=889093c5cb21af9e&sca_upv=1&source=hp&ei=Muf7ZpOyMZHfwN4PwYL2gAc&iflsig=AL9hbdgAAAAAZvv1Qg04jNg2ze170z3a8BSGu8pA29Fj&ved=0ahUKEwiTk7zkk-2IAxWRL9AFHUGBHXAQ4dUDCBg&uact=5&oq=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&gs_lp=Egdnd3Mtd2l6IjxUZXJtaW5hdGlvbiBBbmFseXplciBIIGlzIE5vdCBGb29sZWQgYnkgUGF0aG9sb2dpY2FsIElucHV0IERIAFAAWABwAHgAkAEAmAEAoAEAqgEAuAEDyAEA-AEC-AEBmAIAoAIAmAMAkgcAoAcA&sclient=gws-wiz
>>>>>>>>>>>>> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
>>>>>>>>>>>> It sounds like it’s trained on your spam. LLMs don’t know anything
>>>>>>>>>>>> anyway.
>>>>>>>>>> 
>>>>>>>>>>>> I asked: „What happens when HHH tries to simulate itself?”
>>>>>>>>>>>> ChatGPT: [my comments in brackets]
>>>>>>>>>> 
>>>>>>>>>>>> ### 1. **Self-Invocation in HHH**
>>>>>>>>>>>> If `HHH` calls itself (i.e., `HHH(HHH)`), it creates an infinite loop
>>>>>>>>>>>> of calls unless there is some form of termination condition built into
>>>>>>>>>>>> `HHH`.
>>>>>>>>>>>> Without such a condition, `HHH` would never return, resulting in non-
>>>>>>>>>>>> termination.
>>>>>>>>>>>> [So not a decider.]
>>>>>>>>>>>> 
>>>>>>>>>>>> ### 2. **Halting Criteria**
>>>>>>>>>>>> In a well-designed termination analyzer, `HHH` would need to have
>>>>>>>>>>>> specific logic to prevent endless recursion:
>>>>>>>>>> 
>>>>>>>>>>>> [Not present.]
>>>>>>>>>>>> 
>>>>>>>>>>>> ### 3. **Practical Consequences**
>>>>>>>>>>>> If `HHH` lacks the ability to handle self-referential calls correctly:
>>>>>>>>>>>> - **Stack Overflow**: Continuous self-invocation will eventually
>>>>>>>>>>>> exhaust the stack memory, leading to a stack overflow error.
>>>>>>>>>>>> [Not considered in theory.]
>>>>>>>>>>>> - **Incorrect Results**: If not designed carefully, it may lead to an
>>>>>>>>>>>> incorrect conclusion about halting, misrepresenting itself as either
>>>>>>>>>>>> halting or non-halting.
>>>>>>>>>>>> 
>>>>>>>>>>>> ### 4. **Best Practices for Termination Analyzers**
>>>>>>>>>>>> To avoid these pitfalls, a termination analyzer should implement:
>>>>>>>>>>>> - **Guard Clauses**: Checks at the beginning of the function to prevent
>>>>>>>>>>>> unnecessary self-simulation.
>>>>>>>>>>>> - **State Management**: A way to track the functions being analyzed,
>>>>>>>>>>>> allowing `HHH` to recognize if it's being invoked with itself and act
>>>>>>>>>>>> accordingly.
>>>>>>>>>>>> 
>>>>>>>>>>>> ### Conclusion
>>>>>>>>>>>> In summary, if `HHH` tries to simulate itself without proper
>>>>>>>>>>>> safeguards, it risks falling into an infinite recursion or causing
>>>>>>>>>>>> a stack overflow. A robust design would include checks to prevent
>>>>>>>>>>>> self-invocation, ensuring the termination analyzer can handle all
>>>>>>>>>>>> cases, including its own function, gracefully.
>>>>>>>>>>>> 
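
A minimal C sketch of the "guard clause" / "state management" idea in the
quoted text above. HHH and DDD here are illustrative stand-ins rather than
the code actually under discussion, and the "simulation" is just a direct
call; the only point is to show how a re-entrancy check stops the infinite
regress:

#include <stdio.h>

typedef void (*func_t)(void);

static int hhh_depth = 0;     /* "state management": current nesting of HHH */

/* Stand-in analyzer: returns 1 if it judges p to halt, 0 otherwise.
   The "simulation" is simply a direct call of p.                    */
static int HHH(func_t p)
{
    if (hhh_depth > 0)        /* "guard clause": re-entry detected   */
        return 0;             /* give up and report "does not halt"  */
    hhh_depth++;
    p();
    hhh_depth--;
    return 1;                 /* p() came back, so report "halts"    */
}

/* Stand-in for the pathological input: it calls its analyzer on itself. */
static void DDD(void)
{
    HHH(DDD);
}

int main(void)
{
    printf("HHH(DDD) reports %d\n", HHH(DDD));
    return 0;
}

In this sketch the inner, re-entrant call is the one that gives up, so the
directly executed DDD returns and the outer HHH(DDD) reports 1; without the
hhh_depth check the calls would recurse until the stack overflowed, exactly
as the quoted text warns.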
>>>>>>>>>>> *You can continue this conversation with ChatGPT*
>>>>>>>>>> I’ll leave that to you.
>>>>>>>>>>> Click on the link and see how it answers this question:
>>>>>>>>>>> Is H a Halt Decider for D?
>>>>>>>>>> You should feed it our objections.
>>>>>>>>>> If you believe in it only when you prompt it, it is not suited as an
>>>>>>>>>> authority (fallacious anyway).
>>>>>>>>>> 
>>>>>>>>> 
>>>>>>>>> You feed it your objections.
>>>>>>>>> It will tell you how and why you are wrong.
>>>>>>>>> 
>>>>>>>> 
>>>>>>>> No, it will tell you something that matches the words you told it.
>>>>>>>> 
>>>>>>>> You don't seem to understand what Large Language Models are.
>>>>>>>> 
>>>>>>>> 
>>>>>>>> You seem to forget that LLMs know nothing of the "truth", only what
>>>>>>>> matches their training data.
>>>>>>>> 
>>>>>>>> They are known to be liars, just like you.
>>>>>>> 
>>>>>>> In the case of their evaluation of my work they are correct.
>>>>>>> 
>>>>>>> *Try this yourself*
>>>>>>> 
>>>>>>> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
>>>>>>> 
>>>>>>> Does HHH have to abort its emulation of DDD to prevent the infinite 
>>>>>>> execution of DDD?
>>>>>>> 
>>>>>>> 
>>>>>> 
>>>>>> Try asking it
>>>>>> 
>>>>>> Why is it ok for HHH to say that DDD doesn't halt when it will halt when run?
>>>>>> 
>>>>> 
>>>>> You guys try to pretend that the pathological relationship between
>>>>> HHH and DDD does not make any difference to the behavior of DDD,
>>>>> knowing full well that it does make a difference.
>>>> 
>>>> The behaviour of DDD is what DDD does if executed. As DDD takes no input,
>>>> its behaviour is always the same. What does "make a difference" mean
>>>> in this context?
>>>> 
>>> 
>>> The behavior of the directly executed DDD is essentially the
>>> behavior of what would otherwise be infinite recursion except
>>> that the second recursive call has already been aborted.
>>> 
>>> This is not the same as the behavior of DDD correctly emulated
>>> by the same emulator that it calls, where HHH cannot rely on DDD
>>> being aborted by any process other than itself.
>> 
>> So the behaviour of DDD "correctly" emulated is not the behaviour
>> of DDD.
>> 
> 
> Directly executed DDD, which can rely on having its second invocation
> aborted by HHH, has different behavior than DDD emulated by HHH
> according to the behavior specified by the x86 machine code of
> DDD and HHH, where HHH cannot rely on DDD having already been
> aborted by anything else.

DDD does not "rely" on anything. It simply is a specification of
a behaviour.
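
For what it is worth, this point can be put in code terms using the same
kind of illustrative stand-ins as in the sketch further up (repeated here
so the snippet stands alone; again, these are not the actual HHH and DDD):
DDD takes no input, so its behaviour is whatever executing it once does,
independent of what any analyzer reports about it.

#include <stdio.h>

typedef void (*func_t)(void);

static int depth = 0;

/* Stand-in analyzer with a re-entry guard, as in the earlier sketch. */
static int HHH(func_t p)
{
    if (depth > 0)
        return 0;
    depth++;
    p();
    depth--;
    return 1;
}

/* Takes no input, so it specifies exactly one behaviour. */
static void DDD(void)
{
    HHH(DDD);
}

int main(void)
{
    DDD();     /* direct execution: the nested HHH gives up, DDD returns */
    puts("directly executed DDD halted");
    return 0;
}

Here the direct execution reaches the final puts line, so the behaviour
specified by this DDD is a halting one, and nothing in that fact depends
on who runs it or on what verdict the analyzer happens to return.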

-- 
Mikko