Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon <richard@damon-family.org>
Newsgroups: comp.theory
Subject: Re: Even Google AI Overview understands me now
Date: Wed, 2 Oct 2024 18:48:02 -0400
Organization: i2pn2 (i2pn.org)
Message-ID: <bf681f4404a7df8e3ffc2059dcd7c5c302aeeff1@i2pn2.org>
References: <vdgpbs$2nmcm$1@dont-email.me> <vdgqhn$2nmcm$2@dont-email.me>
 <7c6cede5237e3eafee262c74dd1a1c90c6b2ffbb@i2pn2.org>
 <vdhblt$2qm1j$2@dont-email.me>
 <cafee8d7a14edd7b1d76bb706c36eef06ae82896@i2pn2.org>
 <vdi0f8$2u1aq$1@dont-email.me>
 <53a60609211a04a123adafa525bac39b5cbc6959@i2pn2.org>
 <vdjlum$38t86$4@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Wed, 2 Oct 2024 22:48:02 -0000 (UTC)
Injection-Info: i2pn2.org;
	logging-data="308244"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <vdjlum$38t86$4@dont-email.me>
X-Spam-Checker-Version: SpamAssassin 4.0.0
Content-Language: en-US
Bytes: 6057
Lines: 119

On 10/2/24 10:39 AM, olcott wrote:
> On 10/2/2024 6:08 AM, Richard Damon wrote:
>> On 10/1/24 7:26 PM, olcott wrote:
>>> On 10/1/2024 12:58 PM, joes wrote:
>>>> On Tue, 01 Oct 2024 12:31:41 -0500, olcott wrote:
>>>>> On 10/1/2024 8:09 AM, joes wrote:
>>>>>> On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
>>>>>>> On 10/1/2024 7:19 AM, olcott wrote:
>>>>>>>> https://www.google.com/search?q=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&sca_esv=889093c5cb21af9e&sca_upv=1&source=hp&ei=Muf7ZpOyMZHfwN4PwYL2gAc&iflsig=AL9hbdgAAAAAZvv1Qg04jNg2ze170z3a8BSGu8pA29Fj&ved=0ahUKEwiTk7zkk-2IAxWRL9AFHUGBHXAQ4dUDCBg&uact=5&oq=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&gs_lp=Egdnd3Mtd2l6IjxUZXJtaW5hdGlvbiBBbmFseXplciBIIGlzIE5vdCBGb29sZWQgYnkgUGF0aG9sb2dpY2FsIElucHV0IERIAFAAWABwAHgAkAEAmAEAoAEAqgEAuAEDyAEA-AEC-AEBmAIAoAIAmAMAkgcAoAcA&sclient=gws-wiz
>>>>>>> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
>>>>>> It sounds like it’s trained on your spam. LLMs don’t know anything
>>>>>> anyway.
>>>>
>>>>>> I asked: „What happens when HHH tries to simulate itself?”
>>>>>> ChatGPT: [my comments in brackets]
>>>>
>>>>>> ### 1. **Self-Invocation in HHH**
>>>>>> If `HHH` calls itself (i.e., `HHH(HHH)`), it creates an infinite
>>>>>> loop of calls unless there is some form of termination condition
>>>>>> built into `HHH`. Without such a condition, `HHH` would never
>>>>>> return, resulting in non-termination.
>>>>>> [So not a decider.]
>>>>>>
>>>>>> ### 2. **Halting Criteria**
>>>>>> In a well-designed termination analyzer, `HHH` would need to have
>>>>>> specific logic to prevent endless recursion:
>>>>
>>>>>> [Not present.]
>>>>>>
>>>>>> ### 3. **Practical Consequences**
>>>>>> If `HHH` lacks the ability to handle self-referential calls correctly:
>>>>>> - **Stack Overflow**: Continuous self-invocation will eventually
>>>>>> exhaust the stack memory, leading to a stack overflow error.
>>>>>> [Not considered in theory.]
>>>>>> - **Incorrect Results**: If not designed carefully, it may lead to an
>>>>>> incorrect conclusion about halting, misrepresenting itself as either
>>>>>> halting or non-halting.
>>>>>>
>>>>>> ### 4. **Best Practices for Termination Analyzers**
>>>>>> To avoid these pitfalls, a termination analyzer should implement:
>>>>>> - **Guard Clauses**: Checks at the beginning of the function to
>>>>>> prevent unnecessary self-simulation.
>>>>>> - **State Management**: A way to track the functions being analyzed,
>>>>>> allowing `HHH` to recognize if it's being invoked with itself and act
>>>>>> accordingly.
>>>>>>
>>>>>> ### Conclusion
>>>>>> In summary, if `HHH` tries to simulate itself without proper
>>>>>> safeguards, it risks falling into an infinite recursion or causing
>>>>>> a stack overflow. A robust design would include checks to prevent
>>>>>> self-invocation, ensuring the termination analyzer can handle all
>>>>>> cases, including its own function, gracefully.
>>>>>>
>>>>> *You can continue this conversation with ChatGPT*
>>>> I’ll leave that to you.
>>>>> Click on the link and see how it answers this question:
>>>>> Is H a Halt Decider for D?
>>>> You should feed it our objections.
>>>> If you believe in it only when you prompt it, it is not suited as an
>>>> authority (fallacious anyway).
>>>>
>>>
>>> You feed it your objections.
>>> It will tell you how and why you are wrong.
>>>
>>
>> No, it will tell you something that matches the words you told it.
>>
>> You don't seem to understand what Large Language Models are.
>>
>>
>> You seem to forget that LLMs know nothing of the "truth", only what 
>> matches their training data.
>>
>> They are known to be liars, just like you.
> 
> In the case of their evaluation of my work they are correct.
> 
> *Try this yourself*
> 
> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
> 
> Does HHH have to abort its emulation of DDD to prevent the infinite 
> execution of DDD?
> 
> 

Try asking it

Why is it OK for HHH to say that DDD doesn't halt when it will halt when run?

And it ADMITS that HHH can't give the right answer because it sees a problem.
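
(For reference, a minimal, self-contained sketch of the DDD/HHH pattern 
being argued about. The names DDD and HHH come from the thread; the body 
of HHH below is a hypothetical stand-in, a crude re-entry flag in place 
of the actual simulation logic, roughly the "guard clause" idea from the 
quoted ChatGPT text. It only shows the shape of the disagreement: DDD 
halts when run directly, yet HHH reports that it does not.)

/* Hypothetical reconstruction, not the code under discussion. */
#include <stdio.h>

typedef void (*func)(void);

static int inside = 0;        /* stand-in for HHH's "abort the simulation" logic */

int HHH(func p)
{
    if (inside)               /* nested invocation detected: give up */
        return 0;             /* report "does not halt" */
    inside = 1;
    p();                      /* "simulate" p by running it (stand-in for emulation) */
    inside = 0;
    return 0;                 /* HHH still reports non-halting for DDD */
}

void DDD(void)
{
    HHH(DDD);                 /* DDD calls the analyzer on itself */
    return;                   /* this line is reached, so DDD halts */
}

int main(void)
{
    DDD();                                  /* direct run: DDD returns, i.e. it halts */
    printf("HHH(DDD) == %d\n", HHH(DDD));   /* yet the analyzer reports 0 */
    return 0;
}

Compiled and run, main() returns from the direct call to DDD() and then 
prints HHH(DDD) == 0; whether reporting 0 there can ever be the "right" 
answer is exactly what is in dispute.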

Then ask it:

Does this mean it is OK to give a wrong answer at times?


Yes, it can be acceptable for a termination analyzer like HHH to give a 
"wrong" answer in certain contexts. This mainly hinges on the 
distinction between the theoretical analysis of a program and its actual 
execution behavior. Here’s a more detailed look at this concept:


So, ChatGPT is just ADMITTING that you have trained it that it is 
permissible to *LIE* about the answer.

Sorry, you are just proving that you don't know what "truth" actually means.