Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon <richard@damon-family.org>
Newsgroups: comp.theory
Subject: Re: Even Google AI Overview understands me now --- different
 execution traces have different behavior !!!
Date: Thu, 3 Oct 2024 18:34:19 -0400
Organization: i2pn2 (i2pn.org)
Message-ID: <9e484af8a86021c12f4738b9695460f7c4cb21b0@i2pn2.org>
References: <vdgpbs$2nmcm$1@dont-email.me> <vdgqhn$2nmcm$2@dont-email.me>
 <7c6cede5237e3eafee262c74dd1a1c90c6b2ffbb@i2pn2.org>
 <vdhblt$2qm1j$2@dont-email.me>
 <cafee8d7a14edd7b1d76bb706c36eef06ae82896@i2pn2.org>
 <vdi0f8$2u1aq$1@dont-email.me>
 <53a60609211a04a123adafa525bac39b5cbc6959@i2pn2.org>
 <vdjlum$38t86$4@dont-email.me>
 <bf681f4404a7df8e3ffc2059dcd7c5c302aeeff1@i2pn2.org>
 <vdkud3$3ipp4$1@dont-email.me> <vdm1tl$3npme$1@dont-email.me>
 <vdn0nv$3sa9k$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 3 Oct 2024 22:34:19 -0000 (UTC)
Injection-Info: i2pn2.org;
	logging-data="450643"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
In-Reply-To: <vdn0nv$3sa9k$1@dont-email.me>
Content-Language: en-US
Bytes: 9561
Lines: 205

On 10/3/24 5:01 PM, olcott wrote:
> On 10/3/2024 7:15 AM, Mikko wrote:
>> On 2024-10-03 02:09:39 +0000, olcott said:
>>
>>> On 10/2/2024 5:48 PM, Richard Damon wrote:
>>>> On 10/2/24 10:39 AM, olcott wrote:
>>>>> On 10/2/2024 6:08 AM, Richard Damon wrote:
>>>>>> On 10/1/24 7:26 PM, olcott wrote:
>>>>>>> On 10/1/2024 12:58 PM, joes wrote:
>>>>>>>> On Tue, 01 Oct 2024 12:31:41 -0500, olcott wrote:
>>>>>>>>> On 10/1/2024 8:09 AM, joes wrote:
>>>>>>>>>> On Tue, 01 Oct 2024 07:39:18 -0500, olcott wrote:
>>>>>>>>>>> On 10/1/2024 7:19 AM, olcott wrote:
>>>>>>>>>>>> https://www.google.com/search?
>>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>> q=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&sca_esv=889093c5cb21af9e&sca_upv=1&source=hp&ei=Muf7ZpOyMZHfwN4PwYL2gAc&iflsig=AL9hbdgAAAAAZvv1Qg04jNg2ze170z3a8BSGu8pA29Fj&ved=0ahUKEwiTk7zkk-2IAxWRL9AFHUGBHXAQ4dUDCBg&uact=5&oq=Termination+Analyzer+H+is+Not+Fooled+by+Pathological+Input+D&gs_lp=Egdnd3Mtd2l6IjxUZXJtaW5hdGlvbiBBbmFseXplciBIIGlzIE5vdCBGb29sZWQgYnkgUGF0aG9sb2dpY2FsIElucHV0IERIAFAAWABwAHgAkAEAmAEAoAEAqgEAuAEDyAEA-
>>>>>>>>>> AEC-AEBmAIAoAIAmAMAkgcAoAcA&sclient=gws-wiz
>>>>>>>>>>> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
>>>>>>>>>> It sounds like it’s trained on your spam. LLMs don’t know 
>>>>>>>>>> anything
>>>>>>>>>> anyway.
>>>>>>>>
>>>>>>>>>> I asked: „What happens when HHH tries to simulate itself?”
>>>>>>>>>> ChatGPT: [my comments in brackets]
>>>>>>>>
>>>>>>>>>> ### 1. **Self-Invocation in HHH**
>>>>>>>>>> If `HHH` calls itself (i.e., `HHH(HHH)`), it creates an 
>>>>>>>>>> infinite loop
>>>>>>>>>> of calls unless there is some form of termination condition 
>>>>>>>>>> built into
>>>>>>>>>> `HHH`.
>>>>>>>>>> Without such a condition, `HHH` would never return, resulting 
>>>>>>>>>> in non-
>>>>>>>>>> termination.
>>>>>>>>>> [So not a decider.]
>>>>>>>>>>
>>>>>>>>>> ### 2. **Halting Criteria**
>>>>>>>>>> In a well-designed termination analyzer, `HHH` would need to have
>>>>>>>>>> specific logic to prevent endless recursion:
>>>>>>>>
>>>>>>>>>> [Not present.]
>>>>>>>>>>
>>>>>>>>>> ### 3. **Practical Consequences**
>>>>>>>>>> If `HHH` lacks the ability to handle self-referential calls 
>>>>>>>>>> correctly:
>>>>>>>>>> - **Stack Overflow**: Continuous self-invocation will eventually
>>>>>>>>>> exhaust the stack memory, leading to a stack overflow error.
>>>>>>>>>> [Not considered in theory.]
>>>>>>>>>> - **Incorrect Results**: If not designed carefully, it may 
>>>>>>>>>> lead to an
>>>>>>>>>> incorrect conclusion about halting, misrepresenting itself as 
>>>>>>>>>> either
>>>>>>>>>> halting or non-halting.
>>>>>>>>>>
>>>>>>>>>> ### 4. **Best Practices for Termination Analyzers**
>>>>>>>>>> To avoid these pitfalls, a termination analyzer should implement:
>>>>>>>>>> - **Guard Clauses**: Checks at the beginning of the function 
>>>>>>>>>> to prevent
>>>>>>>>>> unnecessary self-simulation.
>>>>>>>>>> - **State Management**: A way to track the functions being 
>>>>>>>>>> analyzed,
>>>>>>>>>> allowing `HHH` to recognize if it's being invoked with itself 
>>>>>>>>>> and act
>>>>>>>>>> accordingly.
>>>>>>>>>>
>>>>>>>>>> ### Conclusion In summary, if `HHH` tries to simulate itself 
>>>>>>>>>> without
>>>>>>>>>> proper safeguards,
>>>>>>>>>> it risks falling into an infinite recursion or causing a stack
>>>>>>>>>> overflow. A robust design would include checks to prevent
>>>>>>>>>> self-invocation, ensuring the termination analyzer can handle all
>>>>>>>>>> cases, including its own function,
>>>>>>>>>> gracefully.
>>>>>>>>>>
>>>>>>>>> *You can continue this conversation with ChatGPT*
>>>>>>>> I’ll leave that to you.
>>>>>>>>> Click on the link and see how it answers this question:
>>>>>>>>> Is H a Halt Decider for D?
>>>>>>>> You should feed it our objections.
>>>>>>>> If you believe in it only when you prompt it, it is not suited 
>>>>>>>> as an
>>>>>>>> authority (fallacious anyway).
>>>>>>>>
>>>>>>>
>>>>>>> You feed it your objections.
>>>>>>> It will tell you how and why you are wrong.
>>>>>>>
>>>>>>
>>>>>> No, it will tell you something that matches the words you told it.
>>>>>>
>>>>>> You don't seem to understand what Large Language Models are.
>>>>>>
>>>>>>
>>>>>> You seem to forget that LLMs know nothing of the "truth", only what 
>>>>>> matches their training data.
>>>>>>
>>>>>> They are known to be liars, just like you.
>>>>>
>>>>> In the case of their evaluation of my work they are correct.
>>>>>
>>>>> *Try this yourself*
>>>>>
>>>>> https://chatgpt.com/share/66fbec5c-7b10-8011-9ce6-3c26424cb21c
>>>>>
>>>>> Does HHH have to abort its emulation of DDD to prevent the infinite 
>>>>> execution of DDD?
>>>>>
>>>>>
>>>>
>>>> Try asking it
>>>>
>>>> Why is it ok for HHH to say that DDD doesn't halt when it will halt 
>>>> when run?
>>>>
>>>
>>> You guys try to pretend that the pathological relationship between
>>> HHH and DDD does not make any difference to the behavior of DDD
>>> knowing full well that it does make a difference.
>>
>> The behaviour of DDD is what DDD does if executed. As DDD takes no input
>> its behaviour is always the same. What does "make a difference" mean
>> in this context?
>>
> 
> The behavior of the directly executed DDD is essentially the
> behavior of what would otherwise be infinite recursion except
> that the second recursive call has already been aborted.

No, the recursion was NEVER infinite, because the HHH that it calls 
ALWAYS aborts its simulation at that point.
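
Here is a minimal sketch of that call structure. It is only a toy
stand-in, not the actual x86-emulating termination analyzer from the
thread: the names HHH and DDD follow the discussion, but the abort
mechanism is just a nesting counter (an assumption made so the example
compiles and runs). It shows the point above: the HHH that DDD calls
always aborts and returns, so the recursion is never infinite and the
directly executed DDD halts.

/* Hedged sketch: a nesting counter stands in for HHH's real abort logic. */
#include <stdio.h>

static int nesting = 0;            /* stand-in for HHH's abort condition   */

int HHH(void (*p)(void))
{
    if (nesting > 0)
        return 0;                  /* abort the nested simulation          */
    nesting++;
    p();                           /* "simulate" p by running it           */
    nesting--;
    return 1;                      /* p reached its final return           */
}

void DDD(void)
{
    HHH(DDD);                      /* the pathological self-reference      */
}                                  /* ...and then DDD returns              */

int main(void)
{
    DDD();                         /* terminates, because the called HHH
                                      aborts and returns                   */
    printf("the directly executed DDD halted\n");
    return 0;
}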

> 
> This is not the same as the behavior of DDD correctly emulated
> by the same emulator that it calls where HHH cannot rely on DDD
> being aborted by any other process than itself.

No, that is EXACTLY the behavior of DDD correctly emulated.

The fact that HHH doesn't do that means your criterion is just invalid.
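
To make the disagreement concrete, here is a second hedged sketch built
on the same toy skeleton as above. The "pattern" rule below (report
non-halting whenever the simulated code is seen calling HHH) is an
assumption standing in for the real abort logic; none of this is the
actual code from the thread. Under that assumed rule HHH(DDD) answers
"does not halt", yet the directly executed DDD plainly returns, which is
exactly the sense in which the reporting criterion is invalid.

/* Hedged sketch of the disagreement: HHH's verdict vs. the executed DDD. */
#include <stdio.h>

static int nesting = 0;
static int saw_self_call = 0;      /* set when the simulated code calls HHH */

int HHH(void (*p)(void))
{
    if (nesting > 0) {             /* we are inside a simulation            */
        saw_self_call = 1;         /* record the "pathological" pattern     */
        return 0;                  /* abort the nested simulation           */
    }
    nesting++;
    saw_self_call = 0;
    p();                           /* "simulate" p by running it            */
    nesting--;
    return saw_self_call ? 0 : 1;  /* report "does not halt" on the pattern */
}

void DDD(void)
{
    HHH(DDD);
}

int main(void)
{
    DDD();                                     /* halts when actually run   */
    printf("HHH(DDD) reports %d, ", HHH(DDD)); /* yet HHH reports 0         */
    printf("but the executed DDD halted\n");
    return 0;
}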

> 
> Doing an entirely different thing and expecting the same results
> is just as insane as doing the same thing and expecting different results.
> 
> 

Right, the fact that HHH presumes that the HHH that DDD calls will do 
something different than it actually does makes HHH wrong, and the fact 
that its programmer thinks it WILL do something different shows he is 
actually INSANE.

Think of it this way: if you were DEAD, you wouldn't need to worry any 
more about dying. Does that mean that you shouldn't worry about yourself 
dying?

No, because you are not dead yet.

Thus, by the same token, HHH can't consider the HHH that DDD is 
calling to be different from what it will do itself, and thus, since it 
WILL abort, it is just lying to itself (or its programmer lied to it) to 
think it will behave differently.

All you are doing is PROVING that you think LYING is valid logic, and 
that it is ok to think things are the way they are not.

Thus you are AGREEING with the Climate Change and Election deniers; you 
are telling them it is ok to ignore the actual facts and hold on to
========== REMAINDER OF ARTICLE TRUNCATED ==========