
Path: ...!weretis.net!feeder9.news.weretis.net!news.nk.ca!rocksolid2!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon <richard@damon-family.org>
Newsgroups: comp.theory
Subject: Re: The philosophy of computation reformulates existing ideas on a
 new basis ---
Date: Thu, 31 Oct 2024 19:08:49 -0400
Organization: i2pn2 (i2pn.org)
Message-ID: <0cbdead00fd4ebddccb71e228e47f1fed1696ba4@i2pn2.org>
References: <vfli1h$fj8s$1@dont-email.me> <vflue8$3nvp8$2@i2pn2.org>
 <vfmd8m$k2m7$1@dont-email.me>
 <bcd82d9f8a987d3884220c0df7b8f7204cb9de3e@i2pn2.org>
 <vfmueh$mqn9$1@dont-email.me>
 <ff039b922cabbb6d44f90aa71a52d8c2f446b6ab@i2pn2.org>
 <vfo95k$11qs1$1@dont-email.me> <vfp8c0$3tobi$2@i2pn2.org>
 <vfpbtq$1837o$2@dont-email.me> <vfq4h9$1fo1n$1@dont-email.me>
 <vfqrro$1jg6i$1@dont-email.me> <vfvnbk$2lj5i$1@dont-email.me>
 <vfvudo$2mcse$5@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 31 Oct 2024 23:08:49 -0000 (UTC)
Injection-Info: i2pn2.org;
	logging-data="366200"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
Content-Language: en-US
In-Reply-To: <vfvudo$2mcse$5@dont-email.me>
X-Spam-Checker-Version: SpamAssassin 4.0.0
Bytes: 6472
Lines: 134

On 10/31/24 8:50 AM, olcott wrote:
> On 10/31/2024 5:49 AM, Mikko wrote:
>> On 2024-10-29 14:35:34 +0000, olcott said:
>>
>>> On 10/29/2024 2:57 AM, Mikko wrote:
>>>> On 2024-10-29 00:57:30 +0000, olcott said:
>>>>
>>>>> On 10/28/2024 6:56 PM, Richard Damon wrote:
>>>>>> On 10/28/24 11:04 AM, olcott wrote:
>>>>>>> On 10/28/2024 6:16 AM, Richard Damon wrote:
>>>>>>>> The machine being used to compute the Halting Function has taken 
>>>>>>>> a finite string description, but the Halting Function itself has 
>>>>>>>> always taken a Turing Machine,
>>>>>>>>
>>>>>>>
>>>>>>> That is incorrect. It has always been that the finite string 
>>>>>>> description of a Turing machine is the input to the halt decider.
>>>>>>> There has always been a distinction between the abstraction and the
>>>>>>> encoding.
>>>>>>
>>>>>> Nope, read the problem you have quoted in the past.
>>>>>>
>>>>>
>>>>> Ultimately I trust Linz the most on this:
>>>>>
>>>>> the problem is: given the description of a Turing machine
>>>>> M and an input w, does M, when started in the initial
>>>>> configuration q0w, perform a computation that eventually halts?
>>>>> https://www.liarparadox.org/Peter_Linz_HP_317-320.pdf
>>>>>
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qy ∞
>>>>> Ĥ.q0 ⟨Ĥ⟩ ⊢* embedded_H ⟨Ĥ⟩ ⟨Ĥ⟩ ⊢* Ĥ.qn
>>>>>
>>>>> Linz also makes sure to ignore that the behavior of ⟨Ĥ⟩ ⟨Ĥ⟩
>>>>> correctly simulated by embedded_H cannot possibly reach
>>>>> either ⟨Ĥ.qy⟩ or ⟨Ĥ.qn⟩ because like everyone else he rejects
>>>>> simulation out of hand:
>>>>>
>>>>> We cannot find the answer by simulating the action of M on w,
>>>>> say by performing it on a universal Turing machine, because
>>>>> there is no limit on the length of the computation.
>>>>
>>>> That statement does not fully reject simulation but is correct in
>>>> the observation that non-halting cannot be determined in finite time
>>>> by a complete simulation, so something else is needed instead of or
>>>> in addition to a partial simulation. Linz does include simulating
>>>> Turing machines in his proof that no Turing machine is a halt decider.
>>>>
>>>
>>> *That people fail to agree with this and also fail to*
>>> *correctly point out any error seems to indicate dishonesty*
>>> *or a lack of technical competence*
>>>
>>> DDD emulated by HHH according to the semantics of the x86
>>> language cannot possibly reach its own "return" instruction
>>> whether or not any HHH ever aborts its emulation of DDD.
>>
>> - irrelevant
> 
> 100% perfectly relevant within the philosophy of computation
> 
> *THE TITLE OF THIS THREAD*
> [The philosophy of computation reformulates existing ideas on a new 
> basis ---]
> 
>> - counterfactual
>>
> You can baselessly claim that verified facts are counterfactual,
> but you cannot show this.
> 
> _DDD()
> [00002172] 55         push ebp      ; housekeeping
> [00002173] 8bec       mov ebp,esp   ; housekeeping
> [00002175] 6872210000 push 00002172 ; push DDD
> [0000217a] e853f4ffff call 000015d2 ; call HHH(DDD)
> [0000217f] 83c404     add esp,+04
> [00002182] 5d         pop ebp
> [00002183] c3         ret
> Size in bytes:(0018) [00002183]
> 
> If you don't even understand the x86 language and claim
> that I am wrong that would make you a liar.

And that system has UNDEFINED behavior per the semantics of the x86 
language, as the code at 000015d2 has not been defined per the x86 language.

If you want it to be part of the input to HHH, then it needs to be 
accepted as part of the input (even if not listed, it must be accepted 
that the code that is there in this instance is part of the input, and 
thus cannot change).

> 
> https://chatgpt.com/share/67158ec6-3398-8011-98d1-41198baa29f2
> ChatGPT explains all of the details of how and why I am correct
> and will vigorously argue against anyone that says otherwise.

Nope, it admits that you are wrong when you tell it to forget your lies.

> 
> The key to getting correct reasoning from ChatGPT is to exhaustively
> explain all of the details of an algorithm and its input such that
> your explanation and its analysis fits within 4000 words. When you go
> over that limit it simply forgets key details and makes big mistakes.

But you lied to it, and so your argument is based on lies.

You tell it that HHH actually simulates until it proves something, but 
then it doesn't actually prove that fact; it only shows that it would 
seem to be true if it is assumed to be true.

Even YOU have admitted that logic like that isn't allowed.

> 
> ChatGPT totally understands simulating termination analyzer HHH
> applied to input DDD: (as proven by the above link)
> 
> void DDD()
> {
>    HHH(DDD);
>    return;
> }
> 
> ChatGPT gets overwhelmed by this same HHH applied to DD
> int DD()
> {
>    int Halt_Status = HHH(DD);
>    if (Halt_Status)
>      HERE: goto HERE;
>    return Halt_Status;
> }
> 
> 
>