Path: nntp.eternal-september.org!news.eternal-september.org!eternal-september.org!feeder3.eternal-september.org!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon
Newsgroups: comp.theory,comp.ai.philosophy,sci.logic
Subject: Re: Title: A Structural Analysis of the Standard Halting Problem Proof
Date: Sun, 20 Jul 2025 18:48:16 -0400
Organization: i2pn2 (i2pn.org)
Message-ID: <76a984ed68ff0ff7b7b54b237b1867012791ae93@i2pn2.org>
References: <105ht1n$36s20$1@dont-email.me> <105iuqi$3cagp$4@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 20 Jul 2025 23:43:24 -0000 (UTC)
Injection-Info: i2pn2.org; logging-data="1509501"; mail-complaints-to="usenet@i2pn2.org"; posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
In-Reply-To: <105iuqi$3cagp$4@dont-email.me>
Content-Language: en-US

On 7/20/25 10:34 AM, olcott wrote:
> On 7/20/2025 6:13 AM, Richard Damon wrote:
>> On 7/20/25 12:58 AM, olcott wrote:
>>> Title: A Structural Analysis of the Standard Halting Problem Proof
>>>
>>> Author: PL Olcott
>>>
>>> Abstract:
>>> This paper presents a formal critique of the standard proof of the
>>> undecidability of the Halting Problem. While we do not dispute the
>>> conclusion that the Halting Problem is undecidable, we argue that the
>>> conventional proof fails to establish this conclusion due to a
>>> fundamental misapplication of Turing machine semantics. Specifically,
>>> we show that the contradiction used in the proof arises from
>>> conflating the behavior of encoded simulations with direct execution,
>>> and from making assumptions about a decider's domain that do not hold
>>> under a rigorous model of computation.
>>>
>>>
>>
>> Your problem is you don't understand the meaning of the words you are
>> using.
>>
>> You are starting with an incorrect assumption that a "Correct
>> Simulation" can possibly show behavior that is not in the direct
>> execution of the machine, but that is IMPOSSIBLE, as the DEFINITION of
>> "Correct Simulation" is that it reveals exactly the same behavior as
>> the direct execution of the machine.
>>
>
>
> Misrepresentation of Input:
> The standard proof assumes a decider
> H(M,x) that determines whether machine
> M halts on input x.
>
> But this formulation is flawed, because:
> Turing machines can only process finite
> encodings (e.g. ⟨M⟩), not executable entities
> like M.
>
> So the valid formulation must be
> H(⟨M⟩,x), where ⟨M⟩ is a string.
>

Since "H(M,x)" is YOUR notation, and the Linz notation was H w, it
seems that ChatGPT is pointing out the error in YOUR work, not the
proof.

Note: the Sipser proof that uses call notation defines how it works,
and H(D) is DEFINED as passing a specific form of representation of
the program D to H.

Your problem is you just don't understand what you are talking about,
and thus can't explain it correctly to the AI, and thus you get the
nonsense back that you do.

>
>
>> You talk about the "misapplication of Turing Machine Semantics", but
>> you have shown that you don't understand what those are,
>>
>> Sorry, your abstract just reveals that you don't know what you are
>> talking about.
>
>
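For readers following the thread: the point that the decider receives a
string encoding, not a running machine, is already built into the
standard diagonal argument. A minimal Python sketch of that argument is
below; the names `claimed_halts`, `D`, and `D_SOURCE` are illustrative
(not from Linz or Sipser), `claimed_halts` stands in for any candidate
decider (here one that always guesses "halts"), and `D_SOURCE` is a
placeholder for the quine-style self-encoding a real proof constructs.

```python
def claimed_halts(encoded_program: str, encoded_input: str) -> bool:
    """Hypothetical halting decider taking a STRING encoding <M>, not M.

    Any such total decider must be wrong about some input; this toy
    version simply predicts "halts" for everything.
    """
    return True


# Placeholder for the encoding of D itself; the real proof builds this
# self-reference via the recursion (Kleene fixed-point) theorem.
D_SOURCE = "<encoding of D>"


def D(encoded_self: str) -> str:
    """The diagonal program: do the opposite of the decider's prediction
    about D applied to its own encoding."""
    if claimed_halts(encoded_self, encoded_self):
        return "loop"   # stands in for entering an infinite loop
    else:
        return "halt"


prediction = claimed_halts(D_SOURCE, D_SOURCE)  # decider predicts: halts
actual = D(D_SOURCE)                            # but D then loops
print(prediction, actual)                       # the prediction is refuted
```

Because D consults the decider on D's own encoding and then does the
opposite, every candidate decider is refuted on this one input; whether
the decider runs, simulates, or statically analyzes the string changes
nothing about the contradiction.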