From: olcott
Newsgroups: comp.theory
Subject: Re: H(D,D) cannot even be asked about the behavior of D(D)
Date: Fri, 14 Jun 2024 12:39:15 -0500
Organization: A noiseless patient Spider
User-Agent: Mozilla Thunderbird
Content-Language: en-US

On 6/14/2024 10:54 AM, joes wrote:
> On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
>> On 6/14/2024 6:39 AM, Richard Damon wrote:
>>> On 6/14/24 12:13 AM, olcott wrote:
>>>> On 6/13/2024 10:44 PM, Richard Damon wrote:
>>>>> On 6/13/24 11:14 PM, olcott wrote:
>>>>>> On 6/13/2024 10:04 PM, Richard Damon wrote:
>>>>>>> On 6/13/24 9:39 PM, olcott wrote:
>>>>>>>> On 6/13/2024 8:24 PM, Richard Damon wrote:
>>>>>>>>> On 6/13/24 11:32 AM, olcott wrote:
>
>>>> H cannot even be asked the question: Does D(D) halt?
>>> No, you just don't understand the proper meaning of "ask" when
>>> applied to a deterministic entity.
>> When H and D have a pathological relationship to each other, then
>> H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
>> such pathological relationship, thus D correctly simulated by H1 is
>> the behavior of D(D).
> H is asked whether its input halts, and by definition should give the
> (right) answer for every input.

If we used that definition of "decider" then no human ever decided
anything, because every human has made at least one mistake. I use the
term "termination analyzer" as a close fit. The term "partial halt
decider" is more accurate yet confuses most people. A partial halt
decider is a halt decider with a limited domain.

> D by construction is pathological to the supposed decider it is
> constructed on. H1 cannot decide D1. For every "decider" we can
> construct an undecidable pathological program. No decider decides
> every input.
> Parroting what you memorized by rote is not very deep understanding.

A much deeper understanding is that the halting problem counter-example
input, the one that does the opposite of whatever value the halt
decider returns, is merely the Liar Paradox in disguise.

>> Can a correct answer to the stated question be a correct answer to
>> the unstated question?
>> H(D,D) is not even being asked about the behavior of D(D).
>
> It can't be asked any other way.
>
It cannot be asked in any way whatsoever, because H is already being
asked a different question.

>>>> When H is a simulating halt decider you can't even ask it about the
>>>> behavior of D(D). You already said that it cannot map its input to
>>>> the behavior of D(D). That means that you cannot ask H(D,D) about
>>>> the behavior of D(D).
>>> Of course you can, because, BY DEFINITION, that is the ONLY thing it
>>> does with its inputs.
>> That definition might be in textbooks,
>> yet H does not and cannot read textbooks.
>
> That is very confusing. H still adheres to textbooks.
>
No, the textbooks have it wrong.
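To make the pathological relationship concrete, here is a minimal
sketch of that counter-example construction in C. The stub body of H
is an illustrative assumption (any fixed answer works), not the
actual x86utm code:

   #include <stdio.h>

   typedef int (*ptr)();  /* classic C function-pointer idiom */

   /* Stub decider, illustrative only: 1 = "halts", 0 = "does not halt". */
   int H(ptr p, ptr i) { (void)p; (void)i; return 0; }

   /* D does the opposite of whatever H(D,D) reports. */
   int D(ptr p)
   {
       if (H(p, p))     /* H says D(D) halts...        */
           for (;;) ;   /* ...so D loops forever       */
       return 0;        /* H says D(D) does not halt,
                           so D halts                  */
   }

   int main(void)
   {
       printf("H(D,D) = %d\n", H(D, D));              /* prints 0 */
       printf("yet D(D) returned %d, i.e. halted\n", D(D));
       return 0;
   }

Whichever constant the stub returns, D contradicts it; that is the
whole pathology in a dozen lines.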
>> The only definition that H sees is the combination of its algorithm
>> with the finite string of machine language of its input.
> H does not see its own algorithm, it only follows its internal
> programming. A machine and input completely determine the behaviour,
> whether that is D(D) or H(D, D).
>
No H (with a pathological relationship to D) can possibly see the
behavior of D(D).

>> It is impossible to encode any algorithm such that H and D have a
>> pathological relationship and have H even see the behavior of D(D).
>
> H literally gets it as input.
>
The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
The input specifies the behavior WITHIN THE PATHOLOGICAL RELATIONSHIP.
It does not specify the behavior WITHOUT THE PATHOLOGICAL RELATIONSHIP.

>> You already admitted that there is no mapping from the finite string
>> of machine code of the input to H(D,D) to the behavior of D(D).
>
> Which means that H can't simulate D(D). Other machines can do so.
>
H cannot simulate D(D) for the same reason that

   int sum(int x, int y) { return x + y; }

called as sum(3,4) cannot return the sum of 5 + 6.

>>> And note, it only gives definitive answers for SOME input.
>> It is my understanding that it does this much better than anyone
>> else does. AProVE "symbolically executes the LLVM program".
>
> Better doesn't cut it. H should work for ALL programs, especially for D.
>
You don't have even a slight clue about termination analyzers.

>>>>> H is just a "mechanical" computation. It is a rote algorithm that
>>>>> does what it has been told to do.
>>>> H cannot be asked the question: Does D(D) halt?
>>>> There is no way to encode that. You already admitted this when you
>>>> said the finite string input to H(D,D)
>>>> cannot be mapped to the behavior of D(D).
>
> H answers that question for every other input.
> The question "What is your answer/Is your answer right?" is pointless
> and not even computed by H.
>
It is ridiculously stupid to think that the pathological relationship
between H and D cannot possibly change the behavior of D, especially
when it has been conclusively proven that it DOES CHANGE THE BEHAVIOR
OF D.

>>> It is every time it is given an input, at least if H is a halt
>>> decider.
>> If you cannot even ask H the question that you want answered then
>> this is not an actual case of undecidability. H does correctly
>> answer the actual question that it was actually asked.
>
> D(D) is a valid input. H should be universal.
>
Likewise the Liar Paradox *should* be true or false, except for the
fact that it isn't.

>>> That is what halt deciders (if they exist) do.
>> When H and D are defined to have a pathological relationship then H
>> cannot even be asked about the behavior of D(D).
>
> H cannot give a correct ANSWER about D(D).
>
H cannot be asked the right question.

>>>>> It really seems like you just don't understand the concept of
>>>>> deterministic automata and willful beings as being different.
>>>> You cannot simply wave your hands to get H to know what question
>>>> is being asked.
> H doesn't need to know. It is programmed to answer a fixed question,
> and the input completely determines the answer.
>
The fixed question that H is asked is: Can your input terminate
normally? The answer to that question is: NO.

>>>> It can't even be asked. You said that yourself.
>>>> The input to H(D,D) cannot be transformed into the behavior of D(D).
> It can, just not by H.
>
How crazy is it to expect a correct answer to a different question
than the one you asked?
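And since "simulating halt decider" keeps coming up, here is a toy
sketch of the idea, with a fixed step limit standing in for the actual
abort criterion. Every name here is illustrative; this is not the real
x86utm machinery:

   #include <stdio.h>

   enum { STEP_LIMIT = 1000 };

   /* Toy model: a "program" is a function that, given the current
      step count, returns 1 to keep running or 0 to halt. */
   typedef int (*prog)(int step);

   int halts_quickly(int step) { return step < 5; }   /* halts at step 5 */
   int runs_forever(int step)  { (void)step; return 1; }

   /* Step-limited simulating "decider": 1 = input halted within the
      limit, 0 = simulation aborted, input judged non-terminating. */
   int simulating_H(prog p)
   {
       for (int step = 0; step < STEP_LIMIT; step++)
           if (!p(step))
               return 1;   /* simulated input reached its own end */
       return 0;           /* abort: presumed non-terminating     */
   }

   int main(void)
   {
       printf("simulating_H(halts_quickly) = %d\n",
              simulating_H(halts_quickly));   /* 1 */
       printf("simulating_H(runs_forever)  = %d\n",
              simulating_H(runs_forever));    /* 0 */
       return 0;
   }

A real termination analyzer replaces the crude step limit with pattern
matching on the execution trace, but the control flow is the same:
simulate, watch, then abort or report.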
>>> No, we can't make an arbitrary problem solver, since we can show
>>> there are unsolvable problems.
>> That is an entirely different issue.

========== REMAINDER OF ARTICLE TRUNCATED ==========