From: olcott
Newsgroups: comp.theory
Subject: Re: H(D,D) cannot even be asked about the behavior of D(D)
Date: Sat, 15 Jun 2024 07:21:37 -0500

On 6/15/2024 6:34 AM, joes wrote:
> On Fri, 14 Jun 2024 12:39:15 -0500, olcott wrote:
>> On 6/14/2024 10:54 AM, joes wrote:
>>> On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
>>>> On 6/14/2024 6:39 AM, Richard Damon wrote:
>>>>> On 6/14/24 12:13 AM, olcott wrote:
>>>>>> On 6/13/2024 10:44 PM, Richard Damon wrote:
>>>>>>> On 6/13/24 11:14 PM, olcott wrote:
>>>>>>>> On 6/13/2024 10:04 PM, Richard Damon wrote:
>>>>>>>>> On 6/13/24 9:39 PM, olcott wrote:
>>>>>>>>>> On 6/13/2024 8:24 PM, Richard Damon wrote:
>>>>>>>>>>> On 6/13/24 11:32 AM, olcott wrote:
>
>>>> When H and D have a pathological relationship to each other, then
>>>> H(D,D) is not being asked about the behavior of D(D). H1(D,D) has
>>>> no such pathological relationship, thus D correctly simulated by
>>>> H1 is the behavior of D(D).
> What is H1 asked?
>>> H is asked whether its input halts, and by definition should give
>>> the (right) answer for every input.
>> If we used that definition of "decider" then no human ever decided
>> anything, because every human has made at least one mistake.
> Yes. Humans are not machines.
>> I use the term "termination analyzer" as a close fit. The term
>> "partial halt decider" is more accurate, yet confuses most people.
> I have not seen you use that term before. You have not called it
> partial. That was confusing.
>
>>> D by construction is pathological to the supposed decider it is
>>> constructed on. H1 cannot decide D1. For every "decider" we can
>>> construct an undecidable pathological program. No decider decides
>>> every input.
>> Parroting what you memorized by rote is not very deep understanding.
> This was my own phrasing. Can you explain the halting problem proof?
>> Understanding that the halting problem counter-example input, which
>> does the opposite of whatever value the halt decider returns, is
>> merely the Liar Paradox in disguise is a much deeper understanding.
>
> I know that.
>
If you really knew that then you would know that the halting problem
proof is a mere ruse.

>>>> H(D,D) is not even being asked about the behavior of D(D).
>>> It can't be asked any other way.
>> It can't be asked in any way whatsoever, because it is already
>> being asked a different question.
> Is that question "Do you answer yes?"?
>
>>>>>> When H is a simulating halt decider you can't even ask it about
>>>>>> the behavior of D(D). You already said that it cannot map its
>>>>>> input to the behavior of D(D). That means that you cannot ask
>>>>>> H(D,D) about the behavior of D(D).
>>>>> Of course you can, because, BY DEFINITION, that is the ONLY
>>>>> thing it does with its inputs.
>>>> That definition might be in textbooks, yet H does not and cannot
>>>> read textbooks.
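To pin down what both sides mean by the "pathological relationship",
here is a minimal runnable C sketch of the construction as it is
usually rendered in these threads. Nothing in this post specifies the
bodies of H or H1, so they are stubbed out with fixed placeholder
verdicts; the only point here is the control flow of D and D1.

/* Minimal sketch. H and H1 are claimed halt deciders: they return
   1 for "halts" and 0 for "does not halt". Their one-line bodies
   are placeholder guesses (assumptions, not specified anywhere in
   this thread), present only so the code compiles and runs. */

#include <stdio.h>

typedef int (*ptr)();              /* fn pointer, as used in these threads */

int H (ptr p, ptr i) { return 0; } /* placeholder verdict: "does not halt" */
int H1(ptr p, ptr i) { return 1; } /* placeholder verdict: "halts"         */

int D(ptr x)          /* pathological to H: does the opposite of H  */
{
    if (H(x, x))      /* if H says D(D) halts ...                   */
        for (;;) { }  /* ... then loop forever                      */
    return 0;         /* if H says D(D) loops, halt at once         */
}

int D1(ptr x)         /* the same trap, rebuilt around H1           */
{
    if (H1(x, x))
        for (;;) { }
    return 0;
}

int main(void)
{
    printf("D(D) returned %d, i.e. it halted,\n", D((ptr)D));
    printf("so H(D,D) = %d (\"does not halt\") is wrong.\n",
           H((ptr)D, (ptr)D));
    printf("H1(D,D) = %d happens to be right about D(D),\n",
           H1((ptr)D, (ptr)D));
    printf("but D1(D1) would loop forever, refuting H1 in turn.\n");
    return 0;
}

With these placeholders, D(D) halts and H is refuted; swapping the
guesses merely moves the refutation to the other decider, which is
the point made above that every decider has such a counter-example
(H1 cannot decide D1).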
>>> That is very confusing. H still adheres to textbooks.
>> No, the textbooks have it wrong.
>
>>>> The only definition that H sees is the combination of its
>>>> algorithm with the finite string of machine language of its
>>>> input.
>>> H does not see its own algorithm, it only follows its internal
>>> programming. A machine and input completely determine the
>>> behaviour, whether that is D(D) or H(D,D).
>> No H (with a pathological relationship to D) can possibly see the
>> behavior of D(D).
> That is not a problem with D, but with H not being total.
>
>>>> It is impossible to encode any algorithm such that H and D have
>>>> a pathological relationship and have H even see the behavior of
>>>> D(D).
>>> H literally gets it as input.
>> The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
>> The input specifies the behavior WITHIN THE PATHOLOGICAL
>> RELATIONSHIP. It does not specify the behavior WITHOUT THE
>> PATHOLOGICAL RELATIONSHIP.
> There is no difference. If an H exists, it gives one answer. D then
> does the opposite. H cannot change its answer. Other analysers can
> see that H gives the wrong answer.
>
>>>> You already admitted that there is no mapping from the finite
>>>> string of machine code of the input to H(D,D) to the behavior of
>>>> D(D).
>>> Which means that H can't simulate D(D). Other machines can do so.
>> H cannot simulate D(D) for the same reason that, given
>> int sum(int x, int y) { return x + y; },
>> the call sum(3,4) cannot return the sum of 5 + 6.
>
>>>>> And note, it only gives definitive answers for SOME inputs.
>>>> It is my understanding that it does this much better than anyone
>>>> else does. AProVE "symbolically executes the LLVM program".
>>> Better doesn't cut it. H should work for ALL programs, especially
>>> for D.
>> You don't even have a slight clue about termination analyzers.
> Why do you say that? A (partial) termination analyser doesn't
> disprove the halting problem.
>
>>>>>> H cannot be asked the question "Does D(D) halt?"
>>>>>> There is no way to encode that. You already admitted this when
>>>>>> you said the finite string input to H(D,D) cannot be mapped to
>>>>>> the behavior of D(D).
>>> H answers that question for every other input.
>>> The question "What is your answer/Is your answer right?" is
>>> pointless and not even computed by H.
>> It is ridiculously stupid to think that the pathological
>> relationship between H and D cannot possibly change the behavior
>> of D, especially when it has been conclusively proven that it DOES
>> CHANGE THE BEHAVIOR OF D.
> D as a machine is completely specified and a valid Turing machine:
> it asks a supposed decider if it halts, and then does the opposite,
> making the decider wrong.
> Other deciders than the one it calls can simulate or decide it.
> D has exactly one fixed behaviour, like all TMs.
> The behaviour of H should change because of the recursion, but it
> has to make up its mind. D goes "I'm gonna do the opposite of what
> you said".
>
>>>> If you cannot even ask H the question that you want answered,
>>>> then this is not an actual case of undecidability. H does
>>>> correctly answer the actual question that it was actually asked.
> That would be the wrong question.
>>> D(D) is a valid input. H should be universal.
>> Likewise the Liar Paradox *should* be true or false,
>> except for the fact that it isn't.
>
>>>> When H and D are defined to have a pathological relationship,
>>>> then H cannot even be asked about the behavior of D(D).
>>> H cannot give a correct ANSWER about D(D).
>> H cannot be asked the right question.
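The sum analogy above is at least easy to check mechanically: a
call's arguments fix its result. A minimal sketch, using nothing
beyond the one-line sum quoted in the post:

#include <stdio.h>

int sum(int x, int y) { return x + y; }  /* the function quoted above */

int main(void)
{
    printf("%d\n", sum(3, 4));  /* prints 7; no execution of sum(3,4)
                                   can yield 5 + 6 == 11 */
    return 0;
}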
> Then H would be faulty.
>
>>>>>> You cannot simply wave your hands to get H to know what
>>>>>> question is being asked.
>>> H doesn't need to know. It is programmed to answer a fixed
>>> question, and the input completely determines the answer.
>> The fixed question that H is asked is:
>> "Can your input terminate normally?"
> Does the input terminate, rather.
>> The answer to that question is: NO.
> If that were so, this answer would be given to D, since it asks H
> about itself. In this case, it would actually terminate. If H said
> yes, it would go into an infinite loop.
>
>>>>>> It can't even be asked. You said that yourself.
>>>>>> The input to H(D,D) cannot be transformed into the behavior of
>>>>>> D(D).
>>> It can, just not by H.
>> How crazy is it to expect a correct answer to a different question
>> than the one you asked?
>
>>>>> No, we can't make an arbitrary problem solver, since we can
>>>>> show there are unsolvable problems.
>>>> That is a whole other issue.
>>>> The key subset of this is that the notion of undecidability is
>>>> a ruse.
>>> A ruse for what?
> There are undecidable problems. Like halting.
>
>>>>> Nothing says we can't encode the Halting Question into an
>>>>> input.
>>>> If there is no mapping from the input to H(D,D) to the behavior
>>>> of D(D), then H cannot possibly be asked about behavior that it
>>>> cannot possibly see.

No, it cannot even be asked, and the technical details of this are
over everyone's head. Computing the mapping from the input to H(D,D)
to the behavior of D(D) has nothing to do with Google Maps.

========== REMAINDER OF ARTICLE TRUNCATED ==========