Path: ...!news.misty.com!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: joes <noreply@example.com>
Newsgroups: comp.theory
Subject: Re: H(D,D) cannot even be asked about the behavior of D(D)
Date: Sat, 15 Jun 2024 11:34:39 -0000 (UTC)
Organization: i2pn2 (i2pn.org)
Message-ID: <v4ju8f$222a$1@i2pn2.org>
References: <v45tec$4q15$1@dont-email.me> <v4apjs$19rnv$1@dont-email.me>
 <v4arp0$1a7uo$1@dont-email.me> <v4b1c3$3nf9n$3@i2pn2.org>
 <v4b50m$1f89t$5@dont-email.me> <v4c12r$3oop0$3@i2pn2.org>
 <v4cjl7$1o4b4$1@dont-email.me> <v4d991$3qbnc$1@i2pn2.org>
 <v4da12$1sioe$1@dont-email.me> <v4dbmf$3qbnc$3@i2pn2.org>
 <v4dcd6$1sioe$3@dont-email.me> <v4df0h$3qbnd$1@i2pn2.org>
 <v4dhf5$1tsdf$2@dont-email.me> <v4dja1$3qbnd$5@i2pn2.org>
 <v4djhf$1tsdf$6@dont-email.me> <v4dk7b$3qbnc$8@i2pn2.org>
 <v4dl3b$225kb$1@dont-email.me> <v4dn5u$3qbnd$8@i2pn2.org>
 <v4dop4$22o4a$2@dont-email.me> <v4dq07$3qbnc$12@i2pn2.org>
 <v4dqq0$2353n$1@dont-email.me> <v4el9m$3rsd6$3@i2pn2.org>
 <v4f3ec$2akmh$2@dont-email.me> <v4g65a$3tn6q$1@i2pn2.org>
 <v4g6vr$2ic0g$1@dont-email.me> <v4gc0b$3tn6r$6@i2pn2.org>
 <v4gcjc$2msea$1@dont-email.me> <v4geab$3tn6r$8@i2pn2.org>
 <v4gg0s$2nim8$2@dont-email.me> <v4ha63$3v16r$2@i2pn2.org>
 <v4hfq9$2sdqr$5@dont-email.me> <v4hp3r$3viml$1@i2pn2.org>
 <v4hv85$3021v$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 15 Jun 2024 11:34:39 -0000 (UTC)
Injection-Info: i2pn2.org; logging-data="67658"; mail-complaints-to="usenet@i2pn2.org";
 posting-account="nS1KMHaUuWOnF/ukOJzx6Ssd8y16q9UPs1GZ+I3D0CM";
User-Agent: Pan/0.145 (Duplicitous mercenary valetism; d7e168a git.gnome.org/pan2)
X-Spam-Checker-Version: SpamAssassin 4.0.0
Bytes: 9925
Lines: 166

On Fri, 14 Jun 2024 12:39:15 -0500, olcott wrote:
> On 6/14/2024 10:54 AM, joes wrote:
>> On Fri, 14 Jun 2024 08:15:52 -0500, olcott wrote:
>>> On 6/14/2024 6:39 AM, Richard Damon wrote:
>>>> On 6/14/24 12:13 AM, olcott wrote:
>>>>> On 6/13/2024 10:44 PM, Richard Damon wrote:
>>>>>> On 6/13/24 11:14 PM, olcott wrote:
>>>>>>> On 6/13/2024 10:04 PM, Richard Damon wrote:
>>>>>>>> On 6/13/24 9:39 PM, olcott wrote:
>>>>>>>>> On 6/13/2024 8:24 PM, Richard Damon wrote:
>>>>>>>>>> On 6/13/24 11:32 AM, olcott wrote:

>>> When H and D have a pathological relationship to each other then
>>> H(D,D) is not being asked about the behavior of D(D). H1(D,D) has no
>>> such pathological relationship, thus D correctly simulated by H1 is
>>> the behavior of D(D).
What is H1 asked?

>> H is asked whether its input halts, and by definition should give the
>> (right) answer for every input.
> If we used that definition of decider then no human ever decided
> anything, because every human has made at least one mistake.
Yes. Humans are not machines.

> I use the term "termination analyzer" as a close fit. The term partial
> halt decider is more accurate yet confuses most people.
I have not seen you use that term before. You have not called it
partial; that was confusing.

>> D by construction is pathological to the supposed decider it is
>> constructed on. H1 can not decide D1. For every "decider" we can
>> construct an undecidable pathological program. No decider decides
>> every input.
> Parroting what you memorized by rote is not very deep understanding.
This was my own phrasing. Can you explain the halting problem proof?

> Understanding that the halting problem counter-example input, which
> does the opposite of whatever value the halt decider returns, is
> merely the Liar Paradox in disguise is a much deeper understanding.
I know that.

>>> H(D,D) is not even being asked about the behavior of D(D)
>> It can't be asked any other way.
> It can't be asked in any way whatsoever because it is already being
> asked a different question.
Is that question "Do you answer yes?"?

>>>>> When H is a simulating halt decider you can't even ask it about
>>>>> the behavior of D(D).
>>>>> You already said that it cannot map its input to the behavior of
>>>>> D(D). That means that you cannot ask H(D,D) about the behavior of
>>>>> D(D).
>>>> Of course you can, because, BY DEFINITION, that is the ONLY thing
>>>> it does with its inputs.
>>> That definition might be in textbooks, yet H does not and cannot
>>> read textbooks.
>> That is very confusing. H still adheres to textbooks.
> No, the textbooks have it wrong.

>>> The only definition that H sees is the combination of its algorithm
>>> with the finite string of machine language of its input.
>> H does not see its own algorithm; it only follows its internal
>> programming. A machine and input completely determine the behaviour,
>> whether that is D(D) or H(D, D).
> No H (with a pathological relationship to D) can possibly see the
> behavior of D(D).
That is not a problem with D, but with H not being total.

>>> It is impossible to encode any algorithm such that H and D have a
>>> pathological relationship and have H even see the behavior of D(D).
>> H literally gets it as input.
> The input DOES NOT SPECIFY THE BEHAVIOR OF D(D).
> The input specifies the behavior WITHIN THE PATHOLOGICAL RELATIONSHIP.
> It does not specify the behavior WITHOUT THE PATHOLOGICAL
> RELATIONSHIP.
There is no difference. If an H exists, it gives one answer. D then
does the opposite. H cannot change its answer. Other analysers can see
that H gives the wrong answer.

>>> You already admitted that there is no mapping from the finite string
>>> of machine code of the input to H(D,D) to the behavior of D(D).
>> Which means that H can't simulate D(D). Other machines can do so.
> H cannot simulate D(D) for the same reason that
>   int sum(int x, int y) { return x + y; }
> sum(3,4) cannot return the sum of 5 + 6.

>>>> And note, it only gives definitive answers for SOME input.
>>> It is my understanding that it does this much better than anyone
>>> else does. AProVE "symbolically executes the LLVM program".
>> Better doesn't cut it.
>> H should work for ALL programs, especially for D.
> You don't even have a slight clue about termination analyzers.
Why do you say that? A (partial) termination analyser doesn't disprove
the halting problem.

>>>>> H cannot be asked the question "Does D(D) halt?" There is no way
>>>>> to encode that. You already admitted this when you said the finite
>>>>> string input to H(D,D) cannot be mapped to the behavior of D(D).
>> H answers that question for every other input. The question "What is
>> your answer?/Is your answer right?" is pointless and not even
>> computed by H.
> It is ridiculously stupid to think that the pathological relationship
> between H and D cannot possibly change the behavior of D, especially
> when it has been conclusively proven that it DOES CHANGE THE BEHAVIOR
> OF D.
D as a machine is completely specified and a valid Turing machine: it
asks a supposed decider whether it halts, and then does the opposite,
making the decider wrong. Other deciders than the one it calls can
simulate or decide it. D has exactly one fixed behaviour, like all TMs.
The behaviour of H should change because of the recursion, but it has
to make up its mind. D goes "I'm gonna do the opposite of what you
said".

>>> If you cannot even ask H the question that you want answered then
>>> this is not an actual case of undecidability. H does correctly
>>> answer the actual question that it was actually asked.
That would be the wrong question.

>> D(D) is a valid input. H should be universal.
> Likewise the Liar Paradox *should* be true or false,
> except for the fact that it isn't.

>>> When H and D are defined to have a pathological relationship then H
>>> cannot even be asked about the behavior of D(D).
>> H cannot give a correct ANSWER about D(D).
> H cannot be asked the right question.
Then H would be faulty.

>>>>> You can not simply wave your hands to get H to know what question
>>>>> is being asked.
>> H doesn't need to know.
>> It is programmed to answer a fixed question, and the input completely
>> determines the answer.
> The fixed question that H is asked is:
> Can your input terminate normally?
Does the input terminate, rather.

> The answer to that question is: NO.
If that were so, this answer would be given to D, since it asks H about
itself. In that case D would actually terminate. If H said yes, D would
go into an infinite loop.

>>>>> It can't even be asked. You said that yourself. The input to
>>>>> H(D,D) cannot be transformed into the behavior of D(D).
>> It can, just not by H.
> How crazy is it to expect a correct answer to a different question
> than the one you asked?

>>>> No, we can't make an arbitrary problem solver, since we can show
>>>> there are unsolvable problems.
>>> That is a whole other different issue. The key subset of this is
>>> that the notion of undecidability is a ruse.
>> A ruse for what? There are undecidable problems, like halting.

>>>> Nothing says we can't encode the Halting Question into an input.
>>> If there is no mapping from the input to H(D,D) to the behavior of
>>> D(D) then H cannot possibly be asked about behavior that it cannot
>>> possibly see.
>> It can be asked and be wrong.

>>>> What can't be done is create a program that gives the right answer
>>>> for all such inputs.
>>> Expecting a correct answer to the wrong question is only
>>> foolishness.
>> The question is just whether D(D) halts. Where do you disagree with
>> the halting problem proof?
> There are several different issues; the key one of these issues [...]
> is that there is something wrong with it along the lines of it being
> isomorphic to the Liar Paradox.
"Something along the lines"? Can you point to the step where you
disagree?

========== REMAINDER OF ARTICLE TRUNCATED ==========
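For reference, the diagonal construction the thread keeps circling ("for
every decider we can construct a pathological program") can be sketched
in a few lines of runnable Python. The names (`make_D`, `says_halts`,
`says_loops`) are illustrative stand-ins, not olcott's actual H or D:

```python
def make_D(decider):
    """Build the pathological program D for a given candidate decider.

    D(x) asks `decider` what D itself does on input x, then does the
    opposite: loop forever if the decider says "halts", halt if it
    says "loops"."""
    def D(x):
        if decider(x, x):      # decider claims D(D) halts...
            while True:        # ...so D loops forever instead
                pass
        return 0               # decider claims D(D) loops, so D halts
    return D

# Whatever fixed verdict a decider gives on (D, D), D(D) does the opposite.
says_halts = lambda prog, arg: True
says_loops = lambda prog, arg: False

D1 = make_D(says_halts)   # running D1(D1) would loop forever: verdict wrong
D2 = make_D(says_loops)
print(D2(D2))             # 0 -- D2(D2) halts, so says_loops is wrong too
```

Note that `D1(D1)` is deliberately never called: by construction it
would never return, which is exactly the sense in which `says_halts` is
wrong about it.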
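The point that D has exactly one fixed behaviour once H is fixed, and
that an outside analyser with no pathological tie to H can see H's
verdict is wrong, also admits a small runnable sketch (again with
hypothetical toy definitions of H and D, assuming H's answer on (D, D)
is "does not halt"):

```python
def H(prog, arg):
    """A fixed candidate decider; assume its verdict on (D, D) is
    False, i.e. "does not halt"."""
    return False

def D(x):
    if H(x, x):          # H says "halts": do the opposite and loop
        while True:
            pass
    return 0             # H says "loops": do the opposite and halt

# D's behaviour is completely determined by its text: H answers False,
# so D(D) takes the `return 0` branch and halts. An outside observer
# can simply run D(D) and compare with H's verdict:
observed_halt = (D(D) == 0)
print(observed_halt, H(D, D))   # True False -- H's verdict was wrong
```

Since H is a fixed program, it cannot "change its answer" when embedded
in D; only an analyser other than H gets to notice the mismatch.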
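The "partial halt decider / termination analyzer" terminology can also
be made concrete: an analyser that answers correctly on some inputs and
reports "unknown" on the rest, which is what makes it partial rather
than a counterexample to the halting theorem. A minimal sketch, under
the assumption (my convention, not from the thread) that analysed
programs are written as generators so each `yield` counts as one step:

```python
def partial_halts(prog, arg, budget=1000):
    """Toy partial termination analyser.

    Runs prog(arg) for at most `budget` steps (one step per yield).
    Returns True if the program halted within the budget, or None
    ("unknown") when the budget runs out -- it never wrongly claims
    that a program loops."""
    it = prog(arg)
    for _ in range(budget):
        try:
            next(it)
        except StopIteration:
            return True            # observed to halt
    return None                    # budget exhausted: no verdict

def halting(x):        # halts after one step
    yield x

def looping(x):        # never halts
    while True:
        yield

print(partial_halts(halting, 0))   # True
print(partial_halts(looping, 0))   # None
```

A total halt decider would have to replace every `None` with a correct
True/False, and the diagonal construction above shows no single program
can do that for all inputs.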