Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon
Newsgroups: comp.theory,sci.logic
Subject: Re: How Partial Simulations correctly determine non-halting ---Mike Terry Error !!!
Date: Wed, 5 Jun 2024 21:10:25 -0400
Organization: i2pn2 (i2pn.org)
Message-ID:
References: <87h6eamkgf.fsf@bsb.me.uk> <_gWdnbwuZPJP2sL7nZ2dnZfqn_GdnZ2d@brightview.co.uk> <87frtr6867.fsf@bsb.me.uk>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 6 Jun 2024 01:10:26 -0000 (UTC)
Injection-Info: i2pn2.org; logging-data="3314249"; mail-complaints-to="usenet@i2pn2.org"; posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To:
Content-Language: en-US
X-Spam-Checker-Version: SpamAssassin 4.0.0
Bytes: 8705
Lines: 179

On 6/5/24 8:48 PM, olcott wrote:
> On 6/5/2024 6:33 PM, Mike Terry wrote:
>> On 05/06/2024 18:25, John Smith wrote:
>>> On 5/06/24 18:49, olcott wrote:
>>>
>>>> My idea was to have the executed HH pass a portion of what is
>>>> essentially its own Turing Machine tape down to the simulated
>>>> instances of HH. It does do this now.
>>>
>>> In other words, your idea is to have an incorrect simulation.
>>>
>>> DD(DD) doesn't get passed a portion of any other machine's tape. If
>>> the simulation does, then the simulation is simulating something
>>> other than DD(DD). It might be simulating DD(DD,SecretParameter).
>>>
>>>> The only issue left that seems to not matter is that each simulated
>>>> HH needs to see if it must initialize its own tape.
>>>
>>> Why shouldn't it always initialize its own tape?
>>>
>>>> Since this
>>>> has no effect on its halt status decision I don't think it makes
>>>> any difference.
>>>>
>>>> I will double check everything to make sure there is no data passed
>>>> from the outer simulations to the inner simulations that can possibly
>>>> be used for any halt status decision by these inner simulated
>>>> instances of HH.
>>>
>>> There are actually two different ways that it's possible to
>>> understand your program.
>>>
>>> The inner DebugStep doesn't work the same as the outer DebugStep.
>>> Depending on which kind of viewpoint we use, we can say that the
>>> inner DebugStep is using secret information so it knows to work
>>> differently, or we can say that the inner DebugStep is normal (if
>>> directly executed), but it simulates differently from its direct
>>> execution (an incorrect simulation).
>>>
>>
>> I'd put it like this...
>>
>> The job of DebugStep is to simulate 1 instruction of the simulated
>> code.  It does that the same for both inner and outer simulations,
>> AFAICT.
>>
>> So that's instruction-level simulation, but there's more to correct
>> simulation than just correct instructions.  Easiest if I give a code
>> example:
>>
>> ---------------------------------------------
>> // declare Simulate function...
>> ...  // omitted to save me thinking about it :)
>>
>> static int depth = 0;
>>
>> void Target ()
>> {
>>    if (depth == 0)
>>    {
>>      printf (" First time here!!\n");
>>    }
>>    else
>>    {
>>      printf (" Been here before!!\n");
>>    }
>>    depth++;
>> }
>>
>> void main ()
>> {
>>    printf ("Direct from main:\n");
>>    Target ();   // direct execution
>>
>>    printf ("Simulated:\n");
>>    Simulate (Target);
>> }
>> ---------------------------------------------
>>
>> Output:
>>
>> Direct from main:
>>   First time here!!
>> Simulated:
>>   Been here before!!
>> ---------------------------------------------
>>
>> (Rhetorical:) How come the simulation of Target by Simulate (which
>> used DebugStep() internally) produced totally different output to the
>> direct execution?
>> It didn't even simulate the same instructions...
>>
>> If we examine a trace of all the instructions that DebugStep executed,
>> we'll see that every instruction it simulated was simulated correctly.
>> "So I've PROVED the simulation was correct," says PO.  Obviously more
>> is needed, which is that in a simulation environment where all
>> simulations share one single memory space, the functions being
>> simulated need to follow "simulation-compatibility rules", such as
>> "no use of mutable global variables" and so forth.
>>
>> If I asked "Which DebugStep call was incorrect then?" that would be
>> missing the point.  All the DebugSteps are correct at the instruction
>> simulation level - it is the wider environment that PO has created
>> through his design decisions where the problem lies.  [Similarly in
>> your post, both the inner and outer DebugStep are correct in terms of
>> each instruction they simulate.  But in the wider picture, the inner
>> simulation of HH is obviously Wrong.]
>>
>> Slightly tricky point:  PO's larger aim here is to refute the Linz
>> (and similar) proofs, which are Turing Machine based.  I.e. the
>> subject to which simulation applies is the "machine" (+input).  He
>> translates this into a C world, where his machines /should/ logically
>> become a /program/.
>>
>> Imagine PO had implemented his simulation more logically by:
>> 1.  Creating a totally new 32-bit address space
>> 2.  Loading the code to be simulated
>> 3.  Setting up initial stack and parameter arguments etc.
>> 4.  Stepping the simulation as required, within that new environment
>>
>> Then the Wrong output above would become fine, i.e. we'd see
>>
>> ---------------------------------------------
>>
>> Output:
>>
>> Direct from main:
>>   First time here!!
>> Simulated:
>>   First time here!!
>> ---------------------------------------------
>>
>> All would be good, and the restrictions on use of mutable global data
>> and similar would not be required.
>> That's because what is being
>> simulated in this case is /the whole program/.  That's actually what
>> PO should want for his argument, because it relates back to the Linz
>> TMs.  But PO has chosen a design with a quirky simulation where the
>> subject of simulation is individual functions within some larger
>> shared context.  So he has only himself to blame for the compatibility
>> restrictions...
>>
>> Still, those restrictions are pretty much common sense if you get the
>> purpose of the simulation, right?  And PO would have reasoned up front
>> that appropriate restrictions would need to be followed, right?  No
>> chance...
>>
>> Mike.
>>
>
> It is DEAD OBVIOUS (Only if one pays attention)
> that this DD simulated by the same HH that it calls
> *DOES REMAIN STUCK IN RECURSIVE SIMULATION*
> Until the outer HH stops simulating it.

But if correctly simulated past that point, it will halt.

Since Halting is reaching a final state in ANY finite number of steps, non-halting requires showing that the input would run for an UNBOUNDED number of steps, which HH did not do.

You looked at a DIFFERENT input to try to show that, which means you effectively looked at a fifteen-story office building to learn about the properties of puppies.

========== REMAINDER OF ARTICLE TRUNCATED ==========