From: olcott <polcott333@gmail.com>
Newsgroups: comp.theory,sci.logic
Subject: Re: How Partial Simulations correctly determine non-halting ---Mike Terry Error
Date: Wed, 5 Jun 2024 17:07:20 -0500
Organization: A noiseless patient Spider
Message-ID: <v3qnip$152b5$1@dont-email.me>
References: <v3j20v$3gm10$2@dont-email.me>
 <J_CdnTaA96jxpcD7nZ2dnZfqnPudnZ2d@brightview.co.uk>
 <87h6eamkgf.fsf@bsb.me.uk> <v3kcdj$3stk9$1@dont-email.me>
 <v3l7uo$13cp$8@dont-email.me> <v3lcat$228t$3@dont-email.me>
 <v3mq9j$chc3$1@dont-email.me> <v3mrli$chc4$1@dont-email.me>
 <_gWdnbwuZPJP2sL7nZ2dnZfqn_GdnZ2d@brightview.co.uk>
 <v3nkqr$h7f9$3@dont-email.me> <v3oeh5$jthg$2@dont-email.me>
 <v3of8e$lirl$1@dont-email.me> <v3ofld$jthh$1@dont-email.me>
 <v3oh8l$pi6u$3@dont-email.me> <v3ohkh$jthg$4@dont-email.me>
 <87frtr6867.fsf@bsb.me.uk>
 <p5ydnbxV2pHtF_37nZ2dnZfqn_WdnZ2d@brightview.co.uk>
 <v3q4ut$11tp3$1@dont-email.me>
 <sYqdnVoAec4XV_37nZ2dnZfqn_ednZ2d@brightview.co.uk>
In-Reply-To: <sYqdnVoAec4XV_37nZ2dnZfqn_ednZ2d@brightview.co.uk>
Content-Language: en-US

On 6/5/2024 3:28 PM, Mike Terry wrote:
> On 05/06/2024 17:49, olcott wrote:
>> On 6/5/2024 10:55 AM, Mike Terry wrote:
>>> On 05/06/2024 10:38, Ben Bacarisse wrote:
>>>> John Smith <news2@immibis.com> writes:
>>>>
>>>>> Then increase the stack space until it doesn't run out. Turing
>>>>> machines can't run out of stack space unless you programmed them
>>>>> wrong.
>>>>
>>>> A Turing machine can't run out of stack space because there is no
>>>> stack. That's like saying a polynomial has limited precision if
>>>> you evaluate it badly. It's the evaluation that's wrong, not the
>>>> polynomial. I know what you mean, but having talked to maths
>>>> cranks on Usenet for years, one thing I would caution against is
>>>> being slowly sucked into the cranks' bad use of technical terms.
>>>>
>>>
>>> Wandering slightly: also, PO's H/HH/etc. (running under x86utm)
>>> requires minimal stack space to run - probably just a few KB would
>>> suffice, /regardless of recursion depth/. Given that PO allocates
>>> 64KB for the stack, this is not going to be a problem.
>>>
>>> The reason recursion depth is not a factor is that H /simulates/ D
>>> rather than calling it. The simulation does not consume H's stack
>>> space, and neither do nested simulations - they all have their own
>>> separately allocated stacks.
>>>
>>> PO's design uses a single 32-bit address space which must hold ALL
>>> levels of nested recursion, so obviously something has to fail as
>>> nesting levels grow. That would be an "out of memory" failure when
>>> trying to acquire the resources to create a new simulation level,
>>> i.e. a /heap/ error rather than "out of stack".
>>>
>>> In practice his system would become unusable long before then due
>>> to the CPU requirements of simulating instructions at each
>>> recursion level - these grow exponentially, with a factor of
>>> (something like) 200 between levels. So at a recursive simulation
>>> depth of just 10, a single instruction would take something like
>>> 100,000,000,000,000,000,000,000 outer-level instructions to
>>> simulate, which is just impractical.
>>>
>>>
>>> Mike.
>>>
>>
>> Thank you very much for being the voice of correct reasoning here.
>> I just figured out how to handle your objection to my HH code.
>>
>> My idea was to have the executed HH pass a portion of what is
>> essentially its own Turing Machine tape down to the simulated
>> instances of HH. It does do this now.
>
> What happens now is that there is one single trace array in global
> memory, and all simulations append simulated instructions to that
> one array

YES

> and can read array entries written by other simulation levels.

NO, they have not been doing this, and I will encode it so that this
is impossible. I knew about this issue about two years before you
recently raised it. I only found out about this issue from comp.theory
respondents.

> That fundamentally breaks the concept of a simulation exactly
> matching the behaviour of the outer (unsimulated) computation. More
> details below...
>

I have known this for at least two years.

> [but the problem is I don't believe you really understand what the
> requirements of simulation are, because if you did you simply
> wouldn't have written code like that in the first place...]
>

I was emulating a UTM passing a portion of its own tape down to its
simulated instances of itself. That aspect always seemed legit.

> And calling it "a portion of what is essentially its own TM tape" is
> just obfuscation to try to hide the obvious breaking of rules.
>

Not if the simulated instances never access the data from the outer
simulations. I will encode this to make it impossible. I will encode
u32 start_tape_address as the position in execution_trace where the
simulated HH begins, so that it will never look further back.

typedef struct Decoded
{
  u32 Address;
  u32 ESP;                // Current value of ESP
  u32 TOS;                // Current value of Top of Stack
  u32 NumBytes;
  u32 Simplified_Opcode;
  u32 Decode_Target;
} Decoded_Line_Of_Code;

execution_trace is essentially a std::vector of Decoded_Line_Of_Code

>>
>> The key objection that you seemed to have is that it can't pass
>> any information to its simulated instances that they can use in
>> their own halt status decision.
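A minimal sketch of the start_tape_address idea described above. This is not the actual x86utm code; the window type and helper names (TraceView, trace_append, trace_read) are hypothetical. Each simulation level is handed only the index where its own portion of the trace begins, and any read below that index is rejected:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned int u32;

typedef struct Decoded
{
  u32 Address;
  u32 ESP;                // Current value of ESP
  u32 TOS;                // Current value of Top of Stack
  u32 NumBytes;
  u32 Simplified_Opcode;
  u32 Decode_Target;
} Decoded_Line_Of_Code;

/* One global trace shared by all simulation levels, as in the post. */
#define TRACE_MAX 1024
static Decoded_Line_Of_Code execution_trace[TRACE_MAX];
static u32 trace_len = 0;

/* Hypothetical per-level view: a simulated HH is handed only the
   start index where its own portion of the trace begins. */
typedef struct {
    u32 start_tape_address;   // first entry this level may read
} TraceView;

static void trace_append(const Decoded_Line_Of_Code *line) {
    assert(trace_len < TRACE_MAX);
    execution_trace[trace_len++] = *line;
}

/* Returns NULL when index i falls before this level's window, so a
   simulated instance can never read outer-simulation entries. */
static const Decoded_Line_Of_Code *trace_read(const TraceView *v, u32 i) {
    if (i < v->start_tape_address || i >= trace_len)
        return NULL;
    return &execution_trace[i];
}
```

Under this scheme the "never look further back" rule is enforced mechanically rather than by convention, which is what the objection asks for.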
>
> Partly right but woefully incomplete. If you added "...or to modify
> its logical behaviour in any way" that would be a good summing up.
>

Either the simulated instances must somehow know not to allocate
memory for what is essentially a portion of the master UTM tape that
is passed down to them, or the master UTM must somehow find the UTM
tape of these simulated instances.

> Your specific focus on just "their own halt status decision" is only
> one part of the picture - my guess would be you know "the rules" but
> for whatever reason the only solution you can come up with breaks
> the rules, so you try to convince yourself that that's ok by
> changing the rules.
>

I am having you analyze the details. I was basically gobsmacked that a
TM cannot do any damn thing that any C program can do. It has been at
least three years since then.

>
> More specifically, the behaviour of a simulated HH must exactly[*]
> match the behaviour of the outer (unsimulated) HH.
>
> Behaviour means all of the following:
>
> - the instruction path of the simulated HH exactly matches the
>   instruction path of the outer HH. [Currently HH has different code
>   paths for simulated/outer execution! It is not enough that you
>   /believe/ this will not affect "the result".]
>

They are exactly the same except that the inner ones remain stuck in
recursive simulation until the outer one stops simulating its DD.

*This unequivocally proves the behavior of DD correctly simulated by HH*
https://liarparadox.org/DD_correctly_simulated_by_HH_is_Proven.pdf

> - registers and all data and data locations used by the simulation
>   must match the registers and data and data addresses used by the
>   outer HH. [*]
>
> - no backdoor channels where a nested simulation feeds data to a
>   nesting simulation or vice versa. Obviously an outer simulation
>   can "scrape" data from its directly simulated computation.
>
>
> [*] exactly match??
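Mike's first requirement - that the simulated instruction path exactly match the outer one - amounts to a simple element-by-element comparison of two recorded address sequences. A sketch, assuming the paths have already been recorded and relocated to a common base (since each level has its own stack and heap addresses):

```c
#include <stdbool.h>
#include <stddef.h>

/* Two recorded instruction-address paths "exactly match" only if
   they have the same length and the same address at every step.
   Addresses are assumed already normalized to a common base. */
static bool paths_match(const unsigned *outer, size_t outer_len,
                        const unsigned *inner, size_t inner_len) {
    if (outer_len != inner_len)
        return false;
    for (size_t i = 0; i < outer_len; i++)
        if (outer[i] != inner[i])
            return false;
    return true;
}
```

By this criterion, any point where HH branches differently depending on whether it is the outer or a simulated instance shows up as a mismatch, regardless of whether the final result is affected.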
> A big headache is that you've chosen a model where all the
> simulations share one 32-bit address space. So simulation working
> data will be at different locations to the outer working data. IF
> your code acts "responsibly" this can be catered for. E.g. simulated
> data on stacks will be at different addresses for each simulation -
> but still the location of the data within the stack and their values
> should /exactly/ [*] match across all simulations. Similarly a call
> to Allocate() will return different results in each simulation, but
> the location of all data used by the simulation relative to that
> allocation

========== REMAINDER OF ARTICLE TRUNCATED ==========