Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon <richard@damon-family.org>
Newsgroups: comp.theory
Subject: Re: The actual truth is that ...
Date: Sun, 13 Oct 2024 09:13:10 -0400
Organization: i2pn2 (i2pn.org)
Message-ID: <78c25d2bbe85422ea83f3e8f2db66a9843c26b51@i2pn2.org>
References: <ve39pb$24k00$1@dont-email.me> <ve6lsa$207d$2@news.muc.de> <ve8289$336c8$1@dont-email.me> <ve91hf$1ab4$1@news.muc.de> <7959253e834d2861b27ab7b3881619c2017e199f.camel@gmail.com> <ve9ju2$3ar6j$1@dont-email.me> <a965e0f825570212334deda4a92cd7489c33c687@i2pn2.org> <vea0mi$3cg0k$2@dont-email.me> <a4d0f7ff8798ce118247147d7d0385028ae44168@i2pn2.org> <veb557$3lbkf$2@dont-email.me> <2e6d8fc76e4e70decca1df44f49b338e61cc557e@i2pn2.org> <vebchp$3m87o$1@dont-email.me> <1071eb58637e27c9b2b99052ddb14701a147d23a@i2pn2.org> <vebeu2$3mp5v$1@dont-email.me> <58fef4e221da8d8bc3c274b9ee4d6b7b5dd82990@i2pn2.org> <vebmta$3nqde$1@dont-email.me> <99541b6e95dc30204bf49057f8f4c4496fbcc3db@i2pn2.org> <vedb3s$3g3a$1@dont-email.me> <vedibm$4891$2@dont-email.me> <72315c1456c399b2121b3fffe90b933be73e39b6@i2pn2.org> <veeh55$8jnq$2@dont-email.me> <0858883d537a7b088992d5eb2e066e13418f3843@i2pn2.org> <veeuje$bf9q$3@dont-email.me> <vefvkj$k1d1$1@dont-email.me> <vegcr5$lk27$4@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 7bit
Injection-Date: Sun, 13 Oct 2024 13:13:10 -0000 (UTC)
Injection-Info: i2pn2.org; logging-data="1852049"; mail-complaints-to="usenet@i2pn2.org"; posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <vegcr5$lk27$4@dont-email.me>
Content-Language: en-US
X-Spam-Checker-Version: SpamAssassin 4.0.0
Bytes: 5219
Lines: 75

On 10/13/24 8:01 AM, olcott wrote:
> On 10/13/2024 3:16 AM, Mikko wrote:
>> On 2024-10-12 22:52:30 +0000, olcott said:
>>
>>> On 10/12/2024 5:12 PM, joes wrote:
>>>> On Sat, 12 Oct 2024 14:03:01 -0500, olcott wrote:
>>>>> On 10/12/2024 9:43 AM, Richard Damon wrote:
>>>>>> On 10/12/24 6:17 AM, olcott wrote:
>>>>>>> On 10/12/2024 3:13 AM, Mikko wrote:
>>>>>>>> On 2024-10-11 21:13:18 +0000, joes said:
>>>>>>>>> On Fri, 11 Oct 2024 12:22:50 -0500, olcott wrote:
>>>>>>>>>> On 10/11/2024 12:11 PM, Richard Damon wrote:
>>>>>>>>>>> On 10/11/24 11:06 AM, olcott wrote:
>>>>>>>>>>>> On 10/11/2024 9:54 AM, Richard Damon wrote:
>>>>>>>>>>>>> On 10/11/24 10:26 AM, olcott wrote:
>>>>>>>>>>>>>> On 10/11/2024 8:05 AM, Richard Damon wrote:
>>>>>>>>>>>>>>> On 10/11/24 8:19 AM, olcott wrote:
>>>>>>>>>>>>>>>> On 10/11/2024 6:04 AM, Richard Damon wrote:
>>>>>>>>>>>>>>>>> On 10/10/24 9:57 PM, olcott wrote:
>>>>>>>>>>>>>>>>>> On 10/10/2024 8:39 PM, Richard Damon wrote:
>>>>>>>>>>>>>>>>>>> On 10/10/24 6:19 PM, olcott wrote:
>>>>>>>>>>>>>>>>>>>> On 10/10/2024 2:26 PM, wij wrote:
>>>>>>>>>>>>>>>>>>>>> On Thu, 2024-10-10 at 17:05 +0000, Alan Mackenzie wrote:
>>>>>>>>>>>>>>>>>>>>>> Mikko <mikko.levanto@iki.fi> wrote:
>>>>
>>>>>>> When HHH is an x86 emulation based termination analyzer then each
>>>>>>> DDD emulated by any HHH that it calls never returns.
>>>>>> Nope. Even software engineering treats the function HHH as part of
>>>>>> the program DDD, and termination analysis as looking at properties
>>>>>> of the whole program, not a partial emulation of it.
>>>>> So if we ask the exact question, can DDD emulated by any HHH reach
>>>>> its own return statement, they would answer the counter-factual yes?
>>>> Yes. DDD reaches it, so a purported simulator should as well.
>>>> Therefore HHH is not a simulator.
>>>>
>>> I tried to tell ChatGPT the same thing several times
>>> and it would not accept this.
>>> https://chatgpt.com/share/6709e046-4794-8011-98b7-27066fb49f3e
>>>
>>> Although LLM systems are prone to lying: if it told a lie
>>> there would be an error that could be found in its reasoning.
>>
>> Not necessarily in the reasoning.
The error could also be in the input >> material. >> > > Some cases may be too complex to verify. When all of its > premises are true and it only applies truth preserving > operations to these premises then its conclusion is > necessarily correct. > > https://chatgpt.com/share/6709e046-4794-8011-98b7-27066fb49f3e > > When you click on the link and try to explain how HHH must > be wrong when it reports that DDD does not terminate because > DDD does terminate it will explain your mistake to you. > > Which I DID as you seem to ignore, and it tries to argue that while DDD does in reality Halt, we need to theoretically let HHH give the wrong answer becuase we require it to give an answer. In other words, you taught the AI the LYING (the giving of an answer that is KNOW to be at least possibly incorrect) is ok, That is NOT a correct statement in logic. Your logic is built on the FALSE premise that the job must be able to be done, and thus you invent the LIE that it is ok to LIE if you nee to. This just shows that you beleive that lying is ok, when it isn't.