Path: ...!eternal-september.org!feeder2.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: olcott <polcott333@gmail.com>
Newsgroups: comp.theory
Subject: Re: Verified facts regarding the software engineering of DDD, HHH,
 and HHH1 --- TYPO
Date: Fri, 25 Oct 2024 17:11:14 -0500
Organization: A noiseless patient Spider
Lines: 104
Message-ID: <vfh522$3bkkv$3@dont-email.me>
References: <vf3eu5$fbb3$2@dont-email.me> <vf6mt7$136ja$2@dont-email.me>
 <ad43f56a12181e10f59b8a1e6220ed7989b6c973@i2pn2.org>
 <vf74oh$1a8oo$1@dont-email.me>
 <525ed75662589a150afa1ea268b199a166a7b98b@i2pn2.org>
 <vf8ads$1gkf5$1@dont-email.me>
 <13583474d25855e665daa98d91605e958f5cf472@i2pn2.org>
 <vf8i1g$1h5mj$4@dont-email.me>
 <45ea7a6da46453c9da62c1149fa1cf7739218c5f@i2pn2.org>
 <vf9qai$1scol$1@dont-email.me>
 <2a210ab064b3a8c3397600b4fe87aa390868bb12@i2pn2.org>
 <vf9sk6$1sfva$2@dont-email.me>
 <4c67570b4898e14665bde2dfdf473130b89b7dd4@i2pn2.org>
 <vfaqe7$21k64$1@dont-email.me>
 <f789d3ef27e3000f04feb3df4fc561c5da02381f@i2pn2.org>
 <vfc96p$2b6h0$1@dont-email.me>
 <74edcca800e7af74169cea47cb8f1715d3a5145f@i2pn2.org>
 <vfdihe$2kvn4$2@dont-email.me>
 <4abd6615b2730699ecc474d01b97163917e0b01d@i2pn2.org>
 <vfeqbs$2rugm$1@dont-email.me>
 <d7e366b37fa336944a72bb41a0e655076b6b335f@i2pn2.org>
 <vfg82q$36im7$4@dont-email.me>
 <0911038494da3f0613bcc3f31271820baa79a0b2@i2pn2.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sat, 26 Oct 2024 00:11:15 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="f00999e9e0e5447cf99e873d021c7ec9";
	logging-data="3527327"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1/Du5cIhqEmVgSVPXlU+tZ1"
User-Agent: Mozilla Thunderbird
Cancel-Lock: sha1:TKcj08XojcLAnixcZmB51BTMvzo=
Content-Language: en-US
X-Antivirus-Status: Clean
In-Reply-To: <0911038494da3f0613bcc3f31271820baa79a0b2@i2pn2.org>
X-Antivirus: Norton (VPS 241025-4, 10/25/2024), Outbound message
Bytes: 5884

On 10/25/2024 10:45 AM, Richard Damon wrote:
> On 10/25/24 9:56 AM, olcott wrote:
>> On 10/25/2024 7:27 AM, Richard Damon wrote:
>>> On 10/24/24 8:56 PM, olcott wrote:
>>>> On 10/24/2024 6:23 PM, Richard Damon wrote:
>>>>> On 10/24/24 9:36 AM, olcott wrote:
>>>>>> On 10/23/2024 9:48 PM, Richard Damon wrote:
>>>>>>> On 10/23/24 9:51 PM, olcott wrote:
>>>>>>>> ChatGPT does completely understand this.
>>>>>>>>
>>>>>>>
>>>>>>> But, it is just a stupid idiot that has been taught to repeat 
>>>>>>> what it has been told.
>>>>>>>
>>>>>>
>>>>>> It is a brilliant genius that seems to infallibly deduce all
>>>>>> of the subtle nuances of each of the consequences on the basis
>>>>>> of a set of premises.
>>>>>
>>>>> I guess you don't understand how "Large Language Models" work, do you.
>>>>>
>>>>> It has NO actual intelligence, or ability to "deduce" nuances, it is 
>>>>> just a massive pattern matching system.
>>>>>
>>>>> All you are doing is proving how little you understand about what 
>>>>> you are talking about.
>>>>>
>>>>> Remember, at the bottom of the page is a WARNING that it can make 
>>>>> mistakes. And feeding it LIES, like you do, is one easy way to do that.
>>>>>
>>>>
>>>> There is much more to this than your superficial
>>>> understanding.  Here is a glimpse:
>>>> https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
>>>>
>>>> The bottom line is that ChatGPT made no error in its
>>>> evaluation of my work when this evaluation is based on
>>>> pure reasoning. It is only when my work is measured
>>>> against arbitrary dogma that cannot be justified by
>>>> pure reasoning that ChatGPT and I seem incorrect.
>>>>
>>>> If we use your same approach to these things we could say that
>>>> ZFC stupidly fails to have a glimmering of understanding of
>>>> Naive set theory. From your perspective ZFC is a damned liar.
>>>>
>>>
>>> The article says no such thing.
>>>
>>
>> *large-language-models-amazing-but-nobody-knows-why*
>> They are much smarter than expected and can figure out
>> all kinds of things. Their original designers have no
>> idea how they do this.
>>
>>> In fact, it comments about the problem of "overfitting" where the 
>>> processing gets the wrong answers because it overgeneralizes.
>>>
>>> This is because the modeling process has no concept of actual 
>>> meaning, and thus of truth, only the patterns that it has seen.
>>>
>>> AIs don't "Reason"; they pattern match and compare.
>>>
>>> Note, the "arbitrary dogma" that you try to reject is the RULES 
>>> and DEFINITIONS of the system that you claim to be working in.
>>>
>>
>> How about we stipulate that the system that I am
>> working in is termination analysis for the x86 language,
>> as my system software says in its own name: x86utm.
> 
> But it doesn't actually know
> 

I said that the underlying formal mathematical system
of DDD/HHH <is> the x86 language.

DDD emulated by HHH within this formal system cannot
possibly reach its own "return" instruction even if
no one and nothing "knows" this.
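
For readers joining the thread here, a minimal sketch of the call
structure being discussed. The signatures are assumed from earlier
posts in this thread, not from this message; the real HHH in x86utm
is a simulating termination analyzer, and the stub below only stands
in for it so that the shape of DDD is visible:

typedef void (*ptr)();

int HHH(ptr P)       /* stub standing in for the real analyzer      */
{
  (void)P;
  return 0;
}

void DDD()
{
  HHH(DDD);          /* DDD calls its own analyzer on itself        */
  return;            /* the "return" instruction referred to above  */
}

int main()
{
  DDD();
  return 0;
}

The claim above is that when the real HHH emulates DDD, the emulated
DDD calls HHH(DDD), which begins another emulation of DDD, so the
emulated DDD never reaches its "return" unless HHH stops emulating.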

> 
> Just came across an interesting parody about LLMs, showing their issues.
> 
> https://www.youtube.com/watch?v=Bbfii4wz2ys&ab_channel=HonestAds
> 
> It seems you are just one of those taken in by it.
> 

I am not at all taken in by it.
I understand perfectly well that its review of the
succinct essence of my work is utterly unassailable.

Mike's review of the difference between
DDD emulated by HHH
and
DDD emulated by HHH1
according to the semantics of the x86 language
is pure bluster.
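
For completeness, a declaration-level sketch of what that difference
amounts to, assuming (per earlier posts in this thread, not this
message) that HHH1 is a copy of HHH at a different address and that
DDD calls HHH, never HHH1:

typedef void (*ptr)();

int HHH(ptr P);   /* the analyzer that DDD itself calls (assumed)     */
int HHH1(ptr P);  /* a copy of HHH that DDD does not call (assumed)   */

/* DDD emulated by HHH:  the emulated DDD calls the same analyzer
   that is doing the emulating.
   DDD emulated by HHH1: the emulated DDD still calls HHH, not HHH1,
   so HHH1 never emulates a call back to itself.                      */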


-- 
Copyright 2024 Olcott "Talent hits a target no one else can hit; Genius
hits a target no one else can see." Arthur Schopenhauer