
Path: news.eternal-september.org!eternal-september.org!feeder3.eternal-september.org!panix!weretis.net!feeder9.news.weretis.net!news.nk.ca!rocksolid2!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon <richard@damon-family.org>
Newsgroups: comp.theory,sci.logic,sci.math
Subject: Re: ChatGPT totally understands exactly how I refuted the
 conventional halting problem proof technique
Date: Sun, 22 Jun 2025 21:59:33 -0400
Organization: i2pn2 (i2pn.org)
Message-ID: <451e0fd2f43e24f80c94d6b6beda4f483f34918d@i2pn2.org>
References: <1037cr1$1aja4$1@dont-email.me>
 <0a85c405ce0dc53846598ce806361b9fa2201599@i2pn2.org>
 <10394vg$j159$3@dont-email.me>
 <e7ab59608b773eeb97d9790de6c1d9dedfbf1774@i2pn2.org>
 <1039sm6$opkl$1@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Mon, 23 Jun 2025 02:08:58 -0000 (UTC)
Injection-Info: i2pn2.org;
	logging-data="1642366"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
X-Spam-Checker-Version: SpamAssassin 4.0.0
Content-Language: en-US
In-Reply-To: <1039sm6$opkl$1@dont-email.me>

On 6/22/25 5:30 PM, olcott wrote:
> On 6/22/2025 4:09 PM, Richard Damon wrote:
>> On 6/22/25 10:46 AM, olcott wrote:
>>> On 6/22/2025 6:23 AM, Richard Damon wrote:
>>>> On 6/21/25 6:48 PM, olcott wrote:
>>>>> int DD()
>>>>> {
>>>>>    int Halt_Status = HHH(DD);
>>>>>    if (Halt_Status)
>>>>>      HERE: goto HERE;
>>>>>    return Halt_Status;
>>>>> }
>>>>>
>>>>> https://chatgpt.com/s/t_6857335b37a08191a077d57039fa4a76
>>>>> ChatGPT agrees that I have correctly refuted every
>>>>> halting problem proof technique that relies on the above
>>>>> pattern.
>>>>>
>>>>
>>>> Just shows your natural stupidity in believing a lie you convinced 
>>>> the artificial intelligence to say.
>>>>
>>>> Sorry, I guess your natural stupidity extends to the failure to 
>>>> understand that AIs are programmed to have no problem with LYING, 
>>>> but are really just glorified "Yes Men" who will say whatever your 
>>>> prompt directs them to say.
>>>>
>>>
>>> MIT Technology Review
>>> https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
>>>
>>
>>
>> Which doesn't make a claim that their answers are without error.
>>
> 
> I was not rebutting this and you know it.
> 

It sure seemed like you were, since you were using it as a rebuttal to 
the fact that AIs LIE.

I guess you are just admitting that a non sequitur can be used as a 
refutation of an error being pointed out.

Not understanding how they work is not a good argument for treating 
them as reliable.

I guess you are really showing that you have no idea at all about what 
you are talking about.