Path: news.eternal-september.org!eternal-september.org!feeder3.eternal-september.org!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: Richard Damon <richard@damon-family.org>
Newsgroups: comp.theory,sci.logic,sci.math
Subject: Re: ChatGPT totally understands exactly how I refuted the
 conventional halting problem proof technique
Date: Sun, 22 Jun 2025 17:09:35 -0400
Organization: i2pn2 (i2pn.org)
Message-ID: <e7ab59608b773eeb97d9790de6c1d9dedfbf1774@i2pn2.org>
References: <1037cr1$1aja4$1@dont-email.me>
 <0a85c405ce0dc53846598ce806361b9fa2201599@i2pn2.org>
 <10394vg$j159$3@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Sun, 22 Jun 2025 21:18:32 -0000 (UTC)
Injection-Info: i2pn2.org;
	logging-data="1617367"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="diqKR1lalukngNWEqoq9/uFtbkm5U+w3w6FQ0yesrXg";
User-Agent: Mozilla Thunderbird
In-Reply-To: <10394vg$j159$3@dont-email.me>
Content-Language: en-US
X-Spam-Checker-Version: SpamAssassin 4.0.0

On 6/22/25 10:46 AM, olcott wrote:
> On 6/22/2025 6:23 AM, Richard Damon wrote:
>> On 6/21/25 6:48 PM, olcott wrote:
>>> int DD()
>>> {
>>>    int Halt_Status = HHH(DD);
>>>    if (Halt_Status)
>>>      HERE: goto HERE;
>>>    return Halt_Status;
>>> }
>>>
>>> https://chatgpt.com/s/t_6857335b37a08191a077d57039fa4a76
>>> ChatGPT agrees that I have correctly refuted every
>>> halting problem proof technique that relies on the above
>>> pattern.
>>>
>>
>> Just shows your natural stupidity in believing a lie you convinced the 
>> artificial intelligence to say.
>>
>> Sorry, I guess your natural stupidity extends to the failure to 
>> understand that AIs are programmed to have no problem with LYING, but 
>> are really just glorified "Yes Men", who will say what your prompt 
>> directs them to say.
>>
> 
> MIT Technology Review
> https://www.technologyreview.com/2024/03/04/1089403/large-language-models-amazing-but-nobody-knows-why/
> 


Which doesn't claim that their answers are without error.

You are just too naturally stupid to understand that limitation.
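
For reference, here is a minimal, self-contained sketch of the conventional 
diagonal construction that the DD() pattern quoted above instantiates. The 
HHH body below is only a placeholder assumption so the sketch compiles; it 
is not anyone's actual decider. The point of the construction is that 
whichever answer HHH gives about DD, DD's own behavior contradicts it.

#include <stdio.h>

typedef int (*func_t)(void);

/* Placeholder for the claimed halt decider: supposed to return 1 if P()
   halts and 0 if P() runs forever. Any fixed answer suffices here,
   because the contradiction below works either way. */
int HHH(func_t P)
{
    (void)P;
    return 1;   /* assume it answers "halts" */
}

int DD(void)
{
    int Halt_Status = HHH(DD);   /* ask the decider about DD itself     */
    if (Halt_Status)             /* decider said "DD halts" ...         */
        for (;;) { }             /* ... so DD loops forever instead     */
    return Halt_Status;          /* decider said "DD loops", so DD halts */
}

int main(void)
{
    /* Only query HHH about DD; running DD() itself would loop forever
       with the placeholder answer above. Whatever HHH reports, DD does
       the opposite, which is the contradiction the conventional halting
       problem proofs rely on. */
    printf("HHH(DD) = %d\n", HHH(DD));
    return 0;
}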