Path: ...!fu-berlin.de!weretis.net!feeder8.news.weretis.net!reader5.news.weretis.net!news.solani.org!.POSTED!not-for-mail
From: Mild Shock <janburse@fastmail.fm>
Newsgroups: comp.lang.prolog
Subject: Long life learning also for real world philosophers? (Re: The
 anchoring problem in a real world philosopher)
Date: Thu, 8 Aug 2024 17:18:56 +0200
Message-ID: <v92nl0$vc0g$2@solani.org>
References: <b406aa35-c39b-46f3-862f-1cc4b75143ae@googlegroups.com>
 <1b7ce2bd-722b-4c2e-b853-12fc2232752bn@googlegroups.com>
 <v8on8h$psr9$1@solani.org> <v92mat$vban$1@solani.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Date: Thu, 8 Aug 2024 15:18:56 -0000 (UTC)
Injection-Info: solani.org;
	logging-data="1028112"; mail-complaints-to="abuse@news.solani.org"
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:91.0) Gecko/20100101
 Firefox/91.0 SeaMonkey/2.53.18.2
Cancel-Lock: sha1:gRoSdspmOYz4B7+QLx8X5Wt5tOM=
X-User-ID: eJwNysEBgDAIA8CVRJIA49jW7D+C3vuYCu2CKND0xKiiKWY7FlcaxbeDocEV0wW3eWsFj3Fp6+wUHv8d5wMQ+RRi
In-Reply-To: <v92mat$vban$1@solani.org>
Bytes: 4924
Lines: 115

But I wouldn’t give up so quickly: even
classical expert system theory of the ’80s
held that an expert system needs

a knowledge acquisition component somewhere.
The idea there was that the system would
simulate the expert’s dialog with the advice taker

Von Datenbanken zu Expertsystemen
https://www.orellfuessli.ch/shop/home/artikeldetails/A1051258432

and gather further information to complete
the advice. Still, this could be inspiring:
don’t stop at not knowing the Curry-Howard isomorphism,

go on and learn it, never stop! Just like here:

Never Gonna Give You Up
https://www.youtube.com/watch?v=dQw4w9WgXcQ
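
Coming back to the knowledge acquisition idea, here is a
minimal Prolog sketch of such a dialog, in the spirit of the
classic ’80s expert system shells: when the rule base lacks a
fact, the engine asks the advice taker instead of giving up.
The predicate names (advice/1, ask/2, known/3) and the toy
weather domain are only my own illustration, not taken from
any particular shell:

:- dynamic known/3.

% toy rule base
advice(take_umbrella)   :- ask(weather, raining).
advice(wear_sunglasses) :- ask(weather, sunny).

% ask/2 looks up an answer, or acquires it from the user
ask(Attr, Val) :-
    known(yes, Attr, Val), !.            % already confirmed
ask(Attr, Val) :-
    known(_, Attr, Val), !, fail.        % already denied
ask(Attr, Val) :-
    format('~w = ~w? (yes/no): ', [Attr, Val]),
    read(Answer),                        % the user types e.g.  no.
    assertz(known(Answer, Attr, Val)),   % remember the reply
    Answer == yes.

A session would then look like:

?- advice(X).
weather = raining? (yes/no): no.
weather = sunny? (yes/no): yes.
X = wear_sunglasses.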

Mild Shock wrote:
> Hi,
> 
> Let’s say one milestone in cognitive science
> is the concept of "bounded rationality".
> It seems LLMs have some traits that are also
> 
> found in humans. For example the anchoring effect
> is a psychological phenomenon in which an
> individual’s judgements or decisions
> 
> are influenced by a reference point or “anchor”
> which can be completely irrelevant. Like for example
> when discussing the Curry-Howard isomorphism with
> 
> a real world philosopher, one that might
> not know the Curry-Howard isomorphism but
> 
> https://en.wikipedia.org/wiki/Anchoring_effect
> 
> nevertheless be tempted to hallucinate some nonsense.
> One highly cited paper in this respect is Tversky &
> Kahneman 1974. R.I.P. Daniel Kahneman,
> 
> March 27, 2024. The paper is still cited today:
> 
> Artificial Intelligence and Cognitive Biases: A Viewpoint
> https://www.cairn.info/revue-journal-of-innovation-economics-2024-2-page-223.htm 
> 
> 
> Maybe using deeper and/or more careful reasoning,
> possibly backed up by a Prolog engine, could have
> a positive effect? It’s very difficult also for a
> 
> Prolog engine, since there is a trade-off
> between producing no answer at all if the software
> agent is too careful, and producing a wealth
> 
> of nonsense otherwise.
> 
> Bye
> 
> Mild Shock wrote:
>  >
>  > Well we all know about this rule:
>  >
>  > - Never ask a woman about her weight
>  >
>  > - Never ask a woman about her age
>  >
>  > There is a similar rule for philosophers:
>  >
>  > - Never ask a philosopher what is cognitive science
>  >
>  > - Never ask a philosopher what is formula-as-types
>  >
>  > Explanation: They like to be the champions of
>  > pure form like in this paper below, so they
>  > don’t like other disciplines dealing with pure
>  > form or even having pure form on the computer.
>  >
>  > "Pure” logic, ontology, and phenomenology
>  > David Woodruff Smith - Revue internationale de philosophie 2003/2
>  > 
>  > https://www.cairn.info/revue-internationale-de-philosophie-2003-2-page-21.htm
>  >
>  > Mild Shock wrote:
>> There are more and more papers of this sort:
>>
>> Reliable Reasoning Beyond Natural Language
>> To address this, we propose a neurosymbolic
>> approach that prompts LLMs to extract and encode
>> all relevant information from a problem statement as
>> logical code statements, and then use a logic programming
>> language (Prolog) to conduct the iterative computations of
>> explicit deductive reasoning.
>> [2407.11373] Reliable Reasoning Beyond Natural Language
>>
>> The future of Prolog is bright?
>>
>> Mild Shock wrote:
>>>
>>> Your new Scrum Master is here! - ChatGPT, 2023
>>> https://www.bbntimes.com/companies/ai-will-make-agile-coaches-and-scrum-masters-redundant-in-less-than-2-years 
>>>
>>>
>>> LoL
>>>
>>>> Thomas Alva Edison wrote on Tuesday, July 10, 2018 at 15:28:05 UTC+2:
>>>> Prolog Class Signpost - American Style 2018
>>>> https://www.youtube.com/watch?v=CxQKltWI0NA
>>
>
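
P.S.: Regarding the quoted paper (arXiv 2407.11373) and the
trade-off above between giving no answer and producing nonsense,
here is a rough sketch of how the Prolog side of such a pipeline
might look. The fact and the rule stand in for what the LLM
would extract from a problem statement; verdict/2 is only my own
illustration of the careful end of the trade-off, answering
unknown instead of guessing:

% statements "extracted" from a problem text
human(socrates).
mortal(X) :- human(X).

% cautious query interface: prove it, or admit ignorance
verdict(Goal, proved)  :- call(Goal), !.
verdict(Goal, unknown) :- \+ call(Goal).   % negation as failure

?- verdict(mortal(socrates), V).
V = proved.

?- verdict(mortal(zeus), V).
V = unknown.

Of course unknown here only means not derivable from what was
extracted, so the careful agent stays silent exactly where the
anchored philosopher would start to hallucinate.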