From: Don Y
Newsgroups: sci.electronics.design
Subject: Re: The end of stackoverflow?
Date: Fri, 10 May 2024 01:29:18 -0700
Organization: A noiseless patient Spider

On 5/10/2024 12:16 AM, Sylvia Else wrote:
> One often has to trawl through a number of suggested solutions, either
> because most of them are wrong (or at least wildly apocryphal),
> irrelevant, or because the same or similar symptoms can have many
> different underlying causes.
>
> I have to wonder whether a language model is really up to the task of
> filtering out the dross, while keeping the important parts.

Patterns repeated across answers are reinforced, so outliers tend not to
influence the model as much.

E.g., Carlin (?) did a bit in which he uttered something like, "Here's a
sentence no one has ever said before..."  You thus wouldn't expect an AI
to come up with such a sentence in "normal use" because its weights are
so low.
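
The reinforcement point can be sketched with a toy frequency model. This is purely illustrative (real LLMs learn weights over tokens, not raw answer counts, and the example strings are made up), but it shows how a continuation repeated across many answers dominates a lone outlier:

```python
from collections import Counter

# Toy illustration, NOT a real language model: treat each answer string
# as a "continuation" seen in training data. A pattern repeated in many
# answers swamps a one-off outlier in the resulting probabilities.
training_continuations = (
    ["the answer is 42"] * 50      # pattern reinforced by repetition
    + ["the answer is blue"] * 1   # a lone outlier
)

counts = Counter(training_continuations)
total = sum(counts.values())
probs = {s: c / total for s, c in counts.items()}

print(probs["the answer is 42"])    # ~0.98: the reinforced pattern
print(probs["the answer is blue"])  # ~0.02: the outlier, low weight
```

A sampler drawing from these probabilities would almost never emit the outlier, which is the sense in which "dross" with little repetition gets filtered.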