Path: ...!fu-berlin.de!uni-berlin.de!individual.net!not-for-mail
From: rbowman <bowman@montana.com>
Newsgroups: comp.os.linux.advocacy
Subject: Re: GNU/Linux is the Empowerment of the PC. So What Do You Do?
Date: 10 Apr 2024 01:09:34 GMT
Lines: 38
Message-ID: <l7m76dFmp2rU1@mid.individual.net>
References: <17c37c35720e7ddc$20295$3326957$802601b3@news.usenetexpress.com>
	<uurc4n$1vjvo$1@dont-email.me> <uusdii$2848h$1@dont-email.me>
	<rge51j1uctfqpeg318tbnb0oevs9ck3j6u@4ax.com>
	<17c40c6de0c879aa$405$1351842$802601b3@news.usenetexpress.com>
	<uuun8r$9j84$1@solani.org>
	<17c417d563439fed$44$3121036$802601b3@news.usenetexpress.com>
	<uuvhaq$921c$1@solani.org>
	<17c43fa6cd7d5d2d$59197$3716115$802601b3@news.usenetexpress.com>
	<uv15ob$au8o$1@solani.org>
	<17c46a7f9e9e2dcb$3880$111488$802601b3@news.usenetexpress.com>
	<l7jf72F9s43U2@mid.individual.net> <uv3clk$7ecf$2@dont-email.me>
	<l7l8bsFi2mtU3@mid.individual.net> <uv4bpk$fa07$3@dont-email.me>
Mime-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Trace: individual.net xZVGnzpDeMAYWtdbEVBj2QIGzwik7tZ4MBMXd17pDyjlk6m4hk
Cancel-Lock: sha1:leU3JJb++eKwm4NxCHv30li1gzM= sha256:AIEFYBHleLZQTWrcmkZNLXYHUJerMaY6thBeVIUUyRs=
User-Agent: Pan/0.149 (Bellevue; 4c157ba)
Bytes: 3038

On Tue, 9 Apr 2024 17:26:12 -0400, Chris Ahlstrom wrote:

> I've seen some interesting books about neural logic circuits, but circa
> 1980.

Rumelhart and McClelland's 'Parallel Distributed Processing' was the text 
used in a seminar I took that summed up the state of the art. The 
perceptron's roots go back to McCulloch and Pitts' 1943 neuron model; 
Rosenblatt built the perceptron itself in the late '50s. It was a start 
but had problems, some of which Minsky and Papert pointed out. Hopfield 
came up with the network named after him. Rumelhart, Hinton, and Williams 
threw in backpropagation based on gradient descent in an '86 paper. 
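If you've never seen it spelled out, the whole recipe fits in a few 
lines of numpy: a toy two-layer net trained by gradient descent on XOR, 
the classic problem a lone perceptron can't solve (one of the Minsky and 
Papert points). My own illustrative sketch, not code from the paper:

# Toy backprop in the spirit of the '86 paper: one hidden layer of
# sigmoid units, squared-error loss, plain gradient descent on XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR targets

W1 = rng.normal(0, 1, (2, 4))   # input -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(0, 1, (4, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(10000):
    # forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    # backward pass: chain rule applied layer by layer
    d_out = (out - y) * out * (1 - out)    # error signal at output
    d_h = (d_out @ W2.T) * h * (1 - h)     # propagated back to hidden
    # gradient descent updates
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# usually settles near [0, 1, 1, 0]; backprop can still get stuck
# in a local minimum on XOR now and then, which was part of the fun
print(out.round(3))

Every matrix multiply in there is exactly the kind of operation the 
GPUs mentioned below chew through, just scaled up by many orders of 
magnitude.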

Anyway, read the current literature and you'll see a lot of what was 
discussed in the '80s. The problem back then was hardware. Nvidia is 
making big bucks producing $40,000+ GPUs that can handle all the matrix 
operations involved. That sort of power wasn't available in the '80s 
outside of a few supercomputers, and even those would have been breathing 
heavy. 

The other problem was the inevitable hype that oversold NNs. 

https://www.skynettoday.com/overviews/neural-net-history

It may be tl;dr, but if you're interested, it covers the history very 
well. It's telling that 'neural network' became a little toxic amid the 
stumbles over the years and was rebranded as 'machine learning'. Having 
watched the technology wax and wane for close to 40 years, I don't know 
whether the current boom will turn into a bust.

https://mitsloan.mit.edu/ideas-made-to-matter/why-neural-net-pioneer-geoffrey-hinton-sounding-alarm-ai

Hinton is entering "what have we wrought?" territory. McClelland still 
seems optimistic.

https://deepai.org/profile/james-mcclelland

Rumelhart died relatively young, before AI became a headline item.