Subject: Re: Paris : In Rush For Profits, AI Safety Issues Are Ignored
Newsgroups: comp.os.linux.misc,comp.os.linux.advocacy
References: <R3idnTt4H_lt8DD6nZ2dnZfqnPWdnZ2d@earthlink.com>
 <m15k6pFlg2rU1@mid.individual.net>
From: "WokieSux282@ud0s4.net" <WokieSux283@ud0s4.net>
Organization: WokieSux
Date: Thu, 13 Feb 2025 03:50:11 -0500
Message-ID: <5cGcnReFBprdLzD6nZ2dnZfqnPudnZ2d@earthlink.com>

On 2/13/25 2:10 AM, rbowman wrote:
> On Wed, 12 Feb 2025 22:58:30 -0500, WokieSux282@ud0s4.net wrote:
>
>> Neural networks can likely do "someone in there" even better,
>> eventually. At the moment LLMs get most of the funding so NNs are a
>> bit behind the curve. New/better hardware and paradigms are needed
>> but WILL eventually arrive.
>
> So far there is nobody in there for CNNs. You know all the pieces and
> they don't magically start breathing when you put them together. It is
> true the whole system is a bit of a black box but it is describable.

Well, I agree about "CNNs" :-)

As for LLMs ... dunno. Get enough stuff going in there and something
very hard, maybe impossible, to distinguish from "someone in there"
may be realized. Then what do we do - ruthlessly pull the plug ?

> The problem I see is already starting -- turning them into weapons and
> letting them run autonomously. One of the 'hello world' applications
> is training a NN on a huge number of labeled photos of cats and dogs
> and the models perform very well.

NNs - kinda modeling real-life neurons - will eventually result in
"someone in there" ... maybe more recognizable than anything the LLMs
produce.

As for weapons - that's well in progress now, with China ahead of the
game according to various reports.

Fully autonomous weapons are game-changers. Telling 'em "ID Enemy.
KILL Enemy" is about all it'd take. In theory such devices could be
extremely fast, strong, and accurate. Remember the Hunter-Killer
drones from "Terminator" - that sort of thing (likely a bit smaller),
and they would NOT miss shots.

> The metrics are sort of a truth table, with false negatives, false
> positives, and correct identification. It's a stochastic process so
> you're looking at 'good enough', maybe 97%. Say I hate dogs, set up a
> camera in the yard, and shoot all the dogs. A few dogs are going to
> slide and I'll kill a few cats.

Oh well ... a few friendly-fire casualties are expected ...
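Quick back-of-envelope on that 97% figure - a toy Python sketch where
every number (200 dogs, 200 cats, 97% both ways) is invented purely
for scale:

  # Toy numbers for rbowman's "good enough, maybe 97%" yard gun.
  # Every figure here is an invented assumption, for illustration only.

  dogs, cats = 200, 200        # animals wandering through the yard
  sensitivity = 0.97           # P(flagged "dog" | actually a dog)
  specificity = 0.97           # P(flagged "cat" | actually a cat)

  dogs_shot   = dogs * sensitivity         # true positives
  dogs_missed = dogs - dogs_shot           # false negatives - they "slide"
  cats_shot   = cats * (1 - specificity)   # false positives

  print(f"dogs shot:   {dogs_shot:.0f}")   # ~194
  print(f"dogs missed: {dogs_missed:.0f}") # ~6
  print(f"cats shot:   {cats_shot:.0f}")   # ~6

Six dead cats per couple hundred animals - and that's with the
classifier performing exactly as well as advertised.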
> Now hand this to the military. The AI decides it sees a terrorist and
> a Reaper puts a Hellfire missile up his ass. You get a few school
> kids, but that's life.

Yep. Some may freak about that, but that's how it goes. It's doubly
true for people like Hamas, who kinda literally stacked up babies as
sandbags.

> The Israelis may already be doing something like that or maybe they
> just randomly kill people, who knows?
>
> Give AI enhanced facial recognition to the cops -- won't that be fun.
> Enter 'Minority Report'.

Oh, there ARE very very dark possibilities ..... Coming soon to a
street near you.

As for 'Minority Report', they ARE training AIs to "identify emotional
states" from various cues. In theory the bots will spot your malicious
intent, perhaps before you even realize you're feeling malicious. "The
Computer Said So" is all the justification The State needs ...

The "a few mistakes are OK" logic WILL be applied.
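And to put numbers on "a few mistakes are OK" at street scale -
another toy Python sketch, all figures invented (a million faces
scanned, 100 real suspects among them, 99% accurate both ways):

  # Base-rate arithmetic for cop-shop face recognition.
  # Every figure is an invented assumption, for illustration only.

  scanned       = 1_000_000   # faces the street cameras look at
  real_suspects = 100         # actual wanted people in that crowd
  hit_rate      = 0.99        # P(flagged | suspect)
  false_rate    = 0.01        # P(flagged | innocent)

  hits   = real_suspects * hit_rate                # ~99 real hits
  alarms = (scanned - real_suspects) * false_rate  # ~10,000 innocents

  print(f"suspects flagged:  {hits:.0f}")
  print(f"innocents flagged: {alarms:.0f}")
  # Roughly 99 real suspects buried under ~10,000 false flags -
  # but "The Computer Said So".

At those base rates nearly every "hit" the cops act on is a mistake.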