From: Paul <nospam@needed.invalid>
Newsgroups: comp.os.linux.advocacy,alt.comp.os.windows-11
Subject: Re: Windows-on-ARM Laptop Is A “Frequently-Returned Item” On Amazon
Date: Sun, 23 Mar 2025 15:10:05 -0400
Organization: A noiseless patient Spider
Message-ID: <vrpmac$315f4$1@dont-email.me>
References: <vrnbks$rkfk$1@dont-email.me> <vrnqf9$18esr$1@dont-email.me> <WYRDP.1210933$_N6e.547203@fx17.iad> <a9rvtjtpp62h8hihlc3b9mmlbbf03nm885@4ax.com>
In-Reply-To: <a9rvtjtpp62h8hihlc3b9mmlbbf03nm885@4ax.com>
User-Agent: Ratcatcher/2.0.0.25 (Windows/20130802)

On Sun, 3/23/2025 7:17 AM, Joel wrote:
> CrudeSausage <crude@sausa.ge> wrote:
>> On 2025-03-22 10:08 p.m., Paul wrote:
>>> On Sat, 3/22/2025 5:55 PM, Lawrence D'Oliveiro wrote:
>
>>>> It’s clear Windows users have no clue about this Windows-on-ARM thing
>>>> that Microsoft keeps trying to push. They just expect their software
>>>> to work. But ARM-based Windows machines still require too many
>>>> workarounds and suffer too many limitations, and the users are having
>>>> great difficulty seeing the point to them.
>>>>
>>>> <https://www.tomshardware.com/laptops/snapdragon-x-powered-surface-laptop-7-gets-frequently-returned-item-warning-on-amazon>
>>>
>>> Microsoft does have a translator to run Win32 code on ARM.
>>> That's what is on the Snapdragon device.
>>>
>>> "What is Prism?
>>>
>>> Prism is Microsoft's emulation technology that enables x86/x64
>>> applications to run on Windows PCs with Arm processors, such as
>>> Surface Pro 11th Edition, Snapdragon processor; Surface Pro 9
>>> with 5G, Surface Pro X, and Surface Laptop 7th Edition, Snapdragon
>>> processor. It seamlessly translates app code to run on ARM
>>> architecture, optimizing performance, and reducing CPU usage
>>> to ensure a smooth user experience on devices powered by
>>> Snapdragon X series chips."
>>>
>>> Google is playing up right now, but I gather that isn't working
>>> all that well. Some installers can "detect" they're running on the
>>> wrong platform.
>>>
>>> One person using one of those products experienced good performance
>>> at first (right after the OOBE), but as soon as some updates came
>>> in, the emulator performance cratered.
>>>
>>> Summary: "Safer to test the emulator on a Raspberry Pi than spend $3K and return it"
>>
>> Thanks for that, I had no idea that Microsoft actually came up with
>> something to convert the code. Of course, it doesn't seem to work all
>> that well if people are returning their machines the way the press
>> says they are. It's not like the press would lie or anything, right?
>>
>> I can't help but notice the stellar reviews the supposedly returned
>> devices are getting.
>
> It's clear why Microsoft would use x86 emulation with ARM, countless
> reasons, but who cares about their Copilot bullshit, put Linux for ARM
> on that mother fucker.

Some day, you'll be able to run an AI locally.

The idea is, while you can run multiple video cards, like the two on
the left below, you get a better result if a single card has more RAM
on it. In addition to video cards, some people run "computer clusters"
with crappy network wiring between them. No one mentions how fast
those go.

The purpose of me telling you this is to discourage buying a shitload
of video cards and hoping it will work well.
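That tradeoff reduces to a toy capacity check: a model only avoids
the slow inter-card link if its weights fit entirely on one card.
A back-of-envelope sketch (the function name is made up and the GB
figures are illustrative, not benchmarks -- real fit also depends
on KV cache, activations, and quantization):

```python
# Toy capacity check: a model avoids slow PCIe inter-card traffic only
# when its weights fit entirely on a single card. Illustrative only.

def fits_on_one_card(model_gb: float, cards_gb: list[float]) -> bool:
    """True if the largest single card can hold the whole model."""
    return max(cards_gb) >= model_gb

# A 35 GB group of experts fits on one 48 GB card:
print(fits_on_one_card(35, [48]))       # True
# It does not fit on either of two 16 GB cards, so the weights must be
# split and the PCIe link between the cards becomes the bottleneck:
print(fits_on_one_card(35, [16, 16]))   # False
```

Total RAM across cards is not the number that matters; the largest
single pool is.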
So far, it would appear we need to wait for the next generation of
video cards with HBM3 memory on them. I feel better now that I did
not buy certain video cards L-)

    16GB RAM        16GB RAM               32GB RAM
    4000 shaders    4000 shaders   versus  4000 shaders
        ^              ^
        |     PCIe     |                   7X faster for AI
        +--------------+

The models are tuned sequentially, and at a certain size. If a group
of experts of size 35GB is on offer, then the single video card of
48GB size might be able to run it. The AI is laid out as linear
stages: load and run Step 1, load and run Step 2, load and run Step 3.

    +--------+    +--------+    +--------+
    | Step 1 |    | Step 2 |    | Step 3 |  ==>
    +--------+    +--------+    +--------+

Which isn't exactly like how the human brain works. Even if the
hardware had more room, the way it works might still be discrete
steps. The first step is strategy planning, for whatever other steps
are required. Loading the video cards would be from a PCIe Rev5 NVMe.
Your question might use the math module, or the studio art model
(for drawing pictures).

But you can test them now, and see if they are good at anything. At
the current time, you do not come away from the experience thinking
the thing is all that useful. For example, I asked it to make a list
of 2025 internal-combustion cars in the SUV style, and it made a
list, but the list was no better than a regular web site would offer
for such a class of things. If you asked a human such a question,
they would point out which models had manual heating controls :-)
Not so with the AI. Dumb as a post.

   Paul