From: Spalls Hurgenson <spallshurgenson@gmail.com>
Newsgroups: comp.sys.ibm.pc.games.action
Subject: Re: rant
Date: Tue, 13 Aug 2024 09:51:47 -0400
Message-ID: <83ombj1ftdc0gbfp77fo326suvpvdlpa5c@4ax.com>

On Tue, 13 Aug 2024 05:46:23 -0500, Altered Beast
<j63480576@gmail.com> wrote:

>Spalls Hurgenson wrote:
>> On Mon, 12 Aug 2024 12:19:58 -0500, Altered Beast
>> <j63480576@gmail.com> wrote:
>>
>>> Kyonshi wrote:
>>>> On 8/4/2024 6:09 PM, Dimensional Traveler wrote:
>>>>> On 8/3/2024 10:38 PM, Mark P. Nelson wrote:
>>>>>> Look, the whole point of the *personal* computer was that you
>>>>>> didn't have to rent time from IBM to figure out your
>>>>>> profit/loss balance.
>>>>>>
>>>>>> Ever since then, every computer company has been trying
>>>>>> desperately to revive the "you only rent it" model to bolster
>>>>>> their bottom line, no matter their public face on the question.
>>>>>>
>>>>>> We're getting closer and closer to no longer having personal
>>>>>> computers which we own and can configure/control as we wish,
>>>>>> but rather Microsoft or Banana computers for which we pay a
>>>>>> regular fee.
>>>>>>
>>>>>> Pfui!
>>>>>
>>>>> It's not just computers.
>>>>>
>>>>
>>>> Well, by now lots of things have more computing power than was
>>>> used to get man to the moon. Cars, for example.
>>>
>>> What units are computing power measured in?
>>
>> Here's a layman's answer. I'm sure experts in the field will take
>> issue with some of my descriptions, but I think it's a good enough
>> overall introduction.
>>
>> FLOPS and IPS are the units I've typically seen used. The former -
>> Floating Point Operations Per Second - measures how fast the
>> computer can do arithmetic calculations, which is a 'real-world'
>> example of what PCs do. After all, in the end everything we ask
>> our computers to do revolves around maths, so knowing how fast it
>> can run a calculation is a good basis for comparing computers.
>>
>> IPS - Instructions Per Second - counts how many internal
>> instructions the CPU can execute each second. However, because of
>> differences between CPUs, IPS doesn't directly scale to output; a
>> calculation that takes one type of CPU three instructions may take
>> a different architecture five, and a third might need twelve.
>>
>> FLOPS is more useful for comparing actual performance between
>> different computers (e.g., your phone versus your home PC versus
>> an F-35 fighter jet). IPS is really only useful for comparing
>> similar architectures (e.g., an Intel 13900 and an Intel 13700).
>> There are also different ways of measuring a CPU's performance,
>> which produce different results depending on which method you use.
>>
>> Precision (how many bits each number carries) also affects the
>> results of FLOPS benchmarking; some computers only have 16-bit
>> precision, others go up to 64-bit. Many early computers also
>> lacked dedicated hardware for floating-point calculations, and so
>> had to 'brute-force' the math at a significant hit to performance.
>> Others were specialized for floating-point performance at a cost
>> to the 'regular' arithmetic used for a lot of user operations.
>> And - especially with older computers - architectures were so
>> radically different that comparisons are almost impossible.
>
>I had heard of FLOPS via MATLAB, which reports the number of FLOPs
>in each instruction. I hadn't realized that the S in FLOPS was for
>seconds, which was why it didn't make sense to me. I'm really
>surprised at the numbers, especially for Cray. 2 FLOPS sounds kind
>of primitive. MATLAB had operations on the order of 1 teraflop for
>one instruction. Of course, these could take a few minutes to
>execute.

Those numbers - from Wikipedia, so take them for what you will - are
floating-point operations per CYCLE (so FLOPC, I guess?). You need to
multiply that per-cycle figure by the clock rate (and the number of
processors) to get the nice big number everyone expects. 160 MFLOPS
(80 MHz x 2 results per cycle) is the figure more typically quoted
for a CRAY-1.
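If you want to check that arithmetic yourself, here's a trivial
back-of-the-envelope sketch in C. The 2-results-per-cycle and 80 MHz
figures are the usual published Cray-1 numbers; the variable names
are just my own:

  #include <stdio.h>

  /* Peak FLOPS = floating-point results per cycle * clock rate *
     number of processors. These are the commonly quoted Cray-1
     figures, not measured values. */
  int main(void)
  {
      double flops_per_cycle = 2.0;  /* parallel add + multiply pipes */
      double clock_hz        = 80e6; /* 80 MHz */
      double processors      = 1.0;

      double peak = flops_per_cycle * clock_hz * processors;
      printf("Peak: %.0f MFLOPS\n", peak / 1e6);  /* prints 160 */
      return 0;
  }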
The 'per cycle' number is arguably the more useful reference, since
actual processor speed varies depending on how fast you can clock the
chip. But it makes some CPUs look a lot slower than you'd expect, I
know. That's why I added an alternate table of comparison. (Also
because the only reference I found for Apollo 11's moon-landing
computer was its overall FLOPS performance, not per cycle, and I
really wanted to show where it lay in comparison to more modern
hardware. We've come a long way, baby!) But that table gives you a
more 'real world' example of the performance difference between
machines.

(Also, if the 486 performance looks a bit skewed, that's because the
first chart shows the base 486 performance using just the CPU,
whereas the second uses the dedicated floating-point hardware built
into the DX line of processors.*)

But really, all these numbers mean very little to the end-user. A lot
of the day-to-day stuff we ask our computers to do rarely touches
floating-point math, and RAM and storage performance usually have a
more immediate impact than finding the chip with the best FLOPS or
MIPS numbers. Unfortunately, measuring THAT makes for even more
difficult comparisons between computers.

Ultimately, the best performance test is, "Does it run Doom?" If the
answer is affirmative, you've got enough performance to do most of
what you'd need a computer to do. ;-)

* and yes, I know the 486SX chips actually had a
built-in-but-disabled FPU too ;-)
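P.S. If anyone wants to see roughly how a FLOPS number gets produced,
here's a toy sketch in C. It is NOT a real benchmark - the serious
ones (LINPACK and friends) are far more careful about vectorization,
caches, and compilers optimizing the loop away - and all the names in
it are my own invention. But it shows the basic recipe (count the
floating-point operations, divide by elapsed time), and if you swap
'double' for 'float' you'll usually see the precision point from
above in action:

  #include <stdio.h>
  #include <time.h>

  /* Toy FLOPS estimate: time a long run of dependent multiplies and
     adds, then divide the operation count by the elapsed CPU time.
     Swap 'double' for 'float' to see how precision changes the rate. */
  #define N 100000000L  /* 1e8 iterations, 2 FLOPs each */

  int main(void)
  {
      volatile double x = 1.000000001; /* volatile stops the compiler
                                          from deleting the loop */
      double acc = 0.0;
      long i;
      clock_t start = clock();

      for (i = 0; i < N; i++)
          acc = acc * x + 1.0;         /* one multiply + one add */

      double secs = (double)(clock() - start) / CLOCKS_PER_SEC;
      printf("acc=%g, ~%.0f MFLOPS\n", acc, (2.0 * N / secs) / 1e6);
      return 0;
  }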