Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!eternal-september.org!.POSTED!not-for-mail
From: Stefan Monnier <monnier@iro.umontreal.ca>
Newsgroups: comp.arch
Subject: Re: Is Parallel Programming Hard, And, If So, What Can You Do About It?
Date: Wed, 21 May 2025 15:30:43 -0400
Organization: A noiseless patient Spider
Lines: 27
Message-ID: <jwvecwhixg6.fsf-monnier+comp.arch@gnu.org>
References: <vvnds6$3gism$1@dont-email.me>
	<27492f8028a0d40eff5071e85214fc36@www.novabbs.com>
	<100gj7t$1sbnn$11@dont-email.me> <QP%WP.57065$RXsc.38723@fx36.iad>
	<100iher$2b7vi$2@dont-email.me>
	<jwvcyc3xd2v.fsf-monnier+comp.arch@gnu.org>
	<fcb1f88f53b1a99fae7dc50eaba94f54@www.novabbs.org>
	<i3bq2klvtcl1d47i6hp9bbbi2lud240l6e@4ax.com>
	<100jata$2g8o9$3@dont-email.me> <100ji1o$2lgt3$5@dont-email.me>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Injection-Date: Wed, 21 May 2025 21:30:42 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="586cf7d1a036c30072b7f9745dd34d51";
	logging-data="3149518"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX1+rhm+qhgQJ0DQGKwgttU7NmM1mIOgMTxQ="
User-Agent: Gnus/5.13 (Gnus v5.13)
Cancel-Lock: sha1:6Q1CIV06ujXkxNF9Ziqn3ZDwVe8=
	sha1:TrQVwSkaT4YLF4ElGi3UincShdk=
Bytes: 2561

>>> Processes on the same core are concurrent - processes on different
>>> cores are parallel.
>> Only if the cores and/or "hardware threads" do not interfere with one
>> another?
> That’s why I think the distinction is meaningless.

If you're talking about a set of processes running concurrently or in
parallel, then indeed the two terms are interchangeable, AFAIK.
If you're talking about research areas, parallelism and concurrency are
different.

In the case of concurrency, the core question is: given a set of
somewhat independent tasks working on some chunks of data, how do you
make sure the computed result is correct?  That's what drives the
design of tools like mutexes, memory barriers, transactional memory,
static analyses, reasoning principles, etc., whose core focus is
making sure there are no race conditions, deadlocks, ...
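
For concreteness, here's a toy sketch of my own (plain POSIX threads,
not from any particular codebase; the names `worker` and `counter` are
just made up) showing the kind of race a mutex is there to prevent:
several threads bump a shared counter, and the lock is what makes the
final value come out right.

    #include <pthread.h>
    #include <stdio.h>

    /* Shared counter: without the lock the increments race and the
       final value typically comes out lower than expected.  */
    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        for (int i = 0; i < 1000000; i++) {
            pthread_mutex_lock(&lock);   /* serialize the update */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t[4];
        for (int i = 0; i < 4; i++)
            pthread_create(&t[i], NULL, worker, NULL);
        for (int i = 0; i < 4; i++)
            pthread_join(t[i], NULL);
        /* Prints 4000000 only because the updates are protected.  */
        printf("counter = %ld\n", counter);
        return 0;
    }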

In the case of parallelism, the core question instead is: given
a program/algorithm, how do you restructure it (or even replace it
completely) so as to divide the work into somewhat independent tasks
that can take advantage of multiple CPUs and finish faster?
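
Again just a toy sketch of my own (same plain pthreads, hypothetical
names like `sum_chunk`): the "restructuring" here is splitting an
array sum into per-thread chunks.  The only goal is to use several
CPUs to finish sooner, and since the chunks don't share data, no locks
are needed at all.

    #include <pthread.h>
    #include <stdio.h>

    #define N        (1 << 20)
    #define NTHREADS 4

    static double data[N];

    struct chunk { int lo, hi; double sum; };

    /* Each thread sums its own slice: no sharing, hence no locks.  */
    static void *sum_chunk(void *arg)
    {
        struct chunk *c = arg;
        double s = 0.0;
        for (int i = c->lo; i < c->hi; i++)
            s += data[i];
        c->sum = s;
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_t t[NTHREADS];
        struct chunk c[NTHREADS];
        for (int i = 0; i < NTHREADS; i++) {
            c[i].lo = i * (N / NTHREADS);
            c[i].hi = (i + 1) * (N / NTHREADS);
            pthread_create(&t[i], NULL, sum_chunk, &c[i]);
        }

        double total = 0.0;
        for (int i = 0; i < NTHREADS; i++) {
            pthread_join(t[i], NULL);
            total += c[i].sum;   /* combine the partial results */
        }
        printf("total = %f\n", total);
        return 0;
    }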

Clearly, the two overlap, but they are nevertheless fairly different.


        Stefan