Path: ...!eternal-september.org!feeder3.eternal-september.org!news.eternal-september.org!.POSTED!not-for-mail
From: "Stephen Fuld" <SFuld@alumni.cmu.edu.invalid>
Newsgroups: comp.arch
Subject: Re: Byte Addressability And Beyond
Date: Tue, 4 Jun 2024 06:09:25 -0000 (UTC)
Organization: A noiseless patient Spider
Lines: 97
Message-ID: <v3mb2k$afuu$1@dont-email.me>
References: <v0s17o$2okf4$2@dont-email.me> <2024May29.090435@mips.complang.tuwien.ac.at> <v38opv$1gsj2$3@dont-email.me> <v38rkd$1ha8a$1@dont-email.me> <jwvttifrysb.fsf-monnier+comp.arch@gnu.org> <f90b6e03c727b0f209d64484ec097298@www.novabbs.org> <v3jtd8$3qduu$2@dont-email.me> <20240603132227.00004e0f@yahoo.com> <k6k7O.8602$7jpd.5620@fx47.iad> <v3klhp$3ugeh$1@dont-email.me> <wnl7O.10195$Inzb.2858@fx13.iad> <v3ktnl$3vv86$1@dont-email.me> <K2n7O.29169$61Y8.18080@fx15.iad> <v3l3mk$sns$1@dont-email.me> <jKp7O.4277$QX6b.311@fx46.iad> <72661b952b48d0404b59832c362d1537@www.novabbs.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Injection-Date: Tue, 04 Jun 2024 08:09:25 +0200 (CEST)
Injection-Info: dont-email.me; posting-host="31f6837d4cb52c527806976dfca3c14c";
	logging-data="344030"; mail-complaints-to="abuse@eternal-september.org";	posting-account="U2FsdGVkX18bL7jUBQz2h0HOGL6z0lWvcwydQUReq1c="
User-Agent: XanaNews/1.21-f3fb89f (x86; Portable ISpell)
Cancel-Lock: sha1:8YgEjBWOYuYVK+4k7TSFDIljJ7g=
Bytes: 5489

MitchAlsup1 wrote:

> Scott Lurndal wrote:
> 
> >"Stephen Fuld" <SFuld@alumni.cmu.edu.invalid> writes:
> > > Scott Lurndal wrote:
> > > 
> > > > 
> > > > 
> > > > The ARM neoverse cores, for example, require very little area.
> > > 
> > > Agreed.  I was assuming that the cost of the logic was about the
> > > same whether it was done as CPU instructions or a chunk of
> > > accelerator logic in the I/O stream.  If that is true, then the
> > > cost of having multiples of them in the I/O stream is small.
> 
> > Although the accelerator requires additional logic to interface
> > to the CPUs (either by presenting as a memory mapped device,
> > integrated into the processor register scheme, or some other
> > proprietary mechanism).  Which means non-standard software is
> > required to manage and use the accelerator.
> 
> First consider that it is possible for an I/O device to DMA directly
> to another I/O device in the PCIe routing tree/DAG.
> 
> Then, consider that with this infrastructure, you could DMA from
> memory through the Cryptor and back to memory (or anywhere you wanted
> it).
> 
> >>>>> From the operating software standpoint, it becomes most
> >>>>> convenient, then, to model the offload as a device which
> >>>>> requires OS support (and intervention for e.g. interrupt
> >>>>> handling).
> >>>> 
> >>>> I look at it differently (and perhaps incorrectly).  I view
> >>>> encryption as one of several "transformations" that data goes
> >>>> through in its path to/from some external device.
> > > > 
> > > > That's certainly a valid view, if perhaps not complete.   There
> > > > are use cases for in-place encryption.
> > > 
> > > Good.  Can you give some examples, and perhaps an estimate of what
> > > percentage of the total encryption operations are in place?  Note
> > > that it may be possible to add a feature to the "in-stream"
> > > hardware to allow in-place encryption - i.e. both sides go
> > > to/come from memory.
> 
> Different users want their files encrypted using different keys than
> any other user--where file could be memory resident (or not).


On memory-resident files I agree with you.  But in my conception of
how this would all work, a key would be specified for each I/O
operation, so I/O to different files could trivially use different
keys.
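
As a strawman (every name here is made up by me, not any real driver
API), the request descriptor for such an I/O might carry a
per-operation key handle alongside the usual address/length fields,
something like:

/* Hypothetical sketch only.  The in-stream crypto engine would use
 * key_slot for just this one transfer, so two files can be written
 * with different keys simply by submitting requests with different
 * key handles.
 */
#include <stdint.h>

struct io_request {
    uint64_t lba;          /* starting device block address        */
    uint32_t block_count;  /* transfer length in blocks            */
    uint64_t dma_addr;     /* physical/IOVA address of the buffer  */
    uint32_t key_slot;     /* per-operation key handle; 0 = bypass */
    uint32_t flags;        /* e.g. IO_ENCRYPT, IO_DECRYPT          */
};

#define IO_ENCRYPT  (1u << 0)  /* apply key_slot on the way to the device   */
#define IO_DECRYPT  (1u << 1)  /* apply key_slot on the way from the device */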



> 
> > Consider file access.   From the perspective of the disk, all blocks
> > are identical - it doesn't know which partition, filesystem, or file
> > that any individual block is part of, if any.
> 
> > Whole-disk encryption can happen at the drive.    Per-file (or
> > per-filesystem) happens in the filesystem driver(s), perhaps
> > with a hardware assist, but it wouldn't be on the path from
> > the disk to memory.
> 
> You may be correct in how it is now--but if the device has encryption
> services why can they not be applied sector by sector ??
> 
> > There are cases where only a portion of a file is encrypted, and
> > cases where the encryption is combined with compression (pkzip,
> > rar, etc).

If the "boundary" of where the encrypted portion starts or ends
corresponds to an I/O boundary, then there is no problem.  If not, then
the interface requires the ability to start/stop encryption at an
arbitrary spot within the I/O.  I envision this working sort of like
scatter/gather, but instead of just different memory addresses, each
"chunk" is marked as encrypted or not.  This is probably needed anyway
for things like network I/O, where you want to encrypt the data but not
the header.  As for combining it with compression, clearly the
encryption must come after the compression, and decryption must come
before decompression.  If you are doing the compression in the hardware
interface, that shouldn't be a problem, and if you are doing it in
software, then it definitely isn't a problem.
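
As a rough sketch of what I mean (again, all names invented, not a
real interface), each scatter/gather element could carry its own
transform flag, so a single request can mix clear and encrypted
chunks:

/* Hypothetical sketch only.  A scatter/gather list where each element
 * carries a per-chunk transform, e.g. a network send with a cleartext
 * header and an encrypted payload, or a file where only part of the
 * data is encrypted.
 */
#include <stdint.h>

struct sg_element {
    uint64_t dma_addr;   /* buffer address for this chunk             */
    uint32_t length;     /* chunk length in bytes                     */
    uint32_t xform;      /* SG_CLEAR, SG_ENCRYPT, SG_COMPRESS_ENCRYPT */
};

enum {
    SG_CLEAR            = 0,  /* pass through untouched        */
    SG_ENCRYPT          = 1,  /* encrypt this chunk            */
    SG_COMPRESS_ENCRYPT = 2   /* compress first, then encrypt  */
};

/* Example: two-element list, header in the clear, payload encrypted. */
static const struct sg_element example_list[] = {
    { .dma_addr = 0x10000, .length = 64,   .xform = SG_CLEAR   },
    { .dma_addr = 0x20000, .length = 1400, .xform = SG_ENCRYPT },
};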



-- 
 - Stephen Fuld 
(e-mail address disguised to prevent spam)