From: mitchalsup@aol.com (MitchAlsup1)
Newsgroups: comp.arch
Subject: Re: Arguments for a sane ISA 6-years later
Date: Mon, 5 Aug 2024 20:05:53 +0000
Organization: Rocksolid Light

On Mon, 5 Aug 2024 17:41:34 +0000, Stephen Fuld wrote:

> Scott Lurndal wrote:
>
>> "Stephen Fuld" writes:
>>> MitchAlsup1 wrote:
>>>
>>>> It seems to me that a file copy application would understand
>>>> that writing of DRAM is irrelevant when the true destination
>>>> is another sector on another disk, and any means to connect
>>>> those two is more than sufficient.
>>>
>>> I suppose you could create a mechanism that fed the data from the
>>> "read" DMA directly to the "write" DMA, thus bypassing not only the
>>> cache, but saving DRAM bandwidth as well. This would help on
>>> copies, and perhaps things like defrag and backup. But I suspect
>>> that the savings are not worth the effort.
>>
>> It would be more logical, I think, to simply build the functionality
>> into the controller (when the source and destination are devices
>> attached to that controller (e.g. SATA, SAS or NVMe)). IIRC,
>> that sort of functionality was available on some SCSI controllers.
>>
>> For the case where devices are on multiple controllers, PCI Express
>> peer-to-peer would be the appropriate solution. There's no need
>> for the CPU and cache complex to be involved at all.
>
> Yes, thank you. The PCI Express option was the kind of thing I was
> thinking of. Since it is more general than the "in controller" option,
> if you implement it at the PCI level, then you don't need the
> controller option.

Done right, it is just a part of I/O MMU address translation.

> But even though the savings are real, given the limited use case for
> the feature, I question if it is worth the trouble.

With an I/O MMU it pretty much drops out for free.

>> Shades of channel programs...

Not nearly as flexible as channel programs, nor with their overhead.