From: mitchalsup@aol.com (MitchAlsup1)
Newsgroups: comp.arch
Subject: Re: Making Lemonade (Floating-point format changes)
Date: Thu, 16 May 2024 23:10:48 +0000
Organization: Rocksolid Light
Message-ID: <06ca70b0f9a7f44c59dd827779ad855e@www.novabbs.org>
References: <20240513151647.0000403f@yahoo.com> <9c79fb24a0cf92c5fac633a409712691@www.novabbs.org> <01b78f15d6b5f87809163ce539356ef2@www.novabbs.org> <4551b3b18017eb57d94876733377f532@www.novabbs.org>

EricP wrote:
> MitchAlsup1 wrote:
>> Scott Lurndal wrote:
>>
>>> mitchalsup@aol.com (MitchAlsup1) writes:
>>>> BGB wrote:
>>
>>>>>>> Also I may need to rework how page-in/page-out is handled (and/or
>>>>>>> how IO is handled in general), since if a page swap needs to happen
>>>>>>> while IO is already in progress (such as a page miss in the
>>>>>>> system-call process), at present the OS is dead in the water (one
>>>>>>> can't access the SD card in the middle of a different access to the
>>>>>>> SD card).
>>>>>>
>>>>>> Having a HyperVisor helps a lot here, with the HV taking the page faults
>>>>>> of the OS page-fault handler.
>>>>
>>>>> Seems like adding another layer couldn't help with this, unless it
>>>>> also abstracts away the SD card interface.
>>>>
>>>> With an HV, the GuestOS does not "do" IO; it paravirtualizes it via the HV.
>>
>>> Actually, that's not completely accurate. With PCI Express SR-IOV,
>>> an I/O MMU, and hardware I/O virtualization, the guest accesses the I/O
>>> device hardware directly and initiates DMA transactions to or from the
>>> guest OS directly. With the PCIe PRI (Page Request Interface), the
>>> guest DMA target pages don't need to be pinned by the hypervisor; the
>>> I/O MMU will interrupt the hypervisor to make the page present
>>> and pin it, and the hardware will then do the DMA.
>>
>> This was something I was not aware of but probably should have anticipated.
>>
>> The GuestOS initiates an I/O request (command) using a virtual function.
>> Rather than going through a bunch of activities to verify that the user
>> owns the page and that it is present, the GuestOS just launches the request,
>> and the I/O device then page faults and pins the required page (if it is
>> not already so)--much like the page-fault volcano when a new process
>> begins running: faulting in .text, the stack, and data pages
>> as they get touched.
>>
>> This way, the GuestOS simply considers all pages in its "portfolio" to be
>> present in memory, and the HV does the heavy lifting and page virtualization.
>>
>> I guess I should have anticipated this. Sorry !!

> The reason OSs pin the pages before the IO starts is so there is no
> latency reading in from a device, which would otherwise have to buffer the input.
> An HDD seek averages about 9 ms; add 3 ms for the page-fault code.
> A 100 Mb/s Ethernet can receive about 10 MB/s, or 10 kB/ms = 120 kB in those 12 ms.
> What would likely happen is the Ethernet card's buffer would fill up,
> then it starts tossing packets while it waits for the HV to page-fault
> the receive buffer in from its page file. Later, when the guest OS
> buffer has faulted in and the card's buffer is emptied, the network
> software will eventually NAK all the tossed packets and they get resent.
> So there is a stutter every time the HV recycles that guest OS memory,
> one that requires retransmissions to fix. And this is basically using the
> sender's memory to buffer the transmission while the HV page faults.
> Note there are devices, like A-to-D converters, which cannot fix the
> tossed data by asking for a retransmission. Or devices like tape drives,
> which can rewind and reread but are verrry slow about it.
> I would want an option in this SR-IOV mechanism for the guest app to
> tell the guest OS to tell the HV to pin the buffer before starting IO.

So, what happens if the GuestOS thinks the user file is located on a local
SATA drive, but it is really across some network ?? This works when devices
are not virtualized, since the request is routed to a different system where
the file is local, accessed, and the data returned over the network. Does
this mean the application has lost a level of indirection in order to
become virtualized ?????
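For what it's worth, here is a minimal sketch of the "pin the buffer before
starting IO" option EricP asks for, assuming a POSIX-like guest where mlock()
is how the application requests residency. Whether the guest OS would forward
that pin request down to the HV (so the IOMMU never takes a PRI fault
mid-transfer) is exactly the missing piece; the file name and sizes below are
just placeholders, not any real SR-IOV/PRI API.

/* Sketch: pin the I/O buffer before posting the transfer, so the device
 * never stalls (or tosses packets) waiting on a page fault.             */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <unistd.h>

#define BUF_SIZE (1 << 20)                  /* 1 MiB buffer */

int main(void)
{
    void *buf = aligned_alloc(4096, BUF_SIZE);   /* page-aligned helps pinning */
    if (buf == NULL) { perror("aligned_alloc"); return 1; }

    /* Pin BEFORE the I/O is started; in the scheme above the guest OS
     * would also have to relay this to the HV.                          */
    if (mlock(buf, BUF_SIZE) != 0) { perror("mlock"); return 1; }

    int fd = open("datafile", O_RDONLY);         /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    ssize_t n = read(fd, buf, BUF_SIZE);         /* I/O into pinned memory */
    if (n < 0) perror("read");
    else       printf("read %zd bytes into pinned buffer\n", n);

    close(fd);
    munlock(buf, BUF_SIZE);
    free(buf);
    return 0;
}

Even without HV cooperation, pinning up front at least keeps the guest OS
from faulting in the middle of the transfer; the A-to-D and tape cases above
are the ones where that ordering really matters.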