Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup1)
Newsgroups: comp.arch
Subject: Re: Making Lemonade (Floating-point format changes)
Date: Thu, 16 May 2024 23:10:48 +0000
Organization: Rocksolid Light
Message-ID: <06ca70b0f9a7f44c59dd827779ad855e@www.novabbs.org>
References: <abe04jhkngt2uun1e7ict8vmf1fq8p7rnm@4ax.com> <memo.20240512203459.16164W@jgd.cix.co.uk> <v1rab7$2vt3u$1@dont-email.me> <20240513151647.0000403f@yahoo.com> <v1tre1$3leqn$1@dont-email.me> <9c79fb24a0cf92c5fac633a409712691@www.novabbs.org> <v1u6oi$3o53t$1@dont-email.me> <bcbda29c4c23543d1ed6de8290d1dc3b@www.novabbs.org> <v1ubel$3p1kt$1@dont-email.me> <01b78f15d6b5f87809163ce539356ef2@www.novabbs.org> <pZJ0O.14089$yT%1.12952@fx33.iad> <4551b3b18017eb57d94876733377f532@www.novabbs.org> <CPv1O.3$B7ge.2@fx34.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
	logging-data="1324308"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="65wTazMNTleAJDh/pRqmKE7ADni/0wesT78+pyiDW8A";
User-Agent: Rocksolid Light
X-Rslight-Posting-User: ac58ceb75ea22753186dae54d967fed894c3dce8
X-Rslight-Site: $2y$10$WdmmKZdcjo8AjucAvXqYweeEly/dXp5nbS/hk7rPfGbelyDZ8DAYC
X-Spam-Checker-Version: SpamAssassin 4.0.0
Bytes: 5162
Lines: 78

EricP wrote:

> MitchAlsup1 wrote:
>> Scott Lurndal wrote:
>> 
>>> mitchalsup@aol.com (MitchAlsup1) writes:
>>>> BGB wrote:
>> 
>>>>>>> Also I may need to rework how page-in/page-out is handled (and or 
>>>>>>> how IO is handled in general) since if a page swap needs to happen 
>>>>>>> while IO is already in progress (such as a page-miss in the 
>>>>>>> system-call process), at present, the OS is dead in the water (one 
>>>>>>> can't access the SDcard in the middle of a different access to the 
>>>>>>> SDcard).
>>>>>>
>>>>>> Having a HyperVisor helps a lot here, with HV taking the page faults
>>>>>> of the OS page fault handler.
>>>>
>>>>> Seems like adding another layer couldn't help with this, unless it 
>>>>> also abstracts away the SDcard interface.
>>>>
>>>> With an HV, GuestOS does not "do" IO; it paravirtualizes it via the HV.
>> 
>>> Actually, that's not completely accurate.  With PCI Express SR-IOV,
>>> an I/O MMU and hardware I/O virtualization, the guest accesses the
>>> I/O device hardware directly and initiates DMA transactions to or
>>> from the guest OS directly.  With the PCIe PRI (Page Request
>>> Interface), the guest DMA target pages don't need to be pinned by
>>> the hypervisor; the I/O MMU will interrupt the hypervisor to make
>>> the page present and pin it, and the hardware will then do the DMA.
>> 
>> This was something I was not aware of but probably should have anticipated.
>> 
>> GuestOS initiates an I/O request (command) using a virtual function.
>> Rather than going through a bunch of activities to verify that the
>> user owns the page and that it is present, GuestOS just launches the
>> request, and then the I/O device page faults and pins the required
>> page (if it is not already pinned)--much like the page fault volcano
>> when a new process
>> begins running:: page faulting in .text, the stack, and data pages
>> as they get touched.
>> 
>> This way, GuestOS simply considers all pages in its "portfolio" to be
>> present in memory, and HV does the heavy lifting and page virtualization.
>> 
>> I guess I should have anticipated this. Sorry !!

> The reason OSes pin the pages before the IO starts is so there is no
> latency reading in from a device, which would otherwise have to buffer
> the input. An HDD seek averages about 9 ms; add 3 ms for the page
> fault code. A 100 Mb/s Ethernet can receive 10 MB/s or 10 kB/ms,
> = 120 kB in 12 ms.

> What would likely happen is the Ethernet card buffer would fill up
> then it starts tossing packets, while it waits for HV to page fault
> the receive buffer in from its page file. Later when the guest OS
> buffer has faulted in and the card's buffer is emptied, the network
> software will eventually NAK all the tossed packets and they get resent.

> So there is a stutter every time the HV recycles that guest OS memory
> that requires retransmissions to fix. And this is basically using the
> sender's memory to buffer the transmission while the HV page faults.

> Note there are devices, like A to D converters which cannot fix the
> tossed data by asking for a retransmission. Or devices like tape drives
> which can rewind and reread but are verrry slow about it.

> I would want an option in this SR-IOV mechanism for the guest app to
> tell the guest OS to tell the HV to pin the buffer before starting IO.



So, what happens if GuestOS thinks the user file is located on a local
SATA drive, but it is really across some network ?? This works when
devices are not virtualized, since the request is routed to a different
system where the file is local, accessed there, and the data returned
over the network.

Does this mean the application has lost a level of indirection in order
to have become virtualized ?????