Article <4551b3b18017eb57d94876733377f532@www.novabbs.org>

Path: ...!weretis.net!feeder9.news.weretis.net!i2pn.org!i2pn2.org!.POSTED!not-for-mail
From: mitchalsup@aol.com (MitchAlsup1)
Newsgroups: comp.arch
Subject: Re: Making Lemonade (Floating-point format changes)
Date: Thu, 16 May 2024 21:22:39 +0000
Organization: Rocksolid Light
Message-ID: <4551b3b18017eb57d94876733377f532@www.novabbs.org>
References: <abe04jhkngt2uun1e7ict8vmf1fq8p7rnm@4ax.com> <memo.20240512203459.16164W@jgd.cix.co.uk> <v1rab7$2vt3u$1@dont-email.me> <20240513151647.0000403f@yahoo.com> <v1tre1$3leqn$1@dont-email.me> <9c79fb24a0cf92c5fac633a409712691@www.novabbs.org> <v1u6oi$3o53t$1@dont-email.me> <bcbda29c4c23543d1ed6de8290d1dc3b@www.novabbs.org> <v1ubel$3p1kt$1@dont-email.me> <01b78f15d6b5f87809163ce539356ef2@www.novabbs.org> <pZJ0O.14089$yT%1.12952@fx33.iad>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Injection-Info: i2pn2.org;
	logging-data="1316581"; mail-complaints-to="usenet@i2pn2.org";
	posting-account="65wTazMNTleAJDh/pRqmKE7ADni/0wesT78+pyiDW8A";
User-Agent: Rocksolid Light
X-Rslight-Site: $2y$10$lgJ756NJ.AMHDLciaF8OUOKTHoHaecsL5wUfdd/gkdl0bKRSCm/n2
X-Spam-Checker-Version: SpamAssassin 4.0.0
X-Rslight-Posting-User: ac58ceb75ea22753186dae54d967fed894c3dce8
Bytes: 3714
Lines: 49

Scott Lurndal wrote:

> mitchalsup@aol.com (MitchAlsup1) writes:
>>BGB wrote:

>>>>> Also I may need to rework how page-in/page-out is handled (and or how 
>>>>> IO is handled in general) since if a page swap needs to happen while 
>>>>> IO is already in progress (such as a page-miss in the system-call 
>>>>> process), at present, the OS is dead in the water (one can't access 
>>>>> the SDcard in the middle of a different access to the SDcard).
>>>> 
>>>> Having a HyperVisor helps a lot here, with HV taking the page faults
>>>> of the OS page fault handler.
>>
>>> Seems like adding another layer couldn't help with this, unless it also 
>>> abstracts away the SDcard interface.
>>
>>With a HV, GuestOS does not "do" IO; it paravirtualizes it via HV.

> Actually, that's not completely accurate.  With PCI Express SR-IOV,
> an I/O MMU and hardware I/O virtualization, the guest accesses the I/O device
> hardware directly and initiates DMA transactions to-or-from the
> guest OS directly.   With the PCIe PRI (Page Request Interface), the
> guest DMA target pages don't need to be pinned by the hypervisor; the
> I/O MMU will interrupt the hypervisor to make the page present
> and pin it and the hardware will then do the DMA.

This was something I was not aware of but probably should have anticipated.

GuestOS initiates an I/O request (command) using a virtual function.
Rather than going through a bunch of activity to verify that the user
owns the page and that it is present, GuestOS just launches the request;
the I/O device then page faults and pins the required page (if it is not
already present)--much like the page-fault volcano when a new process
begins running:: faulting in .text, the stack, and data pages as they
get touched.

This way, GuestOS simply considers all pages in its "portfolio" to be
present in memory, and HV does the heavy lifting and page virtualization.

I guess I should have anticipated this. Sorry !!

>>So, having a GuestOS in a position it cannot deal with another page
>>fault is no longer a hindrance:: GuestOS does not see that page fault;
>>it is just handled and goes away.

> There are two levels of page faults - at the guest level, the
> guest handles everything.   When the hypervisor supports
> multiplexing multiple guests on a core, it will only handle second
> level translation table faults.