
Re: [PATCH] stubdom: foreignmemory: Fix build after 0dbb4be739c5


  • To: Julien Grall <julien@xxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 13 Jul 2021 18:27:28 +0200
  • Cc: Julien Grall <jgrall@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Costin Lupu <costin.lupu@xxxxxxxxx>, Juergen Gross <jgross@xxxxxxxx>
  • Delivery-date: Tue, 13 Jul 2021 16:27:44 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 13.07.2021 18:15, Julien Grall wrote:
> On 13/07/2021 16:52, Jan Beulich wrote:
>> On 13.07.2021 16:33, Julien Grall wrote:
>>> On 13/07/2021 15:23, Jan Beulich wrote:
>>>> On 13.07.2021 16:19, Julien Grall wrote:
>>>>> On 13/07/2021 15:14, Jan Beulich wrote:
>>>>>>> And I don't think it should be named XC_PAGE_*, but rather XEN_PAGE_*.
>>>>>>
>>>>>> Even that doesn't seem right to me, at least in principle. There
>>>>>> shouldn't be a build-time setting when it may vary at runtime. IOW
>>>>>> on Arm I think a runtime query to the hypervisor would be needed
>>>>>> instead.
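As a purely illustrative sketch of what such a runtime query might look
like (Xen provides no such hypercall; HYPERVISOR_get_page_shift below is
invented for the example), the compile-time constant would be replaced
by a cached value obtained at startup:

#include <stddef.h>

/* Invented interface: assume the hypervisor can report its page shift. */
extern int HYPERVISOR_get_page_shift(void);

static unsigned int cached_page_shift;

static unsigned int xen_page_shift(void)
{
    if (!cached_page_shift) {
        int shift = HYPERVISOR_get_page_shift();

        /* Fall back to the historical 4k assumption on failure. */
        cached_page_shift = (shift > 0) ? (unsigned int)shift : 12;
    }
    return cached_page_shift;
}

static inline unsigned long xen_page_size(void)
{
    return 1UL << xen_page_shift();
}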
>>>>>
>>>>> Yes, we want to be able to use the same userspace/OS without
>>>>> rebuilding it for a specific hypervisor page size.
>>>>>
>>>>>> And thinking
>>>>>> even more generally, perhaps there could also be mixed (base) page sizes
>>>>>> in use at run time, so it may need to be a bit mask which gets returned.
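To picture the bit mask idea: if a set bit N meant "2^N bytes is a
supported granularity" (an encoding invented here purely for
illustration), a consumer could decode the returned mask like this:

#include <inttypes.h>
#include <stdio.h>

/* Invented encoding: bit N set => 2^N bytes is a supported granularity. */
static void print_granularities(uint64_t mask)
{
    for (unsigned int shift = 0; shift < 64; shift++)
        if (mask & (UINT64_C(1) << shift))
            printf("supported: %" PRIu64 " bytes\n", UINT64_C(1) << shift);
}

int main(void)
{
    /* E.g. a hypervisor reporting both 4k and 16k base page sizes. */
    print_granularities((UINT64_C(1) << 12) | (UINT64_C(1) << 14));
    return 0;
}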
>>>>>
>>>>> I am not sure I understand this. Are you saying the hypervisor may
>>>>> use different page sizes at the same time?
>>>>
>>>> I think so, yes. And I further think the hypervisor could even allow its
>>>> guests to do so.
>>>
>>> This is already the case on Arm. We need to differentiate between the
>>> page size used by the guest and the one used by Xen for the stage-2 page
>>> table (what you call EPT on x86).
>>>
>>> In this case, we are talking about the page size used by the hypervisor
>>> to configure the stage-2 page table.
>>>
>>>> There would be a distinction between the granularity at which RAM
>>>> gets allocated and the granularity at which page mappings (RAM or
>>>> other) can be established, which yields an environment that I'd say
>>>> has no clear "system page size".
>>>
>>> I don't quite understand why you would allocate and establish the
>>> memory with different page sizes in the hypervisor. Can you give an
>>> example?
>>
>> Pages may get allocated in 16k chunks, but there may be ways to map
>> 4k MMIO regions, 4k grants, etc. Due to the 16k allocation granularity
>> you'd e.g. still balloon pages in and out at 16k granularity.
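In other words (a sketch with invented constants; real values would come
from the runtime query discussed above), allocation and mapping
granularity diverge, and balloon requests have to be rounded to the
former:

#include <stddef.h>

#define ALLOC_GRAN (16u << 10) /* RAM allocated/ballooned in 16k units */
#define MAP_GRAN    (4u << 10) /* MMIO/grants still mappable at 4k */

/* Round a balloon request up to whole allocation-granularity units. */
static size_t balloon_round_up(size_t bytes)
{
    return (bytes + ALLOC_GRAN - 1) & ~((size_t)ALLOC_GRAN - 1);
}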
> Right, 16KB is a multiple of 4KB, so a guest could say "Please allocate 
> a contiguous chunk of 4 4KB pages".
> 
> From my understanding, you are suggesting that we tell the guest "we
> support 4KB, 16KB, 64KB...". However, it should be sufficient to say
> "we support 4KB and all its multiples".

No - in this case it could legitimately expect to be able to balloon
out a single 4k page. Yet that's not possible with 16k allocation
granularity.
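Concretely (a hedged sketch, constants invented): an interface honouring
a 16k allocation granularity would have to reject the single-4k-page
request that "4KB and all its multiples" appears to permit:

#include <stdbool.h>
#include <stddef.h>

#define ALLOC_GRAN (16u << 10) /* smallest unit RAM can be returned in */

static bool balloon_request_valid(size_t bytes)
{
    /* bytes == 4096 fails here, despite being a multiple of 4k. */
    return bytes != 0 && bytes % ALLOC_GRAN == 0;
}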

Jan

> For a hypervisor configured with 16KB (or 64KB) as the smallest page
> granularity, we would say "we support 16KB (resp. 64KB) and all its
> multiples".
> 
> So the only thing we need is a way to query the smallest page
> granularity supported. This could be a shift, a size, whatever...
> 
> If the guest supports a smaller page granularity, then the guest
> would need to make sure to adapt the ballooning, grants... so they are
> at least a multiple of the page granularity supported by the
> hypervisor.
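As a sketch of that adaptation (xen_page_shift() being the hypothetical
runtime query from earlier in the thread), a guest running with smaller
pages than the hypervisor would batch its operations accordingly:

#include <stddef.h>

/* Hypothetical runtime query for the hypervisor's page shift. */
extern unsigned int xen_page_shift(void);

/*
 * How many guest pages (of size 1 << guest_shift) must be batched so
 * that each ballooning/grant operation covers whole hypervisor pages.
 */
static size_t guest_pages_per_xen_page(unsigned int guest_shift)
{
    unsigned int xen_shift = xen_page_shift();

    /* E.g. a 4k guest on a 16k hypervisor batches 4 pages at a time. */
    return xen_shift > guest_shift ? (size_t)1 << (xen_shift - guest_shift)
                                   : 1;
}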
> 
> Cheers,
> 
