
Re: [Xen-devel] [PATCH 0/2] xen: Allow xen tools to run in guest using 64K page granularity



Hi Ian,

On 07/07/15 16:19, Ian Campbell wrote:
> On Tue, 2015-07-07 at 13:20 +0100, Julien Grall wrote:
>> Hi Ian,
>>
>> On 25/06/15 12:23, Ian Campbell wrote:
>>> On Thu, 2015-06-25 at 11:21 +0100, Wei Liu wrote:
>>>> On Mon, May 11, 2015 at 12:55:34PM +0100, Julien Grall wrote:
>>>>> Hi all,
>>>>>
>>>>> This small series contains the only changes required in Xen in order
>>>>> to run a guest using 64K page granularity on top of an unmodified Xen.
>>>>>
>>>>> I'd like feedback from the tools maintainers on whether it would be
>>>>> worth introducing a function xc_pagesize() replicating the behaviour
>>>>> of getpagesize() for Xen.
>>>>>
>>>>
>>>> Can we start with documenting the ABI (?) for communicating between
>>>> guests with different page sizes?
>>>
>>> We should certainly make it clearer what things are in terms of "Xen ABI
>>> page size" vs "the guest's page size" and other things.
>>>
>>> I think we can commit these two without that though?
>>
>> Ping? It would be nice to have these 2 patches for Xen 4.5. This is the
>> only patch necessary in Xen in order to run OS using 64KB page
>> granularity as DOM0.
> 
> Wei acked the patch at "Thu, 25 Jun 2015 13:58:26 +0100" after the
> discussion around the docs which concluded at "Thu, 25 Jun 2015 12:45:49
> +0100", IOW I have assumed this Ack to mean he is ok with them going in
> with a docs patch to follow.

Would it be possible to backport those 2 patches to Xen 4.5? (I don't
think it's worth doing for Xen 4.4.) They are the only changes needed to
allow Linux using 64KB page granularity to run on Xen 4.5.

Regards,

-- 
Julien Grall

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

