
Re: [Xen-devel] [PATCH v3 07/10] xen/arm: Add handling write fault for dirty-page tracing

> -----Original Message-----
> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
> bounces@xxxxxxxxxxxxx] On Behalf Of Ian Campbell
> Sent: Tuesday, August 06, 2013 10:17 PM
> To: Jaeyong Yoo
> Cc: xen-devel@xxxxxxxxxxxxx; 'Stefano Stabellini'
> Subject: Re: [Xen-devel] [PATCH v3 07/10] xen/arm: Add handling write
> fault for dirty-page tracing
> On Tue, 2013-08-06 at 20:56 +0900, Jaeyong Yoo wrote:
> > > I hope my description of a linear map makes sense, hard to do
> > > without a whiteboard ;-)
> >
> > Thanks a lot for the ascii art! Even without a whiteboard, it works
> > very nicely for me :)
> Oh good, I was worried I'd made it very confusing !
> >
> > I think I understand your points. Previously, in my implementation, I
> > created the Xen mapping to leaf PTE of the P2M by looking up the guest
> > leaf p2m and create_xen_table, but everything could be better if I
> > just map the xen_second to the guest's P2M first. Then, by just
> > reading the correct VA, I can immediately access leaf PTE of guest p2m.
> Correct.
> > As a minor issue, I don't quite understand your numbers within
> > XEN_SECOND[64..80] for p2m third and XEN_SECOND[80..144] for p2m second.
> > I think p2m third should have a larger VA range than the one in p2m
> > second.
> The numbers are slot number (i.e. entries) within the xen_second table.
> I think I was just a bit confused; the p2m_second mapping actually only
> needs one entry, I think, since you just need it to loop back to the
> entry containing the p2m_third mapping.
> > If I'm not mistaken, if we try to migrate domU with memory size 4GB,
> > it requires VA sizes of 8MB for p2m third and 16KB for p2m second.
> Sounds about right. The reason I used 16 slots is that we, at least in
> theory, support domU memory size up to 16GB and each slot == 1GB.
> > Since we have 128MB size for vlpt, how about allocating vlpt for
> > different ranges within 128MB to each migrating domU? This way, we
> > don't need to context switch the xen second page tables. Although it
> > limits simultaneous live migration of domUs with large memory sizes,
> > for ARM I think it is reasonable.
> You'd only be able to fit 4, I think, if supporting 16GB guests. I'm not
> sure how I feel about making a slightly random limitation like that.
> Why don't we just context switch the slots for now, only for domains where
> log dirty is enabled, and then we can measure and see how bad it is etc.

Here are the measurement results:

For a better understanding of the trade-off between vlpt and page-table
walk in dirty-page handling, let's consider the following two cases:
 - migrating a single domain at a time, and
 - migrating multiple domains concurrently.

For each case, the metrics we are going to look at are the following:
 - page-table walk overhead: for handling a single dirty page, a
    page-table walk takes 6us and vlpt (the improved version) takes 1.5us.
    From this, we count 4.5us of pure overhead compared to vlpt,
    incurred for every dirty page.
 - vlpt overhead: the only vlpt overhead is the flushing at context
    switch. Flushing a 34MB virtual address range (which is what is
    needed to support a 16GB domU) takes 130us, and it happens whenever
    two migrating domUs are context switched.

Here are the results:

 - Migrating a single domain at a time:
    * page-table walk overhead: 4.5us * 611 times = 2.7ms
    * vlpt overhead: 0 (no flush required)

 - Migrating two domains concurrently:
    * page-table walk overhead: 4.5us * 8653 times = 39 ms
    * vlpt overhead: 130us * 357 times = 46 ms
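The totals above follow directly from the per-event costs; as a sanity check, here is the arithmetic written out (the counts 611, 8653 and 357 are the ones observed in the two experiments):

```c
/* Cost model for the numbers above: a page-table walk costs 6us per
 * dirty page versus 1.5us for vlpt, and vlpt pays a 130us range flush
 * per context switch between migrating domains. */

static const double WALK_US  = 6.0;
static const double VLPT_US  = 1.5;
static const double FLUSH_US = 130.0;

/* Extra cost of walking instead of using vlpt, in milliseconds. */
static double walk_overhead_ms(unsigned long dirty_pages)
{
    return (WALK_US - VLPT_US) * dirty_pages / 1000.0;
}

/* vlpt's only cost: one range flush per relevant context switch. */
static double vlpt_overhead_ms(unsigned long context_switches)
{
    return FLUSH_US * context_switches / 1000.0;
}
```

walk_overhead_ms(611) gives roughly 2.7, walk_overhead_ms(8653) roughly 39, and vlpt_overhead_ms(357) roughly 46, matching the figures above.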

Although the page-table walk gives slightly better performance when
migrating two domains, I think it is better to choose vlpt for the
following reasons:
 - In the above tests, I did not run any workload in the migrating domU,
    and IIRC, when I ran gzip or bonnie++ in the domU, the number of
    dirty pages grew to a few thousand. Then the page-table walk
    overhead becomes a few hundred milliseconds even when migrating a
    single domain.
 - I would expect that migrating a single domain is done more
    frequently than migrating multiple domains at a time.

One more thing, regarding your comment about TLB lockdown, which was:
> It occurs to me now that with 16 slots changing on context switch and 
> a further 16 aliasing them (and hence requiring maintenance too) for 
> the super pages it is possible that the TLB maintenance at context 
> switch might get prohibitively expensive. We could address this by 
> firstly only doing it when switching to/from domains which have log 
> dirty mode enabled and then secondly by seeing if we can make use of 
> global or locked down mappings for the static Xen .text/.data/.xenheap 
> mappings and therefore allow us to use a bigger global flush.

Unfortunately, the Cortex-A15 does not appear to support TLB lockdown.
Also, I am not sure whether setting the global bit in a page table entry
prevents it from being evicted by a TLB flush operation.
If that works, we could decrease the vlpt overhead a lot.
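For what it's worth, the first half of the quoted suggestion (doing the maintenance only when a log-dirty domain is involved in the switch) could look roughly like the sketch below. The struct layout and function names are hypothetical, not Xen's real interfaces; the flush stub just records what a real implementation would invalidate:

```c
#include <stdbool.h>

#define VLPT_BASE 0x10000000UL       /* hypothetical vlpt window start */
#define VLPT_SIZE (34UL << 20)       /* the 34MB range measured above */

struct domain {                      /* stand-in for Xen's struct domain */
    bool log_dirty_enabled;
};

static unsigned long flushed_bytes;  /* instrumentation for this sketch */

static void flush_xen_tlb_range_va(unsigned long va, unsigned long size)
{
    (void)va;
    flushed_bytes += size;           /* real code would issue TLBI ops here */
}

/*
 * Pay the ~130us vlpt range flush only when the outgoing or incoming
 * domain is being log-dirty traced; ordinary switches stay unaffected.
 */
static void vlpt_context_switch(struct domain *prev, struct domain *next)
{
    if (prev->log_dirty_enabled || next->log_dirty_enabled)
        flush_xen_tlb_range_va(VLPT_BASE, VLPT_SIZE);
}
```

With this, the vlpt overhead measured above is only paid while a migration is actually in progress.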


> Ian.
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
