
Re: [Xen-devel] [PATCH RESEND v5 5/6] xen/arm: Implement hypercall for dirty page tracing



On Wed, 2013-11-20 at 18:49 +0900, Jaeyong Yoo wrote:
> > -----Original Message-----
> > From: Ian Campbell [mailto:Ian.Campbell@xxxxxxxxxx]
> > Sent: Tuesday, November 19, 2013 8:57 PM
> > To: Jaeyong Yoo
> > Cc: xen-devel@xxxxxxxxxxxxx
> > Subject: Re: [Xen-devel] [PATCH RESEND v5 5/6] xen/arm: Implement
> > hypercall for dirty page tracing
> > 
> > On Tue, 2013-11-19 at 10:32 +0900, Jaeyong Yoo wrote:
> > > > > > diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> > > > > > index c0b5dd8..0a32301 100644
> > > > > > --- a/xen/arch/arm/domain.c
> > > > > > +++ b/xen/arch/arm/domain.c
> > > > > > @@ -215,6 +215,12 @@ static void ctxt_switch_to(struct vcpu *n)
> > > > > >      WRITE_SYSREG(hcr, HCR_EL2);
> > > > > >      isb();
> > > > > >
> > > > > > +    /* for dirty-page tracing
> > > > > > +     * XXX: how do we consider SMP case?
> > > > > > +     */
> > > > > > +    if ( n->domain->arch.dirty.mode )
> > > > > > +        restore_vlpt(n->domain);
> > > > >
> > > > > This is an interesting question. xen_second is shared between all
> > > > > pcpus, which means that the vlpt is currently only usable from a
> > > > > single physical CPU at a time.
> > > > >
> > > > > Currently the only per-cpu area is the domheap region from 2G-4G.
> > > > > We could steal some address space from the top or bottom of there?
> > > >
> > > > oh right, hmm. Then, how about placing the vlpt area starting from
> > > > 2G (between the dom heap and the xen heap), and letting
> > > > map_domain_page map domain pages starting from the VLPT-end address?
> > > >
> > > > For this, I think just changing DOMHEAP_VIRT_START to some other
> > > > address (maybe 0x88000000) would suffice.
> > > >
> > >
> > > I just found out that DOMHEAP_VIRT_START should be aligned to the
> > > first-level size, so 0x88000000 wouldn't work.
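
For reference, a minimal sketch of that alignment constraint, assuming
the 1GB first-level granularity of 32-bit LPAE (the macro names and the
check itself are illustrative, not from the series):

    #define FIRST_SHIFT 30                   /* one first-level LPAE entry maps 1GB */
    #define FIRST_SIZE  (1UL << FIRST_SHIFT) /* 0x40000000 */

    /* 0x88000000 & (FIRST_SIZE - 1) == 0x08000000, i.e. not first-level
     * aligned, so a DOMHEAP_VIRT_START of 0x88000000 would trip this: */
    BUILD_BUG_ON(DOMHEAP_VIRT_START & (FIRST_SIZE - 1));
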
> > > Instead, I'm thinking of using the back of the domheap area and
> > > piggybacking the VLPT onto the domheap region. For this, we would use
> > > something like the following layout:
> > >
> > > #define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
> > > #define VIRT_LIN_P2M_START     _AT(vaddr_t,0xf8000000)
> > > #define VIRT_LIN_P2M_END       _AT(vaddr_t,0xffffffff)
> > > #define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)
> > >
> > > where the VLPT area overlaps the domheap. This is necessary since the
> > > dommap size is decided at boot time from DOMHEAP_VIRT_END -
> > > DOMHEAP_VIRT_START, and at that point we also have to allocate the
> > > dommap for the VLPT.
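
A sketch of the boot-time sizing being referred to (the variable name is
illustrative, not the literal Xen code): because the number of mapping
slots is derived from the virtual range itself, overlaying the VLPT on
the top of the domheap range gets its page tables allocated together
with the domheap's:

    /* Hypothetical shape of the calculation: size the dommap from the
     * virtual range, in units of 1GB first-level slots. */
    unsigned int nr_dommap_slots =
        ((DOMHEAP_VIRT_END - DOMHEAP_VIRT_START) + 1) >> FIRST_SHIFT;
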
> > 
> > I'd prefer to make that explicit at setup time than artificially
> > overlaying the two.
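
An explicit, non-overlapping split might look like the following sketch
(the addresses are illustrative, not from the series, and they ignore
for brevity whether each boundary meets the alignment requirement
mentioned above): the domheap range shrinks and the VLPT gets its own
dedicated window, each mapped separately at setup time:

    #define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
    #define DOMHEAP_VIRT_END       _AT(vaddr_t,0xf7ffffff) /* shrunk domheap */
    #define VIRT_LIN_P2M_START     _AT(vaddr_t,0xf8000000) /* VLPT owns the  */
    #define VIRT_LIN_P2M_END       _AT(vaddr_t,0xffffffff) /* top 128MB      */
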
> > 
> > I was half wondering about providing a function to allocate a chunk of
> > domheap address range which could then be used by this code to get a bit
> > of memory dynamically. I'm not sure about this. One concern is that it
> > might provoke pessimal behaviour from the domheap (which is a hash table,
> > so it would be stealing slots).
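
To illustrate the concern, a simplified sketch of the hash-style slot
selection map_domain_page uses (not the actual Xen implementation; the
helper names are made up): each mfn hashes to a preferred slot and the
mapper probes onward from there, so permanently stealing a block of
slots both shrinks the table and lengthens the probe chains:

    /* Simplified sketch of slot selection for a domheap mapping. */
    unsigned int slot = (mfn >> LPAE_SHIFT) % DOMHEAP_ENTRIES; /* hash */
    unsigned int i;

    for ( i = 0; i < DOMHEAP_ENTRIES; i++ )
    {
        unsigned int s = (slot + i) % DOMHEAP_ENTRIES;  /* linear probe */
        if ( slot_is_free_or_maps(s, mfn) )  /* hypothetical helper */
            return map_slot(s, mfn);         /* hypothetical helper */
    }
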
> 
> I think so. If we are going to provide a function like that, we have to
> think about how to manage a pool of memory regions ready for allocation,
> and how it would interact with 'map_domain_page'. One quick thought is to
> make "DOMHEAP_ENTRIES" a variable and adjust its value as large chunks of
> memory are allocated or freed. But, honestly, I don't want to increase the
> probability of breaking anything.
> 
> I think we would be better off setting it statically at boot time. But, by
> doing so, DOMHEAP_ENTRIES may no longer be a power of 2. I think that is OK,
> but you suggested that DOMHEAP_ENTRIES being a power of 2 is good. Could you
> explain a bit more about this?

I was just thinking that it would make the modular arithmetic simpler
and/or that the hashing algorithm might rely on it somehow. But thinking
again, I don't believe that is actually the case, and anyway keeping to
powers of two would require halving the domheap from 2GB to 1GB, which
is undesirable.
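
Concretely, the simplification in question: with a power-of-two number
of entries, the modulo in the slot hash reduces to a single bit mask
(illustrative, assuming the hashing sketched above):

    slot = mfn % DOMHEAP_ENTRIES;        /* general case: a division   */
    slot = mfn & (DOMHEAP_ENTRIES - 1);  /* power of two: a single AND */
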

Ian.

