Re: [Xen-devel] [PATCH 4/4] xen/arm: clean and invalidate all guest caches by VMID after domain build.
On Wed, 2014-01-29 at 13:00 +0000, Jan Beulich wrote:
> >>> On 29.01.14 at 13:11, Ian Campbell <ian.campbell@xxxxxxxxxx> wrote:
> > --- a/tools/libxc/xc_domain.c
> > +++ b/tools/libxc/xc_domain.c
> > @@ -48,6 +48,14 @@ int xc_domain_create(xc_interface *xch,
> >      return 0;
> >  }
> >
> > +int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
> > +{
> > +    DECLARE_DOMCTL;
> > +    domctl.cmd = XEN_DOMCTL_cacheflush;
> > +    domctl.domain = (domid_t)domid;
>
> Why can't the function parameter be domid_t right away?
It seemed that the vast majority of the current libxc functions were
using uint32_t for whatever reason.
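
For reference, a minimal sketch of what the complete wrapper might look like
(the quoted hunk is cut short above; do_domctl() is the usual libxc dispatch
helper, and starting the flush at mfn 0 is an assumption of this sketch, not
something taken from the patch):

    /* Sketch only: issue the cacheflush domctl for the whole domain,
     * assuming the hypervisor walks forward from start_mfn. */
    int xc_domain_cacheflush(xc_interface *xch, uint32_t domid)
    {
        DECLARE_DOMCTL;

        domctl.cmd = XEN_DOMCTL_cacheflush;
        domctl.domain = (domid_t)domid;
        domctl.u.cacheflush.start_mfn = 0;

        return do_domctl(xch, &domctl);
    }
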
>
> > +    case XEN_DOMCTL_cacheflush:
> > +    {
> > +        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);
> > +        if ( __copy_to_guest(u_domctl, domctl, 1) )
>
> While you certainly say so in the public header change, I think you
> recall that we pretty recently changed another hypercall to not be
> the only inconsistent one modifying the input structure in order to
> handle hypercall preemption.
That was a XENMEM op though IIRC -- is the same requirement also true of
domctls?
How/where would you recommend saving the progress here?
>
> Further - who's responsible for initiating the resume after a
> preemption? p2m_cache_flush() returning -EAGAIN isn't being
> handled here, and also not in libxc (which would be awkward
> anyway).
I've once again fallen into the trap of thinking the common domctl code
would do it for me.
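
One way the hypervisor side could drive the resume itself, rather than
relying on libxc, would be to turn the preemption into a continuation after
copying the progress back (sketch only; hypercall_create_continuation() is
the existing helper for re-issuing a hypercall, but whether -EAGAIN is the
right preemption signal here is an assumption of this sketch):

    /* Hypothetical arch_do_domctl() fragment: write the updated
     * start_mfn back to the caller's buffer first, so the re-executed
     * domctl picks up where the flush stopped, then create a
     * continuation instead of surfacing -EAGAIN. */
    case XEN_DOMCTL_cacheflush:
    {
        long rc = p2m_cache_flush(d, &domctl->u.cacheflush.start_mfn);

        if ( __copy_to_guest(u_domctl, domctl, 1) )
            rc = -EFAULT;
        else if ( rc == -EAGAIN )
            rc = hypercall_create_continuation(__HYPERVISOR_domctl,
                                               "h", u_domctl);

        return rc;
    }
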
>
> > +static void do_one_cacheflush(paddr_t mfn)
> > +{
> > +    void *v = map_domain_page(mfn);
> > +
> > +    flush_xen_dcache_va_range(v, PAGE_SIZE);
> > +
> > +    unmap_domain_page(v);
> > +}
>
> Sort of odd that you have to map a page in order to flush cache
> (which I very much hope is physically indexed, or else this
> operation wouldn't have the intended effect anyway). Can this
> not be done based on the machine address?
Sadly not, and yes, it is very annoying.
The cache is required to be physically indexed from ARMv7 onwards.
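
For illustration, flushing a range of frames therefore means mapping each one
in turn and operating on the virtual alias (sketch only; the real patch walks
the p2m and preempts periodically, and the helper name below is made up):

    /* Hypothetical: clean/invalidate a contiguous range of MFNs by
     * mapping each frame and flushing its alias, since there is no
     * usable flush-by-physical-address primitive available here. */
    static void cacheflush_mfn_range(paddr_t start_mfn, paddr_t end_mfn)
    {
        paddr_t mfn;

        for ( mfn = start_mfn; mfn < end_mfn; mfn++ )
        {
            void *v = map_domain_page(mfn);

            flush_xen_dcache_va_range(v, PAGE_SIZE);

            unmap_domain_page(v);
        }
    }
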
>
> >          /* Preempt every 2MiB (mapped) or 32 MiB (unmapped) - arbitrary */
> > -        if ( op == RELINQUISH && count >= 0x2000 )
> > +        switch ( op )
> >          {
> > -            if ( hypercall_preempt_check() )
> > +        case RELINQUISH:
> > +        case CACHEFLUSH:
> > +            if (count >= 0x2000 && hypercall_preempt_check() )
> >              {
> >                  p2m->lowest_mapped_gfn = addr >> PAGE_SHIFT;
> >                  rc = -EAGAIN;
> >                  goto out;
> >              }
> >              count = 0;
> > +            break;
> > +        case INSERT:
> > +        case ALLOCATE:
> > +        case REMOVE:
> > +            /* No preemption */
> > +            break;
> >          }
>
> Unrelated to the patch here, but don't you have a problem if you
> don't preempt _at all_ here for certain operation types? Or is a
> limit on the number of iterations being enforced elsewhere for
> those?
Good question.
The tools/guest accessible paths here are through
guest_physmap_add/remove_page. I think the only paths which are exposed
that pass a non-zero order are XENMEM_populate_physmap and
XENMEM_exchange, both of which restrict the maximum order.
I don't think those guest_physmap_* are preemptible on x86 either?
It's possible that we should nevertheless handle preemption on those
code paths as well, but I don't think it is critical right now (or at
least not critical enough to warrant a freeze exception for 4.4).
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel