Re: [Xen-devel] [PATCH v6 01/10] xen: vnuma topology and subop hypercalls



On Sun, Jul 20, 2014 at 09:16:11AM -0400, Elena Ufimtseva wrote:
[...]
> >> +
> >
> > I have a question about this strategy. Is there any reason to choose to
> > fall back to this one node? In that case the toolstack will have a
> > different view of the guest than the hypervisor. The toolstack still thinks
> > this guest has several nodes while the guest has only one. This can
> > cause problems when migrating a guest. Consider this: the toolstack on the
> > remote end still builds two nodes, given that this is what it
> > knows, then the guest, which originally has one node, notices the change in
> > the underlying memory topology and crashes.
> >
> > IMHO we should just fail in this case. It's not that common to fail a
> > small array allocation anyway. This approach can also save you from
> > writing this function. :-)
> 
> I see and agree )
> 
> Do you mean fail as in not setting any vNUMA for the domain? If yes, it
> sort of contradicts the statement 'every pv domain has at least one
> vnuma node'. Would it be reasonable, on a failed call to
> xc_domain_setvnuma from libxl, to fall back to one node in the
> toolstack as well?
> 

I mean:
1. we should set up one node if the user doesn't specify one (the
   default case)
2. we should fail if the user specifies a vNUMA config but we fail to
   allocate the relevant structures, instead of falling back to a
   one-node setup, because that leads to a discrepancy between the
   hypervisor and the toolstack

By "fail" I mean returning an error code from the hypervisor to the
toolstack saying that domain creation failed.

#1 is consistent with what we already have. It's #2 here that I
proposed.
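
To make that concrete, here is a rough, untested sketch of what I have
in mind for #2 (the function name and parameters are made up for
illustration, not the actual patch code):

#include <xen/errno.h>
#include <xen/xmalloc.h>

/*
 * Sketch only: allocate the vnode distance table for a user-supplied
 * vNUMA config.  On allocation failure, return an error so that
 * domain creation fails, instead of silently falling back to a
 * one-node layout the toolstack doesn't know about.
 */
static int vnuma_alloc_distance(unsigned int nr_vnodes,
                                unsigned int **out)
{
    unsigned int *vdistance;

    vdistance = xmalloc_array(unsigned int, nr_vnodes * nr_vnodes);
    if ( vdistance == NULL )
        return -ENOMEM;   /* propagate the error, no fallback */

    *out = vdistance;
    return 0;
}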

> >
> >> +/*
> >> + * construct vNUMA topology from u_vnuma struct and return
> >> + * it in dst.
> >> + */
> > [...]
> >> +
[...]
> >> +struct vnuma_topology_info {
> >> +    /* IN */
> >> +    domid_t domid;
> >> +    /* IN/OUT */
> >> +    unsigned int nr_vnodes;
> >> +    unsigned int nr_vcpus;
> >> +    /* OUT */
> >> +    union {
> >> +        XEN_GUEST_HANDLE(uint) h;
> >> +        uint64_t pad;
> >> +    } vdistance;
> >> +    union {
> >> +        XEN_GUEST_HANDLE(uint) h;
> >> +        uint64_t pad;
> >> +    } vcpu_to_vnode;
> >> +    union {
> >> +        XEN_GUEST_HANDLE(vmemrange_t) h;
> >> +        uint64_t pad;
> >> +    } vmemrange;
> >
> > Why do you need to use a union? The other interface you introduce in
> > this patch doesn't use a union.
> 
> This one is for making sure the structures are of the same size on
> 32-bit and 64-bit.
> 

I can see other similar occurrences of XEN_GUEST_HANDLE that don't need
padding. Did I miss something?
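
(For reference, my understanding of what the padding buys us; the
sketch below is illustrative only, not the real definitions:)

#include <stdint.h>

/*
 * Illustrative sketch, not the real Xen definitions.  A guest handle
 * is essentially a pointer, so it occupies 4 bytes for a 32-bit guest
 * and 8 bytes for a 64-bit guest.  Wrapping it in a union with a
 * uint64_t pads the field to 8 bytes in both ABIs, so the structure
 * size and field offsets stay identical.
 */
union padded_handle {
    void *h;        /* 4 bytes on 32-bit, 8 bytes on 64-bit */
    uint64_t pad;   /* forces the union to 8 bytes either way */
};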

Wei.
