
Re: [Xen-devel] [PATCH xen v2] xen: arm: fully implement multicall interface.



On Mon, 2014-04-07 at 16:13 +0100, Jan Beulich wrote:
> >>> On 07.04.14 at 15:42, <Ian.Campbell@xxxxxxxxxx> wrote:
> > On Mon, 2014-04-07 at 14:27 +0100, Jan Beulich wrote:
> >> >>> On 07.04.14 at 14:48, <ian.campbell@xxxxxxxxxx> wrote:
> >> > +static void check_multicall_32bit_clean(struct multicall_entry *multi)
> >> > +{
> >> > +    int i;
> >> > +
> >> > +    for ( i = 0; i < arm_hypercall_table[multi->op].nr_args; i++ )
> >> > +    {
> >> > +        if ( unlikely(multi->args[i] & 0xffffffff00000000ULL) )
> >> > +        {
> >> > +            printk("%pv: multicall argument %d is not 32-bit clean %"PRIx64"\n",
> >> > +                   current, i, multi->args[i]);
> >> > +            domain_crash_synchronous();
> >> 
> >> On x86 we actually decided quite a long while ago to try to avoid
> >> domain_crash_synchronous() whenever possible.

I meant to ask why this was?

I'm supposing that the for ( ; ; ) do_softirq() loop is not considered
very nice...
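
AFAICT the whole thing boils down to roughly this (paraphrasing the
common code from memory, so the details may be off):

    void domain_crash_synchronous(void)
    {
        /* Mark the domain as crashed... */
        __domain_crash(current->domain);

        /* ...then never return: spin servicing softirqs until the
         * scheduler softirq takes this vCPU off the CPU for good. */
        for ( ; ; )
            do_softirq();
    }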

> >> Would domain_crash(current->domain) not work for you here?
> > 
> > I wondered about that and concluded that I didn't really grok the
> > difference so I went with what I thought was the safer option. I'd be
> > happy to change if that's considered the right thing to do.
> > 
> > My concern was that domain_crash returns and I wasn't sure what I was
> > then supposed to do (obviously not actually run the call though) or when
> > the domain was actually guaranteed to die. In particular I might get
> > more calls to multicall_call for the rest of the batch. I'm not sure
> > that matters other than the possibility that skipping one call in the
> > middle might lead to confusing log spam from the remaining calls.
> 
> You'd want to return some error indication, avoid anything else being
> done that might cause confusion on a dead domain (read: abort the
> entire multicall),

Hrm, that will involve frobbing around with the common do_multicall code
since it currently doesn't consider the possibility of do_multicall_call
failing in a fatal way.
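
Something like the below, I suppose (completely untested sketch; the
is_shutting_down test is just my guess at how "the call crashed the
domain" would be detected):

    for ( i = 0; !rc && i < nr_calls; i++ )
    {
        /* ... existing preemption check and copy-in ... */

        do_multicall_call(&mcs->call);

        /* New: abandon the rest of the batch if this call crashed
         * the domain, rather than generating confusing log spam
         * from the remaining calls. */
        if ( unlikely(current->domain->is_shutting_down) )
        {
            rc = -EFAULT;
            break;
        }

        /* ... existing copy-back of the result ... */
    }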

>  and on the hypercall exit path the vCPU would be taken off
> the scheduler, i.e. you run your normal call tree to completion and
> you're guaranteed that the vCPU in question won't make it back into
> guest context.

What about other vcpus? I suppose they get nobbled as and when they
happen to enter the hypervisor?
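
i.e. I imagine something along these lines on the exit path (my
paraphrase only; vcpu_pause_self() is a stand-in name for whatever
actually does the descheduling):

    void hypercall_exit_path(struct vcpu *v)
    {
        if ( unlikely(v->domain->is_shutting_down) )
            vcpu_pause_self(v);   /* does not return to guest context */

        /* ... otherwise restore guest state and resume the guest ... */
    }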

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

