
Re: [Xen-devel] [PATCH v4 3/8] ioreq-server: create basic ioreq server abstraction.



>>> On 08.04.14 at 11:32, <Paul.Durrant@xxxxxxxxxx> wrote:
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> >>> On 02.04.14 at 17:11, <paul.durrant@xxxxxxxxxx> wrote:
>> > +
>> > +    rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
>> > +    if ( rc < 0 )
>> > +        goto fail2;
>> > +
>> > +    sv->ioreq_evtchn = rc;
>> > +
>> > +    if ( v->vcpu_id == 0 )
>> > +    {
>> 
>> I generally dislike needless dependencies on vCPU 0 being the first
>> one to make it into any specific function. Can't you check emptiness
>> of s->ioreq_vcpu_list instead?
>> 
> 
> That will need a bit more code. I could create the buffered channel on first 
> vCPU addition, but I'd then need to track which vCPU that was and re-plumb the 
> event channel if that vCPU disappears. Also, the default server has always 
> bound the buffered channel to vCPU 0, so wouldn't I risk a compatibility issue 
> by changing this?

Hmm, good point. Although I still wonder what would happen if vCPU 0
went away.

But yes, considering that this is code effectively getting moved here
(i.e. it already special-cased vCPU 0 before), I guess I withdraw
my change request for now.
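
(For the record, what I had in mind would have looked roughly like
the sketch below, keying the one-time setup off list emptiness rather
than the vCPU ID. It's a sketch only: the bufioreq_evtchn and
list_entry fields and the fail3 label are assumptions, not taken from
the quoted context.

    /* First vCPU added to this server, whichever one it happens to be. */
    if ( list_empty(&s->ioreq_vcpu_list) )
    {
        rc = alloc_unbound_xen_event_channel(v, s->domid, NULL);
        if ( rc < 0 )
            goto fail3;

        s->bufioreq_evtchn = rc;
    }

    list_add(&sv->list_entry, &s->ioreq_vcpu_list);

But as said, no need to rework this now.)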

>> > +static int hvm_set_ioreq_pfn(struct domain *d, bool_t buf,
>> > +                             unsigned long pfn)
>> > +{
>> > +    struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
>> > +    struct hvm_ioreq_page *iorp = buf ? &s->bufioreq : &s->ioreq;
>> > +    int rc;
>> > +
>> > +    spin_lock(&s->lock);
>> > +
>> > +    rc = hvm_map_ioreq_page(d, iorp, pfn);
>> 
>> While I realize that at this point there's only one server per domain,
>> the locking and operation still look to be out of sync at first
>> glance: as this can't remain that way till the end of the series anyway,
>> can't this be brought back in sync right away (whether that's by
>> passing s instead of d into the function, or by acquiring the lock only
>> after the call, I don't know offhand)?
> 
> I don't follow what you mean by 'out of sync' here: The lock is taken, the 
> pfn is mapped, the lock is dropped. What am I missing?

You lock "s" but operate on "d".
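
I.e. something along these lines is what I'd consider in sync - a
sketch only, assuming hvm_map_ioreq_page() were reworked to take the
server (plus the buf flag) rather than the domain, which is an
assumption on my part, not something the patch does:

    static int hvm_set_ioreq_pfn(struct domain *d, bool_t buf,
                                 unsigned long pfn)
    {
        struct hvm_ioreq_server *s = d->arch.hvm_domain.ioreq_server;
        int rc;

        spin_lock(&s->lock);

        /* The lock taken above now guards the object being operated on. */
        rc = hvm_map_ioreq_page(s, buf, pfn);

        spin_unlock(&s->lock);

        return rc;
    }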

>> > @@ -1339,30 +1549,10 @@ int hvm_vcpu_initialise(struct vcpu *v)
>> >           && (rc = nestedhvm_vcpu_initialise(v)) < 0 ) /* teardown: nestedhvm_vcpu_destroy */
>> >          goto fail5;
>> >
>> > -    dm_domid = d->arch.hvm_domain.params[HVM_PARAM_DM_DOMAIN];
>> > -
>> > -    /* Create ioreq event channel. */
>> > -    rc = alloc_unbound_xen_event_channel(v, dm_domid, NULL); /* teardown: none */
>> > -    if ( rc < 0 )
>> > +    rc = hvm_ioreq_server_add_vcpu(s, v);
>> > +    if ( rc != 0 )
>> 
>> Can this really be > 0 now, and if so is this being handled correctly in
>> the caller(s)?
> 
> It's not the same function being called. hvm_ioreq_server_add_vcpu() can 
> only return 0 or a negative errno.

Oh, sorry, didn't pay close enough attention.

> I can test for < 0, but IIRC you objected to that in a review of a previous 
> patch when the function never returns > 0. Would you prefer 'if ( rc )'?

Yes, "if ( rc )" would seem better, but with the called function changing
it doesn't really matter all that much.
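
For completeness, that would make the hunk:

    rc = hvm_ioreq_server_add_vcpu(s, v);
    if ( rc )   /* only 0 or a negative errno can come back */
        goto fail6;

(the fail6 label is a placeholder, not taken from the patch).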

Jan

