
Re: [Xen-devel] [PATCH v5 6/9] ioreq-server: add support for multiple servers



> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 07 May 2014 13:23
> To: Paul Durrant
> Cc: Ian Campbell; Ian Jackson; Stefano Stabellini; xen-devel@xxxxxxxxxxxxx
> Subject: RE: [PATCH v5 6/9] ioreq-server: add support for multiple servers
> 
> >>> On 07.05.14 at 14:06, <Paul.Durrant@xxxxxxxxxx> wrote:
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> >>> On 01.05.14 at 14:08, <paul.durrant@xxxxxxxxxx> wrote:
> >> > +static int hvm_access_cf8(
> >> > +    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> >> > +{
> >> > +    struct vcpu *curr = current;
> >> > +    struct hvm_domain *hd = &curr->domain->arch.hvm_domain;
> >> > +    int rc;
> >> > +
> >> > +    BUG_ON(port < 0xcf8);
> >> > +    port -= 0xcf8;
> >> > +
> >> > +    spin_lock(&hd->pci_lock);
> >>
> >> Is there really any danger in not having this lock at all? On real
> >> hardware, if the OS doesn't properly serialize accesses, the
> >> result is going to be undefined too. All I think you need to make
> >> sure is that ->pci_cf8 never gets updated non-atomically.
> >>
> >
> > I could remove it, but is it doing any harm?
> 
> Any spin lock is harmful to performance.
> 

Ok, I'll remove it.

> >> > +static int hvm_access_cfc(
> >> > +    int dir, uint32_t port, uint32_t bytes, uint32_t *val)
> >> > +{
> >> > +    struct vcpu *curr = current;
> >> > +    struct hvm_domain *hd = &curr->domain->arch.hvm_domain;
> >> > +    int rc;
> >> > +
> >> > +    BUG_ON(port < 0xcfc);
> >> > +    port -= 0xcfc;
> >> > +
> >> > +    spin_lock(&hd->pci_lock);
> >> > +
> >> > +    if ( hd->pci_cf8 & (1 << 31) ) {
> >> > +        /* Fall through to an emulator */
> >> > +        rc = X86EMUL_UNHANDLEABLE;
> >> > +    } else {
> >> > +        /* Config access disabled */
> >>
> >> Why does this not also get passed through to an emulator?
> >>
> >
> > I was trying to be consistent with QEMU here. It squashes any data
> > accesses if cf8 has the top (enable) bit clear.
> 
> But afaict with that dropped the entire function can go away.
> 

Yes, it can. Do you not think it would be a good idea to be consistent with 
QEMU though?

> >> > +void hvm_broadcast_assist_req(ioreq_t *p)
> >> > +{
> >> > +    struct vcpu *v = current;
> >> > +    struct domain *d = v->domain;
> >> > +    struct hvm_ioreq_server *s;
> >> > +
> >> > +    list_for_each_entry ( s,
> >> > +                          &d->arch.hvm_domain.ioreq_server_list,
> >> > +                          list_entry )
> >> > +        (void) hvm_send_assist_req_to_ioreq_server(s, v, p);
> >> > +}
> >>
> >> Is there possibly any way to make sure only operations not having
> >> any results and not affecting guest visible state changes can make
> >> it here?
> >>
> >
> > Well, I could whitelist the IOREQ type(s) here I guess.
> 
> By way of ASSERT() perhaps then...

Yes.

> 
> >> >  struct hvm_domain {
> >> > -    struct hvm_ioreq_server *ioreq_server;
> >> > +    /* Guest page range used for non-default ioreq servers */
> >> > +    unsigned long           ioreq_gmfn_base;
> >> > +    unsigned long           ioreq_gmfn_mask;
> >> > +    unsigned int            ioreq_gmfn_count;
> >> > +
> >> > +    /* Lock protects all other values in the following block */
> >> >      spinlock_t              ioreq_server_lock;
> >> > +    ioservid_t              ioreq_server_id;
> >> > +    unsigned int            ioreq_server_count;
> >> > +    struct list_head        ioreq_server_list;
> >>
> >> Rather than using all these redundant prefixes, could I talk you into
> >> using sub-structures instead:
> >
> > Ok, if you prefer that style.
> 
> I'm not insisting here since I know some others are of different
> opinion. But doing it that way may potentially allow passing the
> address to just a sub-structure to functions, in turn making those
> functions more legible. But as said - if you're not convinced, leave
> it as is.
> 

Ok, I'll see how it looks.

  Paul

> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
