
Re: [Xen-devel] [PATCH v6 03/11] x86/mmcfg: add handlers for the PVH Dom0 MMCFG areas



>>> On 04.10.17 at 12:21, <roger.pau@xxxxxxxxxx> wrote:
> On Wed, Oct 04, 2017 at 08:31:18AM +0000, Jan Beulich wrote:
>> >>> On 19.09.17 at 17:29, <roger.pau@xxxxxxxxxx> wrote:
>> > +static int vpci_mmcfg_read(struct vcpu *v, unsigned long addr,
>> > +                           unsigned int len, unsigned long *data)
>> > +{
>> > +    struct domain *d = v->domain;
>> > +    const struct hvm_mmcfg *mmcfg;
>> > +    unsigned int reg;
>> > +    pci_sbdf_t sbdf;
>> > +
>> > +    *data = ~0ul;
>> > +
>> > +    read_lock(&d->arch.hvm_domain.mmcfg_lock);
>> > +    mmcfg = vpci_mmcfg_find(d, addr);
>> > +    if ( !mmcfg )
>> > +    {
>> > +        read_unlock(&d->arch.hvm_domain.mmcfg_lock);
>> > +        return X86EMUL_OKAY;
>> > +    }
>> 
>> With the lock dropped between accept() and read() (or write() below),
>> is it really appropriate to return OKAY here? The access should again
>> be forwarded to qemu, I would think.
> 
> That's right, the MCFG area could have been removed in the meantime.
> I guess it is indeed more appropriate to return X86EMUL_UNHANDLEABLE
> or X86EMUL_RETRY.
> 
> It would seem like RETRY is better, since a new call to _accept should
> return false now.

Ah, yes, I'm fine with RETRY.

Jan
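
For illustration, a minimal sketch of what the agreed change to the error
path could look like (the rest of the handler as posted in v6 is assumed
unchanged; the comment wording is only a suggestion):

    mmcfg = vpci_mmcfg_find(d, addr);
    if ( !mmcfg )
    {
        read_unlock(&d->arch.hvm_domain.mmcfg_lock);
        /*
         * The MMCFG area was removed after accept() succeeded and before
         * the lock was re-acquired here.  Returning RETRY causes the
         * emulation to be replayed; accept() will then return false and
         * the access gets forwarded (e.g. to qemu) rather than being
         * silently completed.
         */
        return X86EMUL_RETRY;
    }

The same adjustment would apply to the write handler mentioned above.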

