
Re: [Xen-devel] [XEN][RFC PATCH 03/15] hvm-pci: Handle PCI config space in Xen



>>> On 22.03.12 at 16:59, Julien Grall <julien.grall@xxxxxxxxxx> wrote:
> --- /dev/null
> +++ b/xen/arch/x86/hvm/pci_emul.c
> @@ -0,0 +1,147 @@
> +#include <asm/hvm/support.h>
> +#include <xen/hvm/pci_emul.h>
> +#include <xen/pci.h>
> +#include <xen/sched.h>
> +#include <xen/xmalloc.h>
> +
> +#define PCI_DEBUGSTR "%x:%x.%x"
> +#define PCI_DEBUG(bdf) ((bdf) >> 16) & 0xff, ((bdf) >> 11) & 0x1f, ((bdf) >> 8) & 0x7
> +
> +static int handle_config_space(int dir, uint32_t port, uint32_t bytes,
> +                               uint32_t *val)
> +{
> +    uint32_t pci_cf8;
> +    struct pci_device_emul *pci;
> +    ioreq_t *p = get_ioreq(current);
> +    int rc = X86EMUL_UNHANDLEABLE;
> +    struct vcpu *v = current;
> +
> +    spin_lock(&v->domain->arch.hvm_domain.pci_root.pci_lock);
> +
> +    if (port == 0xcf8)

You need to be considerably more careful here: This code should
handle only 32-bit wide aligned accesses, and you need to make
sure that everything else still gets properly forwarded to qemu
(so that e.g. port CF9 could still be properly emulated if desired).
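A minimal sketch of the check being asked for here (the helper name is hypothetical, not from the patch): only the full 32-bit, aligned access to 0xCF8 latches the config address, and anything narrower or at an offset should be left unhandled so it is forwarded to qemu:

```c
#include <stdint.h>

/* Hypothetical predicate: treat an access as a config-address write only
 * if it is the full 32-bit, aligned access to 0xCF8.  Anything narrower
 * or offset (e.g. a byte write to 0xCF9, the reset control port) should
 * yield X86EMUL_UNHANDLEABLE so it reaches qemu's emulation. */
static int is_cf8_addr_write(uint32_t port, uint32_t bytes)
{
    return port == 0xcf8 && bytes == 4;
}
```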

> +    {
> +        rc = X86EMUL_OKAY;
> +        v->arch.hvm_vcpu.pci_cf8 = *val;
> +        goto end_handle;
> +    }
> +
> +    pci_cf8 = v->arch.hvm_vcpu.pci_cf8;
> +
> +    /* Retrieve PCI */
> +    pci = v->domain->arch.hvm_domain.pci_root.pci;
> +
> +    while (pci && !PCI_CMP_BDF(pci, pci_cf8))
> +        pci = pci->next;

Is there a reasonably low enforced boundary on the number
of devices? Otherwise, a linear lookup would seem overly
simple to me.
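One way to avoid the linear walk would be a small hash table keyed on the BDF; a sketch under assumed names (the structure, bucket count, and functions below are illustrative, not part of the patch):

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical hashed lookup replacing the linear list walk. */
#define PCI_EMUL_NR_BUCKETS 64

struct pci_dev_node {
    uint32_t bdf;                  /* bus/dev/fn key, i.e. CF8 >> 8 */
    struct pci_dev_node *next;     /* per-bucket collision chain */
};

static unsigned int bdf_hash(uint32_t bdf)
{
    return bdf & (PCI_EMUL_NR_BUCKETS - 1);
}

static struct pci_dev_node *
pci_lookup(struct pci_dev_node *table[], uint32_t bdf)
{
    struct pci_dev_node *n;

    /* Each chain only holds devices sharing a bucket, so the walk is
     * bounded by the per-bucket population rather than the device count. */
    for ( n = table[bdf_hash(bdf)]; n != NULL; n = n->next )
        if ( n->bdf == bdf )
            return n;
    return NULL;
}
```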

Further, with how PCI_CMP_BDF() is defined, you're doing the
wrong thing here anyway - bit 31 is required to be set for the
port CFC access to be a config space one. Plus there's an AMD
extension to this interface, so I think other than shifting out
the low 8 bits and checking that the high bit is set, you shouldn't
do any other masking here.
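The decode being suggested can be sketched as follows (helper names are hypothetical): check bit 31 as the enable bit, then shift out only the low 8 register-offset bits, with no further masking so the AMD extension bits survive:

```c
#include <stdint.h>

/* Hypothetical decode per the review: a port CFC access is a config
 * space access only if CF8 bit 31 (the enable bit) is set. */
static int cfc_is_config_access(uint32_t cf8)
{
    return (cf8 & (1u << 31)) != 0;
}

/* Drop only the low 8 bits (register offset); keep bus/dev/fn and any
 * AMD-extension bits intact rather than masking them away. */
static uint32_t cf8_device_key(uint32_t cf8)
{
    return cf8 >> 8;
}
```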

Jan

> +
> +    /* We just fill the ioreq, hvm_send_assist_req will send the request */
> +    if (unlikely(pci == NULL))
> +    {
> +        *val = ~0;
> +        rc = X86EMUL_OKAY;
> +        goto end_handle;
> +    }
> +
> +    p->type = IOREQ_TYPE_PCI_CONFIG;
> +    p->addr = (pci_cf8 & ~3) + (p->addr & 3);
> +
> +    set_ioreq(v, &pci->server->ioreq, p);
> +
> +end_handle:
> +    spin_unlock(&v->domain->arch.hvm_domain.pci_root.pci_lock);
> +    return rc;
> +}


