Re: [Xen-devel] [Draft E] Xen on ARM vITS Handling
Hi Ian,
This draft looks good to me. I have only a few comments (see below).
On 09/06/2015 11:42, Ian Campbell wrote:
## LPI Configuration Table Virtualisation
A `vLPI` cannot in general be associated with a specific
`pLPI`. Therefore there is no virtualisation to be done here.
Instead a lookup is made into the table each time a `vLPI` is to be
delivered.
define get_vlpi_cfg(domain, vlpi, uint8_t *cfg):
offset = vlpi*sizeof(uint8_t)
    if offset >= <LPI CFG Table size>: error
lpi_entry = <LPI CFG IPA> + offset
page = p2m_lookup(domain, lpi_entry, p2m_ram)
Note that we may want to use get_page_from_gfn in order to get a
reference to the page and avoid the guest removing the page under our feet.
if !page: error
/* nb: non-RAM pages, e.g. grant mappings,
* are rejected by this lookup */
lpi_mapping = map_domain_page(page)
*cfg = lpi_mapping[<appropriate page offset>];
unmap_domain_page(lpi_mapping)
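The lookup above could be sketched in C roughly as follows. This is only an illustrative sketch operating on an already-mapped copy of the guest's table (the p2m lookup and domain-page mapping are elided); the `struct vlpi_cfg` type and the function name are hypothetical, not Xen's actual code. The entry layout follows the GICv3 LPI Configuration Table: one byte per LPI, bit[0] the enable bit, bits[7:2] the priority.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical decoded form of one LPI Configuration Table entry. */
struct vlpi_cfg {
    uint8_t priority;   /* bits [7:2] of the entry */
    int enabled;        /* bit [0] of the entry */
};

/*
 * Sketch of get_vlpi_cfg(): bounds-check the vLPI against the table
 * size, then decode the one-byte entry.  Returns 0 on success,
 * -1 if the vLPI is out of range.
 */
static int get_vlpi_cfg(const uint8_t *table, size_t table_size,
                        uint32_t vlpi, struct vlpi_cfg *cfg)
{
    size_t offset = (size_t)vlpi * sizeof(uint8_t);

    if ( offset >= table_size )
        return -1;                      /* out of range: error */

    cfg->enabled  = table[offset] & 0x01;
    cfg->priority = table[offset] & 0xfc;
    return 0;
}
```

Note that because the guest can update the table at any time, the value read here is only a snapshot; the reference-counting concern raised above (get_page_from_gfn) still applies to the real lookup.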
Note that physical interrupts are always configured with a priority of
`GIC_PRI_IRQ`, regardless of the priority of any virtual interrupt.
### Enabling and disabling LPIs
Since there is no 1:1 link between a `vLPI` and a `pLPI`, enabling and
disabling of physical LPIs cannot be driven from the state of an
associated vLPI.
Each `pLPI` is routed and enabled dureing device assignment, therefore
s/dureing/during/
[..]
## Virtual LPI injection
As discussed above the `vgic_vcpu_inject_irq` functionality will need
to be extended to cover this new case, most likely via a new
`vgic_vcpu_inject_lpi` frontend function. `vgic_vcpu_inject_irq` will
also require some refactoring to allow the priority to be passed in
from the caller (since `LPI` priority comes from the `LPI` CFG table,
while `SPI` and `PPI` priority is configured via other means).
`vgic_vcpu_inject_lpi` receives a `struct domain *` and a virtual
interrupt number (corresponding to a vLPI) and needs to figure out
which vcpu this should map to.
To do this it must look up the Collection ID associated (via the vITS)
with that LPI.
Proposal: Add a new `its_device` field to `struct irq_guest`, a
pointer to the associated `struct its_device`. The existing `struct
irq_guest.virq` field contains the event ID (perhaps use a `union`
to give a more appropriate name) and _not_ the virtual LPI. Injection
then consists of:
d = irq_guest->domain
virq = irq_guest->virq
its_device = irq_guest->its_device
get_vitt_entry(d, its_device->virt_device_id, virq, &vitt)
vcpu = d->its_collections[vitt.collection]
if !is_valid_lpi(vitt.vlpi): error
get_vlpi_cfg(d, vitt.vlpi, &cfg)
if !cfg.enabled: ignore
Why? If you ignore it, it will no longer be possible to inject the same
LPI to the guest once it is re-enabled.
This is a valid use case and can happen if you decouple the pLPI
configuration and the vLPI configuration, or because the value has not
yet been replicated. AFAIU, you are using the latter in this spec.
vgic_vcpu_inject_irq(&d->vcpus[vcpu], vitt.vlpi, cfg.priority)
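The injection steps above could be sketched in C as below. All names here (`vitt_entry`, `its_collections`, `lpi_to_vcpu`, and the demo domain layout) are illustrative assumptions, not Xen's actual structures; the point is only the chain of lookups: event ID -> vITT entry -> collection -> vcpu, with validity and enable checks along the way.

```c
#include <stdint.h>
#include <stddef.h>

#define INVALID_VLPI 0   /* assumed sentinel for an unmapped vITT slot */

/* Hypothetical vITT entry: maps an event ID to a vLPI and collection. */
struct vitt_entry {
    uint32_t vlpi;        /* virtual LPI number */
    uint16_t collection;  /* collection ID, indexes its_collections */
};

struct vcpu { int id; };

/* Cut-down stand-in for struct domain, for illustration only. */
struct demo_domain {
    struct vitt_entry vitt[8];  /* indexed by event ID in this demo */
    int its_collections[4];     /* collection ID -> vcpu index */
    struct vcpu vcpus[2];
};

/*
 * Sketch of the target-vcpu resolution: look up the vITT entry for
 * the event, reject invalid or disabled vLPIs, then map the
 * collection to a vcpu.  Returns NULL when injection should not
 * proceed.
 */
static struct vcpu *lpi_to_vcpu(struct demo_domain *d, unsigned int virq,
                                int cfg_enabled)
{
    struct vitt_entry *vitt = &d->vitt[virq];

    if ( vitt->vlpi == INVALID_VLPI )
        return NULL;            /* not a valid vLPI: error */
    if ( !cfg_enabled )
        return NULL;            /* disabled in the LPI CFG table */

    return &d->vcpus[d->its_collections[vitt->collection]];
}
```

In the real code the enable bit would come from the `get_vlpi_cfg` lookup described earlier, and the returned vcpu would be passed to `vgic_vcpu_inject_irq` together with the vLPI and its priority.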
Regards,
--
Julien Grall
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel