[Xen-devel] [PATCH 14/24] [xen-unstable.hg] xen-internal API to allow xenstore stubdom to be registered to handle VIRQ_DOM_EXC
I sent these to Keir in an earlier state, and he seemed to agree with the general approach. Here was my explanation:

Xenstored needs to know when domains are dead or dying. When xenstored ran in dom0, the process was:

1) The hypervisor sends VIRQ_DOM_EXC to dom0.
2) xenstored queries the domain info for each of its known domains to figure out which are dead or dying.

Both steps break when xenstored moves into a dedicated domain:

1) The xenstore domain does not receive the event, because it is sent to dom0.
2) Querying domain info is restricted to fully privileged domains.

I've created a new hypercall so that dom0 can delegate the handling of VIRQ_DOM_EXC to the xenstore domain, and I've added an exception allowing the handler of VIRQ_DOM_EXC to query domain info.

01_xen_virq_handler_api        Adds the VIRQ handler functions [1].
02_xen_use_virq_handler_api    Converts most users to the above.
03_xen_virq_handler_hypercall  Adds a new hypercall to set a VIRQ handler.
04_xen_virq_handler_dom_exc    Whitelists VIRQ_DOM_EXC for handling and adds the permissions exception.

[1] There's a new table of global VIRQ handlers, indexed by VIRQ number, whose values are domain struct pointers. When you send a global VIRQ through this mechanism, you give only the VIRQ number, not a domain. If the corresponding domain struct pointer is non-NULL, the event is sent to that domain; otherwise it's sent to dom0. These handlers can't be modified arbitrarily: a NULL entry is locked to dom0, and only changing the handler to the explicit dom0 pointer allows that entry to be modified in the future.
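For illustration, here is how a typical sender would change under 02_xen_use_virq_handler_api (that patch is not included below, so the exact call sites are an assumption; both functions are the API touched by this series). The old form names dom0 explicitly, while the new form consults the handler table and falls back to dom0:

    /* Before: the global VIRQ is hard-wired to dom0. */
    send_guest_global_virq(dom0, VIRQ_DOM_EXC);

    /* After: the event goes to whichever domain was registered via
     * set_global_virq_handler(), or to dom0 if none was set. */
    send_handler_global_virq(VIRQ_DOM_EXC);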
Signed-off-by: Diego Ongaro <diego.ongaro@xxxxxxxxxx>
Signed-off-by: Alex Zeffertt <alex.zeffertt@xxxxxxxxxxxxx>

---

diff -r 62da7b4ee641 xen/common/domain.c
--- a/xen/common/domain.c       Wed Mar 18 12:59:28 2009 +0000
+++ b/xen/common/domain.c       Wed Mar 18 15:00:11 2009 +0000
@@ -560,6 +560,8 @@
 
     arch_domain_destroy(d);
 
+    clear_global_virq_handler(d);
+
     rangeset_domain_destroy(d);
 
     sched_destroy_domain(d);
diff -r 62da7b4ee641 xen/common/event_channel.c
--- a/xen/common/event_channel.c        Wed Mar 18 12:59:28 2009 +0000
+++ b/xen/common/event_channel.c        Wed Mar 18 15:00:11 2009 +0000
@@ -56,6 +56,8 @@
         rc = (_errno);          \
         goto out;               \
     } while ( 0 )
+
+extern struct domain *dom0;
 
 static int evtchn_set_pending(struct vcpu *v, int port);
 
@@ -643,6 +645,92 @@
     return evtchn_set_pending(d->vcpu[chn->notify_vcpu_id], port);
 }
 
+/* NULL => locked to dom0 */
+/* dom0 => unlocked, but send to dom0 currently */
+/* else => unlocked, but send to this domain currently */
+static struct domain* global_virq_handlers[NR_VIRQS];
+
+/* TODO: check use of locking and {get,put}_domain */
+spinlock_t global_virq_handlers_lock = SPIN_LOCK_UNLOCKED;
+
+static struct domain* _get_global_virq_handler(int virq)
+{
+    struct domain *d;
+
+    d = global_virq_handlers[virq];
+    return d != NULL ? d : dom0;
+}
+
+int is_global_virq_handler(struct domain *d, int virq)
+{
+    ASSERT(virq >= 0 && virq < NR_VIRQS);
+    ASSERT(virq_is_global(virq));
+
+    return _get_global_virq_handler(virq) == d;
+}
+
+void send_handler_global_virq(int virq)
+{
+    ASSERT(virq >= 0 && virq < NR_VIRQS);
+    ASSERT(virq_is_global(virq));
+
+    send_guest_global_virq(_get_global_virq_handler(virq), virq);
+}
+
+int set_global_virq_handler(struct domain *d, int virq)
+{
+    struct domain *old;
+    int rc;
+
+    if (virq < 0 || virq >= NR_VIRQS)
+        return -EINVAL;
+    if (!virq_is_global(virq))
+        return -EINVAL;
+
+    if (global_virq_handlers[virq] == d)
+        return 0;
+
+    if (unlikely(!get_domain(d)))
+        return -EINVAL;
+
+    spin_lock(&global_virq_handlers_lock);
+
+    old = global_virq_handlers[virq];
+    if (d != dom0 && old == NULL) {
+        rc = -EINVAL;
+        put_domain(d);
+    } else {
+        if (old != NULL) {
+            global_virq_handlers[virq] = NULL;
+            put_domain(old);
+        }
+        global_virq_handlers[virq] = d;
+        rc = 0;
+    }
+
+    spin_unlock(&global_virq_handlers_lock);
+
+    return rc;
+}
+
+void clear_global_virq_handler(struct domain *d)
+{
+    int virq;
+
+    if (d == dom0)
+        return;
+
+    spin_lock(&global_virq_handlers_lock);
+
+    for (virq = 0; virq < NR_VIRQS; virq++) {
+        if (unlikely(global_virq_handlers[virq] == d)) {
+            global_virq_handlers[virq] = dom0;
+            put_domain(d);
+        }
+    }
+
+    spin_unlock(&global_virq_handlers_lock);
+}
 
 static long evtchn_status(evtchn_status_t *status)
 {
diff -r 62da7b4ee641 xen/include/xen/event.h
--- a/xen/include/xen/event.h   Wed Mar 18 12:59:28 2009 +0000
+++ b/xen/include/xen/event.h   Wed Mar 18 15:00:11 2009 +0000
@@ -29,6 +29,36 @@
  *  @virq:     Virtual IRQ number (VIRQ_*)
  */
 void send_guest_global_virq(struct domain *d, int virq);
+
+/* TODO: check comment formatting style: */
+
+/*
+ * send_handler_global_virq: Notify handler via a global VIRQ.
+ *  @virq:     Virtual IRQ number (VIRQ_*), must be global
+ * The event will be sent to the explicitly set handler (see
+ * set_global_virq_handler), or dom0.
+ */
+void send_handler_global_virq(int virq);
+
+/*
+ * set_global_virq_handler: Set a global VIRQ handler.
+ *  @d:        Domain to which future virtual IRQs should be sent
+ *  @virq:     Virtual IRQ number (VIRQ_*), must be global
+ */
+int set_global_virq_handler(struct domain *d, int virq);
+
+/*
+ * clear_global_virq_handler: Remove this handler from all global VIRQs.
+ *  @d:        Domain to no longer handle global virtual IRQs
+ */
+void clear_global_virq_handler(struct domain *d);
+
+/*
+ * is_global_virq_handler: Query if domain is the handler for this global VIRQ.
+ *  @d:        Domain which may be the handler
+ *  @virq:     Virtual IRQ number (VIRQ_*), must be global
+ */
+int is_global_virq_handler(struct domain *d, int virq);
 
 /*
  * send_guest_pirq:
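To make the three table states in [1] concrete, here is a minimal, standalone C model of the locking rule. This is not Xen code: plain strings stand in for struct domain pointers, and the spinlock and get/put_domain refcounting of the real patch are deliberately omitted.

    #include <stdio.h>
    #include <stddef.h>

    #define NR_VIRQS 24                     /* arbitrary size for the model */

    typedef const char *domref;             /* stand-in for struct domain * */

    static domref dom0 = "dom0";
    static domref handlers[NR_VIRQS];       /* NULL => locked to dom0 */

    static domref get_handler(int virq)
    {
        /* NULL and the explicit dom0 entry both deliver to dom0. */
        return handlers[virq] != NULL ? handlers[virq] : dom0;
    }

    static int set_handler(domref d, int virq)
    {
        /* While the slot is NULL (locked), only dom0 may be installed;
         * installing dom0 explicitly unlocks the slot for later changes. */
        if (d != dom0 && handlers[virq] == NULL)
            return -1;
        handlers[virq] = d;
        return 0;
    }

    int main(void)
    {
        int virq = 3;   /* arbitrary slot, standing in for VIRQ_DOM_EXC */

        printf("%s\n", get_handler(virq));              /* "dom0": locked default */
        printf("%d\n", set_handler("xenstore", virq));  /* -1: slot still locked  */
        printf("%d\n", set_handler(dom0, virq));        /* 0: dom0 unlocks slot   */
        printf("%d\n", set_handler("xenstore", virq));  /* 0: delegation allowed  */
        printf("%s\n", get_handler(virq));              /* "xenstore"             */
        return 0;
    }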
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel