
Re: [Xen-devel] [PATCH v5 19/22] hvm/params: Add a new delivery type for event-channel in HVM_PARAM_CALLBACK_IRQ



On Fri, Mar 04, 2016 at 02:15:49PM +0800, Shannon Zhao wrote:
> From: Shannon Zhao <shannon.zhao@xxxxxxxxxx>
> 
> Add a new delivery type:
> val[63:56] == 3: val[15:8] is flag: val[7:0] is a PPI.
> To the flag, bit 8 stands the interrupt mode is edge(1) or level(0) and
> bit 9 stands the interrupt polarity is active low(1) or high(0).
> 
> Cc: Jan Beulich <jbeulich@xxxxxxxx>
> Signed-off-by: Shannon Zhao <shannon.zhao@xxxxxxxxxx>
> ---
> v5: fix the statement
> ---
>  xen/include/public/hvm/params.h | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/xen/include/public/hvm/params.h b/xen/include/public/hvm/params.h
> index 73d4718..2ea8d62 100644
> --- a/xen/include/public/hvm/params.h
> +++ b/xen/include/public/hvm/params.h
> @@ -55,6 +55,16 @@
>   * if this delivery method is available.
>   */
>  
> +#define HVM_PARAM_CALLBACK_TYPE_EVENT    3
> +/*
> + * val[55:16] need to be zero.
> + * val[15:8] is flag of event-channel interrupt:
> + *  bit 8: interrupt is edge(1) or level(0) triggered
> + *  bit 9: interrupt is active low(1) or high(0)
> + * val[7:0] is PPI number used by event-channel.
> + * This is only used by ARM/ARM64.
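
To make the proposed layout concrete, here is a small sketch of how a toolstack might build such a value; the helper name and macros below are mine for illustration, not part of the patch:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative encoding of the proposed HVM_PARAM_CALLBACK_IRQ layout:
 *   val[63:56] = 3 (HVM_PARAM_CALLBACK_TYPE_EVENT)
 *   val[55:16] = must be zero
 *   val[15:8]  = flags (bit 8: edge(1)/level(0), bit 9: active low(1)/high(0))
 *   val[7:0]   = PPI number
 * The helper and macro names are made up for this sketch. */
#define CALLBACK_TYPE_EVENT   3ULL
#define CALLBACK_EVENT_EDGE   (1ULL << 8)   /* bit 8: edge(1) / level(0)     */
#define CALLBACK_EVENT_LOW    (1ULL << 9)   /* bit 9: active low(1) / high(0) */

static uint64_t encode_callback_event(uint8_t ppi, int edge, int active_low)
{
    uint64_t val = CALLBACK_TYPE_EVENT << 56;   /* val[63:56] = 3 */

    if (edge)
        val |= CALLBACK_EVENT_EDGE;
    if (active_low)
        val |= CALLBACK_EVENT_LOW;

    return val | ppi;           /* val[7:0] = PPI; val[55:16] stays zero */
}
```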


How do you deal with masking and EOI?

One of the things that I have realized when looking at vAPIC,
HVMOP_set_evtchn_upcall_vector, and this vector callback is that this
area is a bit ... wild west.

On x86, when we inject the vector, Linux (and FreeBSD) call right into
the event channel code. Both of them clear evtchn_upcall_pending right
away but do not set evtchn_upcall_mask.
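
A simplified model of that guest-side upcall path (field names follow the public vcpu_info ABI; the handler body is a sketch, not the real Linux/FreeBSD code):

```c
#include <assert.h>
#include <stdint.h>

/* Model of the guest upcall handler described above: it clears
 * evtchn_upcall_pending on entry but leaves evtchn_upcall_mask alone,
 * so nothing stops the hypervisor from asserting the callback again
 * while the handler is still scanning the selector words. */
struct vcpu_info_model {
    uint8_t  evtchn_upcall_pending;
    uint8_t  evtchn_upcall_mask;
    uint64_t evtchn_pending_sel;
};

static void upcall_model(struct vcpu_info_model *vi)
{
    vi->evtchn_upcall_pending = 0;   /* cleared right away ...            */
    /* ... but evtchn_upcall_mask is never set, leaving the window open   */
    /* for another injection while we walk the pending selector.          */
    while (vi->evtchn_pending_sel) {
        vi->evtchn_pending_sel = 0;  /* would scan per-word port bits here */
    }
}
```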

That means if the hypervisor is told to set another event (via
EVTCHNOP_send) that is, say, more than 64 bits away from the prior one
(so that the check for evtchn_pending_sel succeeds), it can do so (see
evtchn_2l_set_pending and vcpu_mark_events_pending).
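
A rough model of that evtchn_2l_set_pending / vcpu_mark_events_pending pair (a sketch of the logic, not the actual Xen code): a port only triggers a fresh injection when both its selector-word bit and evtchn_upcall_pending were previously clear, so once the guest clears evtchn_upcall_pending, an event in a different 64-port word re-triggers the callback.

```c
#include <assert.h>
#include <stdint.h>

/* Hypervisor-side sketch: pending_sel stands in for evtchn_pending_sel
 * and upcall_pending for evtchn_upcall_pending in the shared vcpu_info. */
struct vcpu_model {
    uint64_t pending_sel;
    uint8_t  upcall_pending;
};

/* Returns 1 when a fresh callback would be injected for 'port'. */
static int set_pending_model(struct vcpu_model *v, unsigned int port)
{
    uint64_t bit = 1ULL << (port / 64);

    if (v->pending_sel & bit)
        return 0;             /* selector bit already set: nothing new */
    v->pending_sel |= bit;
    if (v->upcall_pending)
        return 0;             /* an upcall is already pending           */
    v->upcall_pending = 1;
    return 1;                 /* would IPI the vCPU and inject again    */
}
```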

And then we go on to send an IPI, causing the guest a VMEXIT and a
reinjection of the vector callback. ...

I think if the events are spaced just right you can repeat this
injection up to 16 times. Granted, the 16th callback will work just
fine, process all the events, and then EOI all of them; the other 15
will simply exit early since evtchn_pending_sel will be zero.

But it is a nuisance.

Anyhow, what I am wondering is whether there are some semantics for a
PPI that allow it to 'mask' the vector until the handler exits, or some
such? If so, you should document that.

[Yes, I am going to send a patch for the above 'interesting' behavior].

>  /*
>   * These are not used by Xen. They are here for convenience of HVM-guest
>   * xenbus implementations.
> -- 
> 2.0.4
> 
> 
> 
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxx
> http://lists.xen.org/xen-devel
