
Re: [Xen-devel] LWP Interrupt Handler



Tested and it worked. Thanks,

-Wei

On 03/30/2012 03:06 PM, Keir Fraser wrote:
On 30/03/2012 20:44, "Wei Huang"<wei.huang2@xxxxxxx>  wrote:

Thanks. I created the LWP interrupt patch based on your solution. It is
attached to this email. The patch was tested using Hans Rosenfeld's LWP
tree (http://www.amd64.org/gitweb/?p=linux/lwp.git;a=summary).
Thanks, I've simplified the patch a bit and applied it as
xen-unstable:25115. Please check it out.

  K.

=================
AMD_LWP: add interrupt support for AMD LWP

This patch adds interrupt support for AMD lightweight profiling (LWP). It
registers an interrupt handler using alloc_direct_apic_vector(). When
notified, SVM re-injects a virtual interrupt into the guest VM using the
guest's virtual local APIC.

Signed-off-by: Wei Huang<wei.huang2@xxxxxxx>
=================
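
For reference, the applied change works roughly along these lines. This is a
minimal C sketch, not the literal xen-unstable:25115 patch; the guest_lwp_cfg
field, the hvm_svm layout, and the assumption that LWP_CFG bits 47:40 hold the
interrupt vector are mine, not confirmed by this thread:

    static uint8_t __read_mostly lwp_intr_vector;

    /* Direct APIC vector handler: acknowledge the host interrupt and
     * re-inject it into the current vCPU via its virtual local APIC. */
    static void svm_lwp_interrupt(struct cpu_user_regs *regs)
    {
        struct vcpu *curr = current;

        ack_APIC_irq();
        vlapic_set_irq(vcpu_vlapic(curr),
                       /* vector the guest wrote to LWP_CFG (assumed bits 47:40) */
                       (curr->arch.hvm_svm.guest_lwp_cfg >> 40) & 0xff,
                       0);
    }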

-Wei

On 03/30/2012 03:21 AM, Keir Fraser wrote:
On 29/03/2012 23:54, "Wei Huang"<wei.huang2@xxxxxxx>   wrote:

How about the approach in the attached patches? An interrupt handler is
registered at vector 0xf6 as a delegate for the guest VM. The guest VCPU can
register an interrupt handler to receive the notification. I will prepare
formal patches after receiving your comments.
Ugh, no, these ad-hoc dynamically-registered subhandlers are getting beyond
the pale. Therefore, I've just spent half an hour routing all our interrupts,
even the direct APIC ones, through do_IRQ(), and now allow everything to be
dynamically allocated.

Please pull up to xen-unstable tip (>= 25113) and use
alloc_direct_apic_vector(). You can find a few callers of it that I added
myself, to give you an idea of how to use it. In essence, you can call it from
your SVM-specific code to allocate a vector for your IRQ handler. And you
can do it immediately before you poke the vector into your special MSR. You
can call alloc_direct_apic_vector() unconditionally -- it will deal with
ensuring the allocation happens only once.
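
A minimal sketch of that usage pattern, reusing the handler and vector variable
from the sketch earlier in this message. The function name svm_update_lwp_cfg,
the guest_lwp_cfg field, and the 47:40 vector field of LWP_CFG are assumptions;
the applied patch may differ in detail:

    /* Called from the LWP_CFG MSR write intercept (assumed hook point). */
    static void svm_update_lwp_cfg(struct vcpu *v, uint64_t msr_content)
    {
        /* Safe to call on every write; the allocation only happens once. */
        alloc_direct_apic_vector(&lwp_intr_vector, svm_lwp_interrupt);

        /* Remember the vector the guest asked for, then substitute the
         * host vector before writing the real MSR. */
        v->arch.hvm_svm.guest_lwp_cfg = msr_content;
        msr_content &= ~(0xffULL << 40);              /* assumed vector field */
        msr_content |= (uint64_t)lwp_intr_vector << 40;
        wrmsrl(MSR_AMD64_LWP_CFG, msr_content);
    }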

   -- Keir

Thanks,
-Wei


On 03/25/2012 07:08 PM, Zhang, Xiantao wrote:
Please make sure the per-CPU vector is considered in your case. A CPU
built-in event always happens on all CPUs, but IRQ-based events may only
happen on specific CPUs, as determined by the APIC's mode.
Xiantao

-----Original Message-----
From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
bounces@xxxxxxxxxxxxx] On Behalf Of Keir Fraser
Sent: Saturday, March 24, 2012 6:50 AM
To: wei.huang2@xxxxxxx; xen-devel@xxxxxxxxxxxxx; Jan Beulich
Subject: Re: [Xen-devel] LWP Interrupt Handler

On 23/03/2012 22:03, "Wei Huang"<wei.huang2@xxxxxxx>    wrote:

I am adding interrupt support for LWP, whose spec is available at
http://support.amd.com/us/Processor_TechDocs/43724.pdf. Basically, the OS
can specify an interrupt vector in the LWP_CFG MSR; the interrupt is
triggered when the event buffer overflows. For HVM guests, I want to
re-inject this interrupt back into the guest VM. Here is one idea,
similar to the virtualized PMU: first register a special interrupt
handler (say on vector 0xf6) using set_intr_gate(). When triggered,
this handler injects an IRQ (with the vector copied from LWP_CFG) into
the guest VM via the virtual local APIC. This worked in my test.
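
A rough sketch of that statically-vectored idea, for comparison with the
dynamic approach above. LWP_APIC_VECTOR, lwp_interrupt and lwp_intr_entry are
assumed names, and the assembly entry stub that saves state and calls the C
handler is elided here:

    #define LWP_APIC_VECTOR 0xf6

    void lwp_interrupt(struct cpu_user_regs *regs); /* C body: inject via vlapic */
    void lwp_intr_entry(void);                      /* asm entry stub wrapping it */

    static void __init lwp_vector_init(void)
    {
        set_intr_gate(LWP_APIC_VECTOR, &lwp_intr_entry);
    }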

But adding an interrupt handler seems to be overkill. Is there any
better way to create a dummy interrupt receiver on behalf of guest
VMs? I also looked into the IRQ and MSI solutions inside Xen, but most
of them assume that interrupts come from a physical device (which is
not the case for LWP, where the interrupt is initiated by the CPU
itself), so they don't fit very well.
I think just allocating a vector is fine. If we get too many, we could move
to dynamic allocation of them.

    -- Keir

Thanks,
-Wei


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel





