
Re: [Xen-devel] PVH phase I complete....

On Mon, Jun 02, 2014 at 04:47:19PM -0700, Mukesh Rathor wrote:
> Yay.... 

Wohooo! Congratulations!
>     author    Mukesh Rathor <mukesh.rathor@xxxxxxxxxx>        
>     Mon, 2 Jun 2014 08:32:22 +0000 (10:32 +0200)
>     committer Jan Beulich <jbeulich@xxxxxxxx> 
>     Mon, 2 Jun 2014 08:32:22 +0000 (10:32 +0200)
>     commit    e6390ad57bdf11881e6375f6227b5a594b4bcc3f
> and with that many thanks to all, especially Jan Beulich for all the
> reviewing. Also thanks to Tim Deegan and those reviewing the linux side.
> It's been fun, often arduous, developing the feature that found
> me hacking the linux scheduler one day, and xen pv mmu the other. In the
> end it feels worth it - I personally know of one banking customer who has
> been waiting anxiously for pvh for about 3 years now.
> But we are not completely done, here is my list of pending items in random
> order:
>   - add mem removed by holes to end of memmap and send dom0 custom e820
>           : Roger on xen/BSD side, David V on linux side

This would be the one reported by Sander?
>   - AMD port: i'm looking at it (grrr those invalid guest states.....)
>   - MSI-x support: last I recall, it needed some work on the linux side. 

It is already in.
>   - tsc emulation support
>   - migration 
>   - passthru
>   - 32bit guest kernel support
>   - PIRQ EOI: I believe the xen work on this got done, and it seems to be
>     working, so I think the comment in linux is outdated in xen_init_IRQ:
>          /* TODO: No PVH support for PIRQ EOI */
>          if (rc != 0) {
>              free_page((unsigned long) pirq_eoi_map);
>              pirq_eoi_map = NULL;
>          ....

>     Konrad, David: is there anything more to be done here you can think of?

Tim and Roger: ditto.

I've some comments below.
>   - control domain : p2m must be cleaned of any foreign types (refcnt 
> released)
>   - Shadow: at present HAP is required
>   - Roger, you had a small patch for guest shutdown crashing the guest
>     occasionally. It had something to do with a bit in one of the CRs iirc.
>   - performance: last i ran lmbench, some of the numbers for pvh were not
>                  the best of pv and hvm.

Roger here has been thinking about this quite a lot and we also had a
chat about this in the Xen 4.5 roadmap discussion. Roger, could you
mention which ones you would like to take a look at?

The priorities laid out to broaden the scope of this were:

 - AMD PVH support. This will allow more users to use it. It is
   independent of Tim's merging of the PVH and HVM code, as I surmise
   the AMD code will be a bunch of 'if (xen_pvh_domain()) { configure
   something less than the HVM path }'.

 - Bug fixing (should be probably at the same priority as the above
   one). Linux, FreeBSD, and Xen. Sander (thank you!) had reported a
   bug which I think the patches that David posted should have fixed.

 - Control domain : p2m must be cleaned of any foreign types (refcnt released)

 - FreeBSD PVH dom0 support. This also allows more users to use it. It
   is independent of the Xen code, but as important as the Linux code.
   I think Roger is the only one equipped to do this, but other folks
   can at least review the hypercall usage, etc.

 - PVH migration. This will be dependent on the Migration v2 code that
   Andrew and David are working on. That is due to the fact that PVH
   requires less stuff in the datastream (no P2M). There is a tangential
   part: the Linux code in the suspend path needs tweaking.

 - PVH shadow support. This would be dependent on Tim's rework, so it
   would have to wait. This would allow us to run PVH also on older
   hardware (like the Intel box Boris had run it on - that did not have
   EPT).

 - The rest of the items you mentioned. Probably Xen 4.6 material as
   well.

 - PCI passthrough for PVH. Xen 4.6 material.

> thanks,
> Mukesh

Xen-devel mailing list
