Re: [Xen-devel] RFC: making the PVH 64bit ABI as stable
On Tue, 2015-06-02 at 13:12 -0400, Boris Ostrovsky wrote:
> On 06/02/2015 12:51 PM, Stefano Stabellini wrote:
> > On Tue, 2 Jun 2015, Jan Beulich wrote:
> >>>>> On 02.06.15 at 17:11, <roger.pau@xxxxxxxxxx> wrote:
> >>> Hello,
> >>>
> >>> The document describing the PVH interface was committed 9 months ago
> >>> [1], and since then there hasn't been any change regarding the
> >>> interface. PVH is still missing features in order to have feature parity
> >>> with pure PV, mainly:
> >>>
> >>> - DomU migration support.
> >>> - PCI passthrough support.
> >>> - 32bit support.
> >>> - AMD support.
> >>>
> >>> AFAICT, however, none of these features are going to change the current
> >>> ABI.
> >>
> >> This is your guess; I don't think there's any guarantee.
> >
> > Let's make it a guarantee.
> >
> >
> >> All the more so as that talk was about making PVH uniformly enter the
> >> kernel in 32-bit mode.
> >
> > What talk? IRL talks are irrelevant in this context. If it is not on the
> > list, it doesn't exist.
> >
> >
> >>> PCI passthrough might expand it by adding new hypercalls, but I
> >>> don't think this should prevent us from marking the current ABI as
> >>> stable. ARM for example doesn't have PCI passthrough or migration
> >>> support yet, but the ABI has been marked as stable.
> >>>
> >>> To that end, I would like to request the 64bit PVH ABI to be marked as
> >>> stable for DomUs. This is needed so external projects (like PVH support
> >>> for grub2) can progress.
> >>
> >> Understandable, but no, not before all the fixmes in the tree have been
> >> dealt with.
> >
> > What is your timeline for that? In fact does anybody have any timelines
> > for it?
> >
> > We need to have a clear idea of what exactly needs to happen. We also
> > need to have confidence that it is going to happen in a reasonable time
> > frame. At the moment we have various mumblings about things, but we
> > don't have a clear breakdown of the outstanding work and names
> > associated with each work item.
> >
> > Is anybody going to volunteer to write that todo list?
> >
> > Are we going to be able to find enough volunteers with the right skills
> > to be confident that PVH is going to be out of "experimental" within a
> > reasonable time frame? It is clear that some of the clean-ups require a
> > hypervisor expert.
> >
> > If not, I suggest we rethink our priorities and we consider dropping PVH
> > entirely. I don't think it is fair to expect Roger or anybody else to keep
> > their efforts up on PVH, when actually we don't know if we'll be able to
> > land it.
>
>
>
> Roger, Tim, Elena, Konrad and I had a conversation a few months ago and
> at that time we came up with a (somewhat informal) list of what we knew
> was broken:
>
> - 32-bit cannot boot.
> - Does not work under AMD.
> - Migration
> - PCI passthrough
> - Memory ballooning
> - Multiple VBDs, NICs, etc.
> - CPUID filtering. There is no filtering done at all, which means that
> certain CPUID flags are exposed to the guest (see the sketch below).
> - The x2APIC will cause a crash if the NMI handler is invoked.
> - The APERF will cause inferior scheduling decisions.
> - Working with shadow paging code (which is what we use when migrating HVM
> guests). The nice side-benefit is that we can then run PVH on
> machines without VMX or SVM support.
> - TSC modes are broken.
This is missing at least:
- Remove all /* pvh fixme */ markers from the code.
What I'm hearing from the x86 maintainers is that this is actually a
high priority and not a "nice to have" cleanup.
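
To make the CPUID item concrete (this is the sketch referred to above), here
is a minimal, hypothetical example, not code from the Xen tree, of what
filtering would involve: the hypervisor intercepts the guest's CPUID leaves
and clears feature bits it does not want to expose. The function name is made
up, and the choice of which bits to hide (x2APIC, APERF/MPERF) is purely
illustrative, picked because those items appear in the list above.

#include <stdint.h>

/* Architectural feature bits (Intel SDM): leaf 1 ECX[21] = x2APIC,
 * leaf 6 ECX[0] = APERF/MPERF hardware coordination feedback. */
#define CPUID1_ECX_X2APIC      (1u << 21)
#define CPUID6_ECX_APERFMPERF  (1u << 0)

/* Hypothetical filter applied to CPUID results before they are
 * returned to a PVH guest. */
static void pvh_filter_cpuid(uint32_t leaf, uint32_t *eax, uint32_t *ebx,
                             uint32_t *ecx, uint32_t *edx)
{
    switch (leaf) {
    case 0x1:
        *ecx &= ~CPUID1_ECX_X2APIC;      /* hide x2APIC */
        break;
    case 0x6:
        *ecx &= ~CPUID6_ECX_APERFMPERF;  /* hide APERF/MPERF */
        break;
    default:
        break;
    }
}

Today no such masking is done for PVH at all, so whatever the host CPU
advertises is visible to the guest.
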
> I picked 32-bit support; Elena is looking into AMD support.
With the TODOs plus these two being the things which the x86 maintainers have
highlighted in this thread as most critical for marking the ABI as stable (or
at least moving from experimental to tech preview), let me ask explicitly:
What are the current time frames on these two items?
Ian.
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel