
Re: [Xen-devel] [PATCH 00/22] Vixen: A PV-in-HVM shim



On Jan 8, 2018 8:28 AM, "George Dunlap" <dunlapg@xxxxxxxxx> wrote:
On Mon, Jan 8, 2018 at 4:02 PM, Anthony Liguori <anthony@xxxxxxxxxxxxx> wrote:
>>> I do want to make the shim be able to run in both pvh and hvm mode
>>> (which doesn't seem to be too hard in practice).
>>
>> AFAIK the pv-shim code will already work in HVM mode. It's just that
>> booting the pv-shim in HVM mode requires that you install the shim
>> inside of the guest and then boot it using grub or a similar loader
>> that can do multiboot.
>
> I'm happy to work on either approach.  I just want to get something
> merged to have an upstream solution to this issue.  I think this
> particular CVE for Xen PV is the worst of this batch of issues, so
> I'm super eager to get a solution straightened out.  I'd really like
> to hear from others on what the right approach should be, and I'll
> work on whatever the consensus is.
>
> I think PVH is a good long-term solution, but a poor short-term one.
> PVH isn't widely deployed, so it's asking people to upgrade their
> infrastructure to a very new version of Xen.  It also requires tools
> changes, which means that even if you are on a newer version of Xen,
> you still have to upgrade.  The patch series is also pretty big,
> which means I suspect people will need to wait until 4.11 at best.
>
> OTOH, the HVM version of the series requires no tools changes and
> works on Xen versions going back to 3.4 (at least).  What this means,
> practically speaking, is that if it were merged, we could tell people
> that they can solve this problem by building the HVM shim and
> modifying their launch config to boot from an ISO or something
> similar.
>
> This gives people an immediate solution that does not require major
> changes to their underlying infrastructure.

Solving the "how do we boot the shim" question is the main reason that
we decided to start with PVH-only, going back to 4.8.

We didn't consider working around it by having a special boot disk
(ISO or otherwise); it's hard to know how well that will work for most
people.  You don't think that "having to create and boot from a custom
ISO" would count as "major changes to underlying infrastructure"?

If you are a user of xl, then it's a one-time config file conversion
plus ISO generation.  If you use pvgrub, then the same ISO can be
reused widely.
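
To make that concrete, the before/after might look roughly like this
for an xl guest (the ISO path and names here are made up, and the
exact keys depend on your xl version):

    # Before: PV guest config, kernel booted directly by the host
    name    = "guest"
    kernel  = "/boot/vmlinuz-guest"
    ramdisk = "/boot/initrd-guest"
    memory  = 2048
    disk    = [ "phy:/dev/vg/guest,xvda,w" ]

    # After: HVM config that boots the shim from an ISO (hypothetical
    # path); the loader on the ISO then starts the PV kernel inside
    # the shim, using the existing guest disk.
    name    = "guest"
    builder = "hvm"
    memory  = 2048
    disk    = [ "phy:/dev/vg/guest,xvda,w",
                "file:/var/lib/xen/shim-boot.iso,hdc:cdrom,r" ]
    boot    = "d"    # try the CD-ROM first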

You can imagine a script to automate the conversion too.
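
Purely as a sketch (untested; it assumes xl-style configs and a
prebuilt shim ISO at a made-up path), such a script could be as simple
as:

    #!/bin/sh
    # pv2shim.sh -- hypothetical sketch: turn a PV xl config into an
    # HVM-shim config.  Usage: pv2shim.sh guest.cfg > guest-hvm.cfg
    set -e
    cfg="$1"
    iso="/var/lib/xen/shim-boot.iso"   # prebuilt shim ISO (hypothetical)

    # Carry everything over except the PV-only directives.
    grep -Ev '^(kernel|ramdisk|extra|bootloader)[[:space:]]*=' "$cfg"

    # Append the HVM bits.  A real script would merge the cdrom entry
    # into any existing disk = [...] line rather than adding a second.
    cat <<EOF
    builder = "hvm"
    disk    = [ "file:$iso,hdc:cdrom,r" ]
    boot    = "d"
    EOF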

It may be harder if you are using management tools, but it's still
easier than a major version upgrade.

> The series now is also reasonably contained and small enough that,
> IMHO, it could go into the stable tree.  That means that once merged,
> we could cut a stable release, giving people an official release that
> could be used for this purpose.
>
> If it were entirely my call, I would work on merging the HVM shim
> first, get a 4.10 stable release cut with it, and then focus on
> getting the PVH shim in place for the 4.11 release.  I think this is
> the right balance of addressing the short-term needs while also
> having the best long-term solution.

If I understand correctly, this series is missing a number of features
from the other series -- migration being the key one, but perhaps
others (vcpu hot-plug? ballooning?).

I haven't tested vcpu hotplug.  The xenstore communication works, but
we would also have to pass through the reservation calls.  Not hard at
all to do.
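
For reference, the toolstack side of vcpu hotplug is just a xenstore
write that the guest kernel watches, which is why relaying it through
the shim is straightforward; roughly (domid 5 is just an example):

    # What `xl vcpu-set` does under the hood for PV guests: flip the
    # availability node that the guest kernel has a watch on.
    xenstore-write /local/domain/5/cpu/1/availability online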

In terms of completeness, as I mentioned, all PV instances in EC2 are
using this today.  We don't make use of all Xen features, so some
others may be missing.


In either case, it sounds like "additional boot disk" should work for
older versions.

Right.

Regards,

Anthony Liguori


 -George
