Re: [Xen-devel] [PATCH 7/7] xen: Enable event channel of PV extension of HVM
On 03/08/2010 05:53 PM, Sheng Yang wrote:
> On Tuesday 09 March 2010 01:10:56 Stefano Stabellini wrote:
>> I think that mapping interrupts into VIRQs is a bad idea: you should
>> map interrupts into pirqs instead; the code already exists on the
>> kernel side, so we don't need to make any (ugly) change there.
> The code existed on the pv_ops dom0 side, but not in upstream Linux.
> The latter is our target. We want this work to be accepted by
> upstream Linux soon.

I don't think it's a sound idea to rush this stuff upstream, for several reasons:
1. It is currently based on 2.6.33, which means it will not get into
   any distros in the near future. If it were based on 2.6.32, it
   would have a much greater chance of being accepted into distros.
   (They may decide to backport the work, but it would be helpful if
   that were done in advance.)
2. It has significant overlaps with the current xen.git development
which is also targeted for upstream. There's no point in creating
an unnecessary duplicate mechanism when the infrastructure will be
in place anyway.
3. The code has had very little exposure within the Xen community,
especially since the Xen-side patches have not been accepted into
xen-unstable yet, let alone a released hypervisor. On the kernel
side, it would help if the patches were based on 2.6.32 (or even
.31) so they can be merged into the xen.git kernels people are
actually using.
4. The kernel Xen code is already complicated with a number of
different code paths according to architecture, options, features
etc. I would like to see some strong concrete evidence that these
changes make a worthwhile improvement on a real workload before
adding yet another thing which needs to be tested and maintained
indefinitely.
J
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel