
[Xen-devel] HVM op changes (was: Re: [xen-unstable test] 25689: regressions - FAIL)



>>> On 01.04.14 at 16:41, <Ian.Campbell@xxxxxxxxxx> wrote:
> On Tue, 2014-04-01 at 14:22 +0100, Jan Beulich wrote:
>> >>> On 01.04.14 at 10:38, <JBeulich@xxxxxxxx> wrote:
>> >>>> On 31.03.14 at 22:45, <boris.ostrovsky@xxxxxxxxxx> wrote:
>> >> On 03/31/2014 04:25 PM, xen.org wrote:
>> >>> flight 25689 xen-unstable real [real]
>> >>> http://www.chiark.greenend.org.uk/~xensrcts/logs/25689/ 
>> >>>
>> >>> Regressions :-(
>> >>>
>> >>> Tests which did not succeed and are blocking,
>> >>> including tests which could not be run:
>> >>>   test-amd64-i386-qemut-rhel6hvm-intel  7 redhat-install    fail REGR. 
>> >>> vs. 
> 25685
>> >>>   test-amd64-i386-qemut-rhel6hvm-amd  7 redhat-install      fail REGR. 
>> >>> vs. 
> 25685
>> >>>   test-amd64-i386-xl-winxpsp3-vcpus1 12 guest-localmigrate.2 fail REGR. 
>> >>> vs. 
> 25685
>> >>>   test-amd64-i386-xl-qemut-winxpsp3  7 windows-install      fail REGR. 
>> >>> vs. 
> 25685
>> >>>   test-amd64-i386-xl-qemut-winxpsp3-vcpus1 7 windows-install fail REGR. 
>> >>> vs. 
> 25685
>> >>>   test-amd64-amd64-xl-qemut-win7-amd64  7 windows-install   fail REGR. 
>> >>> vs. 
> 25685
>> >>>   test-amd64-amd64-xl-qemut-winxpsp3  7 windows-install     fail REGR. 
>> >>> vs. 
> 25685
>> >>>   test-amd64-i386-xl-qemut-win7-amd64  7 windows-install    fail REGR. 
>> >>> vs. 
> 25685
>> >> 
>> >> I was just about to report a regression that may be what these failures 
>> >> also are.
>> >> 
>> >> Looks like 8bad6c562 (x86/HVM: fix preemption handling in do_hvm_op() ) 
>> >> broke qemu-traditional with HVM guests. Quite a few of
>> >> 
>> >> (XEN) hvm.c:2762:d25v0 guest attempted write to read-only memory page. 
>> >> gfn=0x102, mfn=0x239086
>> > 
>> > I can see the change to be broken on 32-bit control domains (i.e.
>> > Dom0 here), but I can't explain the breakage on 64-bit ones yet, nor
>> > why only qemu-trad would be affected.
>> 
>> Found this one: In both HVMOP_modified_memory and
>> HVMOP_set_mem_type the "pfn" variable got incremented
>> twice. Looks like qemu-trad makes (more) use of the latter.
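To illustrate the bug pattern described above, here is a minimal sketch (names and structure are illustrative, not Xen's actual do_hvm_op() code): when both the loop body and the continuation bookkeeping advance the page counter, every other page is skipped and the loop overruns the requested range.

```c
#include <stdint.h>

/* Buggy variant: "pfn" is incremented twice per iteration, once for
 * the work itself and once again by the continuation bookkeeping. */
static uint64_t process_range_buggy(uint64_t first_pfn, uint64_t nr)
{
    uint64_t pfn = first_pfn, done = 0;

    while (done < nr) {
        /* ... operate on page "pfn" ... */
        ++pfn;        /* advance for the work done ... */
        ++done;
        ++pfn;        /* ... and again for the continuation: the bug */
    }
    return pfn - first_pfn;   /* pages stepped over: 2 * nr, not nr */
}

/* Fixed variant: a single increment per page processed. */
static uint64_t process_range_fixed(uint64_t first_pfn, uint64_t nr)
{
    uint64_t pfn = first_pfn, done = 0;

    while (done < nr) {
        /* ... operate on page "pfn" ... */
        ++pfn;
        ++done;
    }
    return pfn - first_pfn;   /* exactly nr pages */
}
```

With the double increment, odd-numbered pages in the range are never touched, which matches qemu-trad's set_mem_type requests going wrong while other callers mostly escape notice.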
>> 
>> As to dealing with 32-bit callers, I see three possible routes:
>> - ignore the interface inconsistency (i.e. drop the patch altogether;
>>   obviously not my preference)
>> - limit the permitted range for them to e.g. 27 bits (we need at least
>>   5 bits for representing the operation itself, unless introducing
>>   trickery; not too nice on its own considering future new sub-ops)
>> - introduce a new __HYPERVISOR_hvm_op (marking the current one
>>   legacy) with one more (fake) argument: The caller doesn't need to
>>   pass any specific value, but we're allowed to alter the register
>>   contents, i.e. can store the continuation information there. This
>>   would then go along with the range restriction above (but allow
>>   those guests to still issue full-range requests).
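The second and third routes both rest on the same packing trick: keep the sub-op in the low bits of the hvm_op argument and store the continuation's restart index in the remaining bits. A rough sketch of that encoding (macro and function names here are hypothetical, chosen for illustration):

```c
#include <stdint.h>

/* With 5 bits reserved for the sub-op, a 32-bit argument leaves
 * 32 - 5 = 27 bits for the restart index -- hence the proposed
 * 2^27-page batch limit for 32-bit callers. */
#define HVMOP_op_bits   5
#define HVMOP_op_mask   ((1ul << HVMOP_op_bits) - 1)

/* Fold the restart index into the op argument for a continuation. */
static unsigned long encode_continuation(unsigned long op,
                                         unsigned long start)
{
    return op | (start << HVMOP_op_bits);
}

static unsigned long decode_op(unsigned long arg)
{
    return arg & HVMOP_op_mask;
}

static unsigned long decode_start(unsigned long arg)
{
    return arg >> HVMOP_op_bits;
}
```

The third route differs only in where the encoded value lives: a new __HYPERVISOR_hvm_op would carry it in an extra (initially ignored) register argument, so 64-bit callers of the new entry point keep the full range while legacy 32-bit callers fall back to the restricted one.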
>> 
>> Opinions or other ideas anyone?
> 
> Did you mean to CC Keir and/or Tim (who may not read this thread based
> on the title)?

Indeed I should have, and have done so now.

> 2^27 pages seems like a plenty large batch-size limit to me, although
> maybe 5 bits for the op is a bit tight given we are at 16 right now?

Right, that's what I was thinking too.

> Could you do continuations at some larger granularity (e.g. 2MB) and
> save some bits that way?

Right - I had this idea in the morning, but forgot about it again by
the time I got to write that mail.
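Ian's granularity idea can be sketched as follows (again illustrative names, and the 2MB figure is his example): if continuations may only restart on 2MB boundaries, i.e. multiples of 512 4K pages, the stored index shrinks by log2(512) = 9 bits, leaving room for a wider op field.

```c
#include <stdint.h>

#define OP_BITS        6       /* room for more than 32 sub-ops */
#define PAGES_PER_2M   512ul   /* 2MB / 4KB; saves log2(512) = 9 bits */

/* Caller preempts only at 2MB-aligned page indices, so start_pfn
 * is always a multiple of PAGES_PER_2M and divides out exactly. */
static unsigned long encode_coarse(unsigned long op,
                                   unsigned long start_pfn)
{
    return op | ((start_pfn / PAGES_PER_2M) << OP_BITS);
}

static unsigned long decode_coarse_start(unsigned long arg)
{
    return (arg >> OP_BITS) * PAGES_PER_2M;
}
```

The trade-off is that preemption checks can only fire every 512 pages, so worst-case latency before a continuation grows accordingly.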

> With your third idea it might actually be quite nice and natural to
> split the hvm_op interface into parts anyway, for guest vs. toolstack
> (vs. device model?) accessible functionality.
> 
> The guest accessible stuff could stay on the current op with the other
> uses being deprecated.

That's a pretty neat idea, I'll see to carrying that out unless others
object violently.

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

