
Re: [Xen-devel] [PATCH RFC] ioreq-server: handle the lack of a default emulator properly


  • To: Jan Beulich <JBeulich@xxxxxxxx>
  • From: Paul Durrant <Paul.Durrant@xxxxxxxxxx>
  • Date: Mon, 29 Sep 2014 09:33:17 +0000
  • Accept-language: en-GB, en-US
  • Cc: "Keir \(Xen.org\)" <keir@xxxxxxx>, "xen-devel@xxxxxxxxxxxxx" <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Mon, 29 Sep 2014 09:33:26 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHP2Z7qNW8ETXKza06GH2pudtaU5ZwTd5iAgARbVxD//+eqAIAAIoOg
  • Thread-topic: [PATCH RFC] ioreq-server: handle the lack of a default emulator properly

> -----Original Message-----
> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> Sent: 29 September 2014 10:28
> To: Paul Durrant
> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org)
> Subject: RE: [PATCH RFC] ioreq-server: handle the lack of a default emulator
> properly
> 
> >>> On 29.09.14 at 10:59, <Paul.Durrant@xxxxxxxxxx> wrote:
> >>  -----Original Message-----
> >> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
> >> Sent: 26 September 2014 17:23
> >> To: Paul Durrant
> >> Cc: xen-devel@xxxxxxxxxxxxx; Keir (Xen.org)
> >> Subject: Re: [PATCH RFC] ioreq-server: handle the lack of a default
> >> emulator properly
> >>
> >> >>> On 26.09.14 at 17:31, <paul.durrant@xxxxxxxxxx> wrote:
> >> > This fix should probably go in 4.5 but this patch is RFC for the moment
> >> > because of uncertainty about what to do for unemulated MMIO
> >> > accesses.
> >> > Originally I forced a domain crash in this case, but hvmloader actually
> >> > hit the crash because the code that deals with building the ACPI TPM
> >> > info reads from 0xFED40F00 looking for a signature value and there is
> >> > nothing backing this access in my configuration. So, the question is
> >> > whether to whitelist this access or to make building that table
> >> > optional, so that it is only done when an emulated TPM is definitely
> >> > present.
> >>
> >> I think in the sense of acting like real hardware, having a definite
> >> point to complete such I/O is quite desirable, as opposed to killing
> >> the guest.
> >
> > Ok. The doubt was around the fact that on bare metal an unhandled MMIO
> > would normally cause an MCE (by way of a bus error).
> 
> Hmm, no, I don't think so, particularly on x86: Together with the
> chipset, all transactions should have an instance completing them.
> And I don't think MCEs would be the way to signal failure thereof.
> All this may be different on other architectures, but x86's heritage
> is that you can access any "unused" memory location without
> causing the machine to lock up (other than e.g. on ia64).
> 

Ok, I'll drop the warning printk then.
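
A minimal self-contained sketch of the behaviour being agreed here
(illustrative only, not the Xen code; the helper name is made up): reads
of unbacked MMIO complete as all-ones and writes are silently dropped,
matching bare metal rather than crashing the guest.

#include <stdint.h>
#include <string.h>

/* Illustrative only: complete an MMIO access no ioreq server claimed.
 * On x86, a read of an unbacked address returns all-ones and a write is
 * silently discarded; the machine does not raise an MCE or lock up. */
static void complete_unclaimed_mmio(int is_read, void *data, size_t size)
{
    if ( is_read )
        memset(data, 0xff, size);  /* the read completes as ~0 */
    /* a write is simply dropped */
}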

> >> > --- a/xen/arch/x86/hvm/hvm.c
> >> > +++ b/xen/arch/x86/hvm/hvm.c
> >> > @@ -2386,8 +2386,7 @@ static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
> >> >      if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
> >> >          return NULL;
> >> >
> >> > -    if ( list_is_singular(&d->arch.hvm_domain.ioreq_server.list) ||
> >> > -         (p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO) )
> >> > +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
> >>
> >> I'm having some trouble understanding the reason for this change.
> >>
> >
> > The implication that list_is_singular -> 'singular ioreq server is
> > default' is wrong; it was an 'optimization' too far. Now that I have
> > QEMU ported, in my tests it is the only emulator but it is not a
> > default emulator (because it registers explicit ranges).
> 
> Ah, that makes sense of course. In which case I'm fine with the
> patch as is.
> 

Cool. I'll resubmit non-RFC without the printk.
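
As a toy illustration of the flawed shortcut (self-contained, not
hypervisor code; the struct and helper here are invented for the
example): a one-element list of ioreq servers does not imply that the
lone server is the default one.

#include <stdbool.h>
#include <stdio.h>

/* Toy model: a singly-linked list of ioreq servers. */
struct ioreq_server {
    bool is_default;            /* catches otherwise unclaimed accesses */
    struct ioreq_server *next;
};

static bool list_is_singular(const struct ioreq_server *head)
{
    return head != NULL && head->next == NULL;
}

int main(void)
{
    /* QEMU registers explicit ranges, so it is not the default server. */
    struct ioreq_server qemu = { .is_default = false, .next = NULL };

    /* Singular, yet not default: the implication the patch removes. */
    printf("singular=%d default=%d\n",
           list_is_singular(&qemu), qemu.is_default);
    return 0;
}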

Thanks,

  Paul


> Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
