Re: [Xen-devel] [PATCH v2] xen: x86_32: do not enable interrupts when returning from exception in interrupt context


  • To: Igor Mammedov <imammedo@xxxxxxxxxx>, <xen-devel@xxxxxxxxxxxxxxxxxxx>
  • From: Keir Fraser <keir.xen@xxxxxxxxx>
  • Date: Fri, 02 Sep 2011 11:00:48 +0100
  • Cc:
  • Delivery-date: Fri, 02 Sep 2011 03:07:39 -0700
  • List-id: Xen developer discussion <xen-devel.lists.xensource.com>
  • Thread-index: AcxpVzBI3XgzhCy+0EW9yfvxBvTCqQ==
  • Thread-topic: [Xen-devel] [PATCH v2] xen: x86_32: do not enable interrupts when returning from exception in interrupt context

On 02/09/2011 10:19, "Igor Mammedov" <imammedo@xxxxxxxxxx> wrote:

> BTW, while debugging this issue, I tried printing saved_upcall_mask
> inside Xen when handling a page fault from the guest, and its value is
> always 0. Looking at the asm code, for example in
> xen/arch/x86/x86_32/entry.S:382:
> 
>          movzwl TRAPBOUNCE_cs(%edx),%eax 
>                 ^^^^^ the upper 16 bits are 0, set in
>                       propagate_page_fault by "tb->cs = ti->cs;"
> 
>          /* Null selectors (0-3) are not allowed. */
>          testl $~3,%eax
>          jz   domain_crash_synchronous
>          movl %eax,UREGS_cs+4(%esp)
>                    ^^^^^^^^^^^^^^^^ and here is the only place we set the
>                    saved_upcall_mask field in the current cpu_user_regs
> 
> It looks like "saved_upcall_mask" in cpu_user_regs is no longer used
> for what its name implies, and its presence in the struct is just
> confusing and misleading.
> 
> So why not delete it and extend _pad1 to 4 bytes?

It's part of the guest ABI.
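
For reference, the field in question lives in the public header
xen/include/public/arch-x86/xen-x86_32.h. The excerpt below is a sketch
from memory, so treat the exact padding names as illustrative:

    #include <stdint.h>

    /* Excerpt of struct cpu_user_regs as exposed to 32-bit guests.
     * saved_upcall_mask shares the 32-bit "cs" slot, which is why the
     * single movl to UREGS_cs quoted above also stores it (as zero,
     * since movzwl cleared the upper 16 bits of %eax). */
    struct cpu_user_regs {
        /* ... general-purpose registers (ebx, ecx, edx, ...) ... */
        uint32_t eip;
        uint16_t cs;
        uint8_t  saved_upcall_mask;  /* same dword as cs */
        uint8_t  _pad0;
        uint32_t eflags;             /* eflags.IF == !saved_upcall_mask */
        uint32_t esp;
        uint16_t ss, _pad1;
        /* ... data segment selectors (es, ds, fs, gs) ... */
    };

Deleting the field or widening a _pad member would change the offsets of
everything after it, breaking guests built against the current layout.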

 -- Keir

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel