
[Xen-ia64-devel] [PATCH] Fix xencomm for xm mem-set command



Hi,

While testing the xm mem-set and xm mem-max commands, I found a bug 
in xencomm.  For the error messages, please refer to the attached file: 
xm-memset20061111.log.  The steps to reproduce are as follows. 
  1. Boot Xen/dom0. (dom0_mem is default.)
  2. xm mem-set Domain-0 400
  3. xm mem-max Domain-0 400
  4. xm mem-set Domain-0 512

When the balloon driver calls the memory_op hypercall, xencomm 
overwrites part of the hypercall parameter in place: the extent_start 
field of xen_memory_reservation_t is replaced with a xencomm descriptor.  
However, xencomm does not restore the field it overwrote.  Because the 
balloon driver reuses the same reservation structure, the second 
hypercall, at line 200 of balloon.c, then fails.

linux-2.6-xen-sparse/drivers/xen/balloon/balloon.c
 167 static int increase_reservation(unsigned long nr_pages)
 168 {
 169     unsigned long  pfn, i, flags;
 170     struct page   *page;
[snip]
 190     set_xen_guest_handle(reservation.extent_start, frame_list);
 191     reservation.nr_extents   = nr_pages;
 192     rc = HYPERVISOR_memory_op(
 193         XENMEM_populate_physmap, &reservation);
 194     if (rc < nr_pages) {
 195         if (rc > 0) {
 196             int ret;
 197
 198             /* We hit the Xen hard limit: reprobe. */
 199             reservation.nr_extents = rc;
 200             ret = HYPERVISOR_memory_op(XENMEM_decrease_reservation,
 201                 &reservation);
 202             BUG_ON(ret != rc);
 203         }
[snip]
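
For illustration, here is a rough sketch of what the xencomm wrapper 
currently does with that argument (names simplified from the ia64 
xencomm layer; the real code differs in detail):

    /* Sketch only: the wrapper rewrites the caller's structure
     * in place and never puts the original pointer back. */
    int xencomm_hypercall_memory_op(unsigned int cmd, void *arg)
    {
        xen_memory_reservation_t *xmr = arg;

        /* extent_start held the guest virtual address of frame_list;
         * it is replaced by a xencomm descriptor here, so the
         * caller's pointer is lost once this function returns. */
        set_xen_guest_handle(xmr->extent_start,
            xencomm_create_inline(xen_guest_handle(xmr->extent_start)));

        return xencomm_arch_hypercall_memory_op(cmd,
            xencomm_create_inline(arg));
    }

The second memory_op call at line 200 therefore hands Xen the stale 
descriptor instead of frame_list, and the hypercall fails.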


This patch saves and restores the hypercall parameter within xencomm.
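
The shape of the fix is roughly as follows (a hedged sketch using the 
same simplified names; the attached xcom_memory_op.patch is the 
authoritative version):

    int xencomm_hypercall_memory_op(unsigned int cmd, void *arg)
    {
        XEN_GUEST_HANDLE(xen_pfn_t) extent_start_va;
        xen_memory_reservation_t *xmr = arg;
        int rc;

        /* Save the caller's original guest handle... */
        xen_guest_handle(extent_start_va) =
            xen_guest_handle(xmr->extent_start);

        /* ...substitute the xencomm descriptor for the hypercall... */
        set_xen_guest_handle(xmr->extent_start,
            xencomm_create_inline(xen_guest_handle(xmr->extent_start)));

        rc = xencomm_arch_hypercall_memory_op(cmd,
            xencomm_create_inline(arg));

        /* ...and restore it, so a caller that reuses the same
         * reservation (balloon.c line 200) sees its own pointer. */
        set_xen_guest_handle(xmr->extent_start,
            xen_guest_handle(extent_start_va));

        return rc;
    }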

Signed-off-by: Masaki Kanno <kanno.masaki@xxxxxxxxxxxxxx>

Best regards,
 Kan

Attachment: xm-memset20061111.log
Description: Binary data

Attachment: xcom_memory_op.patch
Description: Binary data

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel

 

