
[Xen-devel] Re: vmport patch to run Fedora 14 [PATCH V14 00/17] Xen device model support



Hi,

On Tue, 26 Apr 2011, Takeshi HASEGAWA wrote:

> Anthony and upstream-qemu testers,
>
> I tried to run upstream-qemu with the v14 patch set and found that
> Fedora 14 does not boot on an HVM domain.
> It now boots with the vmport workaround patch at the bottom of this mail.
>
>
> The vmport device lets guests query some configuration of the
> virtual machine itself. It is not actually mandatory for running
> guests, so I disabled it to avoid a qemu crash.
>
> I am not sure whether this fix is the best way to do it.
> Ideally I would have taken one of the following solutions, but I
> could not manage either:
>
> 1) fix vmport so that it works on Xen.
>
> I would need to read the vCPU registers, but I could not figure out
> how to do that on Xen. ;-)
>
> 2) disable vmport device entirely.
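
For reference, the vmport mentioned above is QEMU's emulation of the VMware
"backdoor" I/O port: the guest loads a magic value and a command number into
registers and executes an IN on port 0x5658, and the device answers by
rewriting the guest's registers. The snippet below is only a rough,
hypothetical guest-side sketch of that handshake (the GETVERSION command),
not part of the patch; the constants follow the conventions used by
hw/vmport.c, and running it from user space would additionally require I/O
privilege (e.g. iopl(3)).

#include <stdint.h>
#include <stdio.h>

#define VMPORT_MAGIC          0x564D5868u   /* "VMXh" */
#define VMPORT_IOPORT         0x5658u       /* "VX"   */
#define VMPORT_CMD_GETVERSION 10u

int main(void)
{
    uint32_t eax = VMPORT_MAGIC;
    uint32_t ebx = ~VMPORT_MAGIC;            /* anything but the magic */
    uint32_t ecx = VMPORT_CMD_GETVERSION;
    uint32_t edx = VMPORT_IOPORT;

    /* One IN on the vmport traps to the emulated device; if vmport is
     * present it returns the magic in EBX and a version number in EAX. */
    __asm__ __volatile__("inl %%dx, %%eax"
                         : "+a"(eax), "+b"(ebx), "+c"(ecx), "+d"(edx));

    if (ebx == VMPORT_MAGIC) {
        printf("vmport present, version %u\n", eax);
    } else {
        printf("no vmport\n");
    }
    return 0;
}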

I think it's better to disable vmport with Xen, because we will not sync
the vcpu registers, at least in this patch series.
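
For background, the vmport ioport handler needs to read and write the
guest's general-purpose registers on every access, which presupposes a
QEMU-side vCPU state that is kept in sync; under Xen, where this series
neither runs QEMU vCPUs nor synchronizes their registers, that state is
unavailable, which is presumably why it crashes. The fragment below is a
loose sketch of the dependency, modelled on hw/vmport.c rather than copied
from it.

static uint32_t vmport_ioport_read_sketch(void *opaque, uint32_t addr)
{
    /* cpu_single_env is the (2011-era) global pointing at the CPU that
     * issued the I/O access; it may not even point at a valid CPU when
     * the access arrives via Xen's I/O request path. */
    CPUState *env = cpu_single_env;

    /* Pull the vCPU registers into QEMU; not implemented for Xen in
     * this series, so anything read below would be stale. */
    cpu_synchronize_state(env);

    if (env->regs[R_EAX] != 0x564D5868) {    /* "VMXh" magic */
        return env->regs[R_EAX];
    }

    /* ... dispatch on the low byte of ECX to the registered command
     * handler, which writes its results back into EAX/EBX/ECX/EDX ... */
    return 0;
}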

So I will just add the patch below to the patch series.


diff --git a/hw/pc.c b/hw/pc.c
index ebdf3b0..8106197 100644
--- a/hw/pc.c
+++ b/hw/pc.c
@@ -1082,7 +1082,8 @@ static void cpu_request_exit(void *opaque, int irq, int level)
 }

 void pc_basic_device_init(qemu_irq *isa_irq,
-                          ISADevice **rtc_state)
+                          ISADevice **rtc_state,
+                          bool no_vmport)
 {
     int i;
     DriveInfo *fd[MAX_FD];
@@ -1127,8 +1128,12 @@ void pc_basic_device_init(qemu_irq *isa_irq,
     a20_line = qemu_allocate_irqs(handle_a20_line_change, first_cpu, 2);
     i8042 = isa_create_simple("i8042");
     i8042_setup_a20_line(i8042, &a20_line[0]);
-    vmport_init();
-    vmmouse = isa_try_create("vmmouse");
+    if (!no_vmport) {
+        vmport_init();
+        vmmouse = isa_try_create("vmmouse");
+    } else {
+        vmmouse = NULL;
+    }
     if (vmmouse) {
         qdev_prop_set_ptr(&vmmouse->qdev, "ps2_mouse", i8042);
         qdev_init_nofail(&vmmouse->qdev);
diff --git a/hw/pc.h b/hw/pc.h
index cc7ba58..0dcbee7 100644
--- a/hw/pc.h
+++ b/hw/pc.h
@@ -137,7 +137,8 @@ void pc_memory_init(const char *kernel_filename,
 qemu_irq *pc_allocate_cpu_irq(void);
 void pc_vga_init(PCIBus *pci_bus);
 void pc_basic_device_init(qemu_irq *isa_irq,
-                          ISADevice **rtc_state);
+                          ISADevice **rtc_state,
+                          bool no_vmport);
 void pc_init_ne2k_isa(NICInfo *nd);
 void pc_cmos_init(ram_addr_t ram_size, ram_addr_t above_4g_mem_size,
                   const char *boot_device,
diff --git a/hw/pc_piix.c b/hw/pc_piix.c
index 4ff4a55..9a22a8a 100644
--- a/hw/pc_piix.c
+++ b/hw/pc_piix.c
@@ -141,7 +141,7 @@ static void pc_init1(ram_addr_t ram_size,
     pc_vga_init(pci_enabled? pci_bus: NULL);

     /* init basic PC hardware */
-    pc_basic_device_init(isa_irq, &rtc_state);
+    pc_basic_device_init(isa_irq, &rtc_state, xen_enabled());

     for(i = 0; i < nb_nics; i++) {
         NICInfo *nd = &nd_table[i];
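
One note on the prototype change: any board code other than pc_piix.c that
calls pc_basic_device_init() would also need its call site updated; a
hypothetical non-Xen caller would simply pass false for the new argument:

    pc_basic_device_init(isa_irq, &rtc_state, false);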



> I found that vmmouse expects vmport to be initialized,
> so an additional patch will be needed.
>
>
> This may be a side-effect of the change below.
>
> > > > +        /* Xen require only one Qemu VCPU */
> > > >          pc_new_cpu(cpu_model);
> > >
> > > This looks a bit fishy. What are the semantics of -smp 2 or more in Xen
> > > mode? If that is an invalid/unused configuration option, catch that and
> > > reject it instead of installing this workaround. If it has valid
> > > semantics, please elaborate on why you need to restrict the number of
> > > instantiated cpus. Just to optimize memory usage?
> >
> > I thought at first that this was needed to avoid errors, but it also
> > works when we initialise the other CPUs. However, I prefer to keep it
> > this way to save memory; and in the case where there is a thread per
> > cpu, it also avoids having many useless threads.
> >
> > Also, I use -smp i to initialise Xen's structures related to the
> > vcpus.
>
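
For context on the quoted sub-thread: under Xen the guest vCPUs run inside
Xen itself rather than in QEMU, so the series keeps a single QEMU CPU object
while -smp still sizes Xen's own vCPU structures. A hypothetical sketch of
such a guard (the actual hunk is only partially quoted above) could look
like:

    for (i = 0; i < smp_cpus; i++) {
        if (xen_enabled() && i > 0) {
            /* Xen needs only one QEMU vCPU object; the remaining vCPUs
             * are created and scheduled by Xen itself. */
            continue;
        }
        pc_new_cpu(cpu_model);
    }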

Thanks,

-- 
Anthony PERARD


 

