
[Xen-devel] [PATCH v1] Remove XenPTReg->data and use dev.config for guest configuration values.



Since RFC [https://lists.gnu.org/archive/html/qemu-devel/2015-06/msg07350.html]
 - Added Acks
 - Fixed bugs

This patchset depends on the "Cleanups + various fixes due to libxl ABI +
more logging on errors." series
(http://lists.xen.org/archives/html/xen-devel/2015-07/msg00431.html)
just posted.


During the discussion of
http://lists.xen.org/archives/html/xen-devel/2015-06/msg01504.html
"[PATCH QEMU-XEN] xen/pt: Start with emulated PCI_COMMAND set to zero."
we got into a discussion about how the xen/pt code uses XenPTReg->data
and dev.config. The crux of the matter was that the PCI_COMMAND
register read from the host changed from 0 to 3 (so PCI_COMMAND_IO
and PCI_COMMAND_MEM were enabled). That messed up QEMU badly (though
it did recover), with some nasty messages in Xen's ring buffer.
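
(For reference, the jump from 0 to 3 is just the two low decode bits of
PCI_COMMAND; the values below are the standard pci_regs.h ones:)

/* PCI_COMMAND_IO     0x1  - respond to I/O space accesses      */
/* PCI_COMMAND_MEMORY 0x2  - respond to memory space accesses   */
/* so a host read of 3 == PCI_COMMAND_IO | PCI_COMMAND_MEMORY:  */
/* both decode bits already on, while the emulated register      */
/* still held 0.                                                  */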

Fast-forward and we came to the conclusion that:

1) The Xen code uses dev.config as a cache of the host device's
   configuration registers (which is OK at init time, but not
   during run-time, since the generic API uses it as the guest's
   cache of configuration registers).

2) The generic code uses dev.config as a view of what the guest
   should see (a cache of guest values); see the sketch below.
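
To make the conflict concrete, here is a rough sketch (not part of the
series) of the two call paths; it reuses the existing names from
hw/pci/pci.c and hw/xen/xen-host-pci-device.c, but the exact init call
is from memory and only meant as an illustration:

/* Sketch only; assumes the xen/pt types from hw/xen/xen_pt.h. */
static void two_views_of_dev_config(XenPCIPassthroughState *s)
{
    /* Generic PCI core: dev.config is the *guest's* view.
     * pci_default_read_config() hands bytes from dev.config
     * straight back to the guest. */
    uint32_t guest_cmd = pci_default_read_config(&s->dev, PCI_COMMAND, 2);
    (void)guest_cmd;

    /* Old xen/pt init path (roughly): the same buffer is first filled
     * with the *host* device's config space, i.e. it is treated as a
     * host-side cache before any guest value has been written. */
    xen_host_pci_get_block(&s->real_device, 0, s->dev.config,
                           PCI_CONFIG_SPACE_SIZE);
}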

This patchset aims to fix that by having the Xen code use dev.config
as the guest configuration registers and nothing more.

Since the Xen code uses XenPTReg->data as the guest configuration
registers, we MUST redo some of the init code to read the host
values directly (instead of using dev.config as a cache) and save
the guest configuration values in dev.config. We also want
dev.config and ->data to be synced up so that the initial guest values
are not polluted with the host register values we are emulating.

See: "[PATCH v1 02/10] xen/pt: Sync up the dev.config and data values."
for that logic.
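
Roughly, the direction is as below (a sketch of the idea, not the actual
patch; xen_pt_word_reg_init_sketch() is a made-up helper and the masking
is simplified, with emu_mask/init_val as in the existing XenPTRegInfo):

/* Read the host register directly via xen_host_pci_get_word() instead
 * of peeking at dev.config, compute the emulated initial value, and
 * store the *same* value in both XenPTReg->data and dev.config so the
 * two start out in sync. */
static int xen_pt_word_reg_init_sketch(XenPCIPassthroughState *s,
                                       XenPTRegInfo *reg,
                                       uint32_t real_offset,
                                       XenPTReg *reg_entry)
{
    uint16_t host_val;
    uint16_t guest_val;
    int rc;

    rc = xen_host_pci_get_word(&s->real_device, real_offset, &host_val);
    if (rc) {
        return rc;
    }

    /* Keep the host bits we pass through, replace the bits we emulate. */
    guest_val = (host_val & ~reg->emu_mask) | (reg->init_val & reg->emu_mask);

    reg_entry->data = guest_val;
    pci_set_word(s->dev.config + real_offset, guest_val);
    return 0;
}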

And then the axe-patch: "[PATCH v1 05/10] xen/pt: Remove XenPTReg->data field."
which rips out ->data; from there on, only dev.config is used for the
guest configuration registers.
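
After that, an emulated read boils down to pulling the value out of
dev.config at the register's offset; a trivial sketch (the helper name
is made up for the example, the real read handlers stay where they are):

static uint16_t xen_pt_get_emul_word(XenPCIPassthroughState *s,
                                     uint32_t real_offset)
{
    /* With XenPTReg->data gone, the guest's value lives in dev.config;
     * pci_get_word() is the standard accessor for that buffer. */
    return pci_get_word(s->dev.config + real_offset);
}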

The rest of the patches add extra logging, error checking, and sanity
checking so that if bugs show up because of this patchset they will
be easier to identify (I hope!).

Stefano, patches #6, #7, and #9 are unchanged (I've added the Acked-by tag).
#10 has changed a bit, but it has the same flow, so I took the liberty
of putting your Ack there.

Please advise on:
 [PATCH v1 03/10] xen/pt: Check if reg->init function sets the 'data'

especially the return code - I picked ENXIO but perhaps there is a better
one?
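
For context, the check is roughly of this shape (a sketch with
simplified error handling and made-up variable names; the reg->init
hook signature is the existing one from xen_pt.h):

uint32_t data = 0;
int rc = reg->init(s, reg, real_offset, &data);
if (rc) {
    return rc;
}
if (reg->size < sizeof(data)) {
    uint32_t mask = (1u << (reg->size * 8)) - 1;
    if (data & ~mask) {
        /* The init handler set bits past reg->size bytes - refuse to
         * register this entry.  ENXIO ("no such device or address") is
         * what I picked; suggestions welcome. */
        return -ENXIO;
    }
}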

Also the:
 [PATCH v1 08/10] xen/pt: Make xen_pt_unregister_device idempotent

is unchanged since the last posting, because I could not implement your
suggestion to use QTAILQ_EMPTY (and QTAILQ_INIT).


 hw/xen/xen-host-pci-device.c |   5 +
 hw/xen/xen-host-pci-device.h |   1 +
 hw/xen/xen_pt.c              | 157 +++++++++++++++++++------------
 hw/xen/xen_pt.h              |   8 +-
 hw/xen/xen_pt_config_init.c  | 216 ++++++++++++++++++++++++++++++++-----------
 hw/xen/xen_pt_msi.c          |  18 +++-
 6 files changed, 288 insertions(+), 117 deletions(-)

Konrad Rzeszutek Wilk (10):
      xen/pt: Use xen_host_pci_get_[byte|word] instead of dev.config
      xen/pt: Sync up the dev.config and data values.
      xen/pt: Check if reg->init function sets the 'data' past the reg->size
      xen/pt: Use xen_host_pci_get_[byte,word,long] instead of xen_host_pci_get_long
      xen/pt: Remove XenPTReg->data field.
      xen/pt: Log xen_host_pci_get in two init functions
      xen/pt: Log xen_host_pci_get/set errors in MSI code.
      xen/pt: Make xen_pt_unregister_device idempotent
      xen/pt: Move bulk of xen_pt_unregister_device in its own routine.
      xen/pt: Check for return values for xen_host_pci_[get|set] in init




 

