[Xen-devel] [PATCH v6 3/4] x86/ioreq server: Handle read-modify-write cases for p2m_ioreq_server pages.
In ept_handle_violation(), write violations are also treated as
read violations. So when a VM accesses a write-protected address
with a read-modify-write instruction, the read emulation process
is triggered first.
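
For illustration only (this is not the Xen source; the mask names
and the npfec layout below are simplified stand-ins for the real
exit-qualification handling), the classification step can be
pictured as:

/*
 * Illustrative sketch, not the actual ept_handle_violation():
 * READ_VIOLATION, WRITE_VIOLATION and struct npfec are hypothetical
 * stand-ins used only to show the idea.
 */
#include <stdbool.h>
#include <stdio.h>

#define READ_VIOLATION  (1u << 0)
#define WRITE_VIOLATION (1u << 1)

struct npfec {
    bool read_access;
    bool write_access;
};

static struct npfec classify(unsigned int qualification)
{
    struct npfec npfec = {
        /*
         * A write fault also sets read_access, so the read half of a
         * read-modify-write instruction is emulated before the write.
         */
        .read_access  = qualification & (READ_VIOLATION | WRITE_VIOLATION),
        .write_access = qualification & WRITE_VIOLATION,
    };

    return npfec;
}

int main(void)
{
    struct npfec f = classify(WRITE_VIOLATION);

    printf("read=%d write=%d\n", f.read_access, f.write_access);
    return 0;
}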
For p2m_ioreq_server pages, the current ioreq server only forwards
write operations to the device model. Therefore, when such a page
is accessed by a read-modify-write instruction, the read operations
should be emulated here in the hypervisor. This patch provides such
a handler to copy the data to the buffer.
Note: MMIOs with the p2m_mmio_dm type do not need such special
treatment, because both reads and writes go to the device model.
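
As a rough decision sketch (hypothetical helper, not Xen code; the
enum and function names are stand-ins), the difference between the
two types is:

/*
 * Hypothetical sketch, not Xen code: which accesses reach the
 * device model for the two p2m types discussed above.
 */
#include <stdbool.h>

enum p2m_type_sketch { sketch_mmio_dm, sketch_ioreq_server };

static bool goes_to_device_model(enum p2m_type_sketch t, bool is_write)
{
    if ( t == sketch_mmio_dm )
        return true;            /* reads and writes both forwarded */

    /*
     * sketch_ioreq_server: only writes are forwarded; reads are
     * satisfied from guest RAM inside the hypervisor.
     */
    return is_write;
}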
Signed-off-by: Paul Durrant <paul.durrant@xxxxxxxxxx>
Signed-off-by: Yu Zhang <yu.c.zhang@xxxxxxxxxxxxxxx>
---
Cc: Paul Durrant <paul.durrant@xxxxxxxxxx>
Cc: Jan Beulich <jbeulich@xxxxxxxx>
Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
xen/arch/x86/hvm/emulate.c | 45 ++++++++++++++++++++++++++++++++++++++++++++-
1 file changed, 44 insertions(+), 1 deletion(-)
diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index a33346e..f92de48 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -95,6 +95,41 @@ static const struct hvm_io_handler null_handler = {
.ops = &null_ops
};
+static int mem_read(const struct hvm_io_handler *io_handler,
+ uint64_t addr,
+ uint32_t size,
+ uint64_t *data)
+{
+ struct domain *currd = current->domain;
+ unsigned long gmfn = paddr_to_pfn(addr);
+ unsigned long offset = addr & ~PAGE_MASK;
+ struct page_info *page = get_page_from_gfn(currd, gmfn, NULL, P2M_UNSHARE);
+ uint8_t *p;
+
+ ASSERT(offset + size <= PAGE_SIZE);
+
+ if ( !page )
+ return X86EMUL_UNHANDLEABLE;
+
+ p = __map_domain_page(page);
+ p += offset;
+ memcpy(data, p, size);
+
+ unmap_domain_page(p);
+ put_page(page);
+
+ return X86EMUL_OKAY;
+}
+
+static const struct hvm_io_ops mem_ops = {
+ .read = mem_read,
+ .write = null_write
+};
+
+static const struct hvm_io_handler mem_handler = {
+ .ops = &mem_ops
+};
+
static int hvmemul_do_io(
bool_t is_mmio, paddr_t addr, unsigned long *reps, unsigned int size,
uint8_t dir, bool_t df, bool_t data_is_addr, uintptr_t data)
@@ -204,7 +239,15 @@ static int hvmemul_do_io(
/* If there is no suitable backing DM, just ignore accesses */
if ( !s )
{
- rc = hvm_process_io_intercept(&null_handler, &p);
+ /*
+ * For p2m_ioreq_server pages accessed with read-modify-write
+ * instructions, we provide a read handler to copy the data to
+ * the buffer.
+ */
+ if ( p2mt == p2m_ioreq_server )
+ rc = hvm_process_io_intercept(&mem_handler, &p);
+ else
+ rc = hvm_process_io_intercept(&null_handler, &p);
vio->io_req.state = STATE_IOREQ_NONE;
}
else
--
1.9.1
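
For reference, a rough sketch of how the new handler would be
exercised when emulating, say, `or $1, (addr)` against a
p2m_ioreq_server page with no backing device model. This is
hypothetical driver code, not part of the patch; it assumes the
static mem_read()/null_write() helpers from emulate.c with the
signatures shown in the diff above:

uint64_t data = 0;

/* Read phase: mem_read() maps the guest frame and copies the bytes. */
if ( mem_read(&mem_handler, addr, sizeof(data), &data) == X86EMUL_OKAY )
{
    data |= 1;                                        /* modify */
    /* Write phase: with no backing DM the write is simply dropped. */
    null_write(&mem_handler, addr, sizeof(data), data);
}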