
Re: [Xen-devel] [RFC][v3][PATCH 4/6] xen:x86: add XENMEM_reserved_device_memory_map to expose RMRR



On 2014/8/15 20:15, Andrew Cooper wrote:
On 15/08/14 09:27, Tiejun Chen wrote:
We should expose RMRR mapping to libxc, then setup_guest() can
check if current MMIO is not covered by any RMRR mapping.

Signed-off-by: Tiejun Chen <tiejun.chen@xxxxxxxxx>
---
  xen/arch/x86/mm.c | 32 ++++++++++++++++++++++++++++++++
  1 file changed, 32 insertions(+)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d23cb3f..fb6e92f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4769,6 +4769,38 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
          return 0;
      }

+    case XENMEM_reserved_device_memory_map:
+    {
+        struct xen_memory_map map;
+        XEN_GUEST_HANDLE(e820entry_t) buffer;
+        XEN_GUEST_HANDLE_PARAM(e820entry_t) buffer_param;
+        unsigned int i;
+
+        if ( copy_from_guest(&map, arg, 1) )
+            return -EFAULT;

This hypercall implementation is looking somewhat more plausible, but
still has some issues.

+        if ( map.nr_entries < rmrr_maps.nr_map + 1 )
+            return -EINVAL;

This causes a fencepost error, does it not?

map.nr_entries is E820MAX, and rmrr_maps.nr_map should obviously be far smaller than E820MAX. So where is the problem?

Here I used XENMEM_machine_memory_map as a reference.


Furthermore, the useful error would be to return -ENOBUFS and fill
arg.nr_entries with rmrr_maps.nr_map, so the caller can allocate an
appropriately sized buffer.


It is also very common with hypercalls like this to allow a null
guest handle as an explicit request for the size.

It looks like you want the hypercall to be issued twice to finish the job, but what's wrong with my way?

Again, I used XENMEM_machine_memory_map as a reference here.


See XEN_DOMCTL_get_vcpu_msrs as an (admittedly more complicated) example.
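For illustration only, here is a minimal userspace model of the pattern being suggested (a null or too-small buffer makes the call report the required count and return -ENOBUFS). The names and the sample RMRR entries are hypothetical; the real hypercall would use guest handles and copy_to_guest rather than plain pointers and memcpy:

```c
#include <errno.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical stand-in for the reserved-memory entries. */
struct entry { unsigned long start, end; };

static const struct entry rmrr_map[] = {
    { 0xa0000UL,    0xbffffUL },
    { 0xfd000000UL, 0xfdffffffUL },
};
#define NR_RMRR (sizeof(rmrr_map) / sizeof(rmrr_map[0]))

/*
 * Model of the suggested semantics: if buf is NULL or *nr_entries is
 * too small, report the required count in *nr_entries and return
 * -ENOBUFS so the caller can retry with a properly sized buffer.
 */
static int get_rmrr_map(struct entry *buf, unsigned int *nr_entries)
{
    if ( buf == NULL || *nr_entries < NR_RMRR )
    {
        *nr_entries = NR_RMRR;
        return -ENOBUFS;
    }

    /* One bulk copy of all entries. */
    memcpy(buf, rmrr_map, sizeof(rmrr_map));
    *nr_entries = NR_RMRR;
    return 0;
}
```

The caller would then issue the call twice: once with a null buffer to learn the size, then again with an appropriately sized allocation.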

+
+        buffer_param = guest_handle_cast(map.buffer, e820entry_t);
+        buffer = guest_handle_from_param(buffer_param, e820entry_t);
+        if ( !guest_handle_okay(buffer, map.nr_entries) )
+            return -EFAULT;
+
+        /* Currently we just need to cover RMRR. */
+        for ( i = 0; i < rmrr_maps.nr_map; ++i )
+        {
+            if ( __copy_to_guest_offset(buffer, i, rmrr_maps.map + i, 1) )
+                return -EFAULT;

What is wrong with a single copy_to_guest_offset() ?

I haven't tried that, but I will, to check that it still works.
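As a model only (plain memcpy standing in for the guest-access copy, which can copy a run of entries in one call), the per-entry loop and the single bulk copy produce the same result:

```c
#include <string.h>

struct entry { unsigned long start, end; };

/* Per-entry loop, as in the patch: one copy per element. */
static void copy_looped(struct entry *dst, const struct entry *src,
                        unsigned int n)
{
    unsigned int i;

    for ( i = 0; i < n; ++i )
        memcpy(dst + i, src + i, sizeof(*src));
}

/* Single bulk copy, as suggested: same result in one call. */
static void copy_bulk(struct entry *dst, const struct entry *src,
                      unsigned int n)
{
    memcpy(dst, src, n * sizeof(*src));
}
```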

Thanks
Tiejun


+        }
+
+        map.nr_entries = rmrr_maps.nr_map;
+
+        if ( __copy_to_guest(arg, &map, 1) )
+            return -EFAULT;
+
+        return 0;
+    }
+
      case XENMEM_machphys_mapping:
      {
          struct xen_machphys_mapping mapping = {



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

