
[Xen-devel] Re: [PATCH 4/6] mm: New XENMEM space, XENMAPSPACE_gmfn_range



>>> On 10.11.11 at 09:44, Jean Guyader <jean.guyader@xxxxxxxxxxxxx> wrote:

In the native implementation I see neither the XENMAPSPACE_gmfn_range
case actually getting handled in the main switch (did you mean to change
xatp.space to XENMAPSPACE_gmfn in that case?), nor how you communicate
back how many of the pages were successfully processed when an error
occurs in the middle of processing or when a continuation is required.

But with the patch being pretty hard to read, maybe I'm simply
overlooking something?
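
For illustration only (this is not the code from the patch), the kind of
handling being asked about might look roughly like the sketch below. The
helper add_one_gmfn() is a stand-in for the existing single-page
XENMAPSPACE_gmfn logic, and using xatp->size as the remaining-page counter
is an assumption about the interface proposed in this series:

/*
 * Sketch only, not the submitted patch: add_one_gmfn() stands in for the
 * existing single-page XENMAPSPACE_gmfn handling, and xatp->size is assumed
 * to be the range length added by this series.
 */
static int add_gmfn_range(struct domain *d, struct xen_add_to_physmap *xatp)
{
    int rc = 0;

    while ( xatp->size != 0 )
    {
        /* Treat each page exactly like the single-page gmfn case. */
        rc = add_one_gmfn(d, xatp->idx, xatp->gpfn);
        if ( rc != 0 )
            break;

        xatp->idx++;
        xatp->gpfn++;
        xatp->size--;      /* progress the caller can report back */
    }

    /*
     * Whatever is left in xatp->size is what has to reach the guest (or be
     * encoded in the continuation) so it knows how many pages were done.
     */
    return rc;
}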

Further (I realize I should have commented on this earlier), I think that
in order to allow forward progress you should not check for preemption
on the very first iteration of each (re-)invocation. That would also
guarantee that there is no behavioral change for the original single-page
variants.
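
A minimal sketch of that suggestion (the counter done, the helper
add_one_gmfn() and the error code used to trigger the continuation are
hypothetical, not taken from the patch):

/* Sketch: skip the preemption check on the very first iteration of each
 * (re-)invocation so at least one page is always processed. */
for ( done = 0; xatp->size != 0; done++ )
{
    if ( done != 0 && hypercall_preempt_check() )
    {
        rc = -EAGAIN;    /* caller arranges a continuation for the rest */
        break;
    }

    rc = add_one_gmfn(d, xatp->idx, xatp->gpfn);
    if ( rc != 0 )
        break;

    xatp->idx++;
    xatp->gpfn++;
    xatp->size--;
}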

>--- a/xen/arch/x86/x86_64/compat/mm.c
>+++ b/xen/arch/x86/x86_64/compat/mm.c
>@@ -63,6 +63,16 @@ int compat_arch_memory_op(int op, XEN_GUEST_HANDLE(void) arg)
> 
>         XLAT_add_to_physmap(nat, &cmp);
>         rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
>+        if ( rc < 0 )
>+            return rc;
>+
>+        if ( rc == __HYPERVISOR_memory_op )
>+            hypercall_xlat_continuation(NULL, 0x2, nat, arg);
>+
>+        XLAT_add_to_physmap(&cmp, nat);
>+
>+        if ( copy_to_guest(arg, &cmp, 1) )
>+            return -EFAULT;

Other than in the XENMEM_[gs]et_pod_target case you don't (so far, subject
to the above comment resulting in a behavioral change) have any real
outputs here, and hence there's no need to always do the outbound
translation - i.e. all of this could be moved into the if ()'s body.
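
Read that way, the wrapper quoted above could be restructured along these
lines (a sketch of the suggestion, not a tested change, and it only holds
as long as the plain success path really has nothing to copy back):

        rc = arch_memory_op(op, guest_handle_from_ptr(nat, void));
        if ( rc < 0 )
            return rc;

        if ( rc == __HYPERVISOR_memory_op )
        {
            hypercall_xlat_continuation(NULL, 0x2, nat, arg);

            /* Only the continuation path has state worth translating back. */
            XLAT_add_to_physmap(&cmp, nat);

            if ( copy_to_guest(arg, &cmp, 1) )
                return -EFAULT;
        }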

Jan

> 
>         break;
>     }


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

