
Re: [Xen-devel] [PATCH v7 2/5] sysctl: Add sysctl interface for querying PCI topology



On 05/01/2015 09:56 AM, Ian Campbell wrote:
On Fri, 2015-04-24 at 13:42 -0400, Boris Ostrovsky wrote:
On 04/24/2015 10:09 AM, Boris Ostrovsky wrote:
On 04/24/2015 03:19 AM, Jan Beulich wrote:
On 24.04.15 at 00:20, <boris.ostrovsky@xxxxxxxxxx> wrote:
On 04/21/2015 03:01 AM, Jan Beulich wrote:
+                 ((++dev_cnt > 0x3f) && hypercall_preempt_check()) )
+                break;
+        }
+
+        if ( (!ret || (ret == -ENODEV)) &&
+             __copy_field_to_guest(u_sysctl, op, u.pcitopoinfo.first_dev) )
+            ret = -EFAULT;
+    }
+    break;
+#endif
With the continuation-less model now used I don't think it makes
sense to have first_dev and num_devs - for re-invocation all the
caller needs to do is increment the buffer pointer suitably. I.e.
you can get away with just a count of devices afaict.

This would require walking xc_hypercall_buffer_t->hbuf. Would
something like

      set_xen_guest_handle_raw(sysctl..., (void *)HYPERCALL_BUFFER_AS_ARG(foo) + offset)

be acceptable? I don't think I see anything better.

I thought of adding a set_xen_guest_handle_offset() that would look
similar to set_xen_guest_handle(), but then I felt that having this in
the API may not be a good idea, since xc_hypercall_buffer_t->hbuf would
end up pointing to memory that is not allocated for the full
xc_hypercall_buffer_t->sz.
There ought to be a way to create a guest handle from other than
the start of an allocated hypercall buffer, but that's really more a
question for the tool stack folks.
Yes, this was a question for the toolstack people.
(Adjusted TO/CC to reflect this).
Not sure if I've got the full context

The hypervisor only processes part of the buffer and returns to the caller how many elements were completed. The caller then advances the buffer pointer and retries the hypercall.
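
As a rough sketch (illustrative only -- it assumes the hypervisor
writes the number of completed entries back into num_devs, and that
devs is a hypercall buffer of physdev_pci_device_t entries), the
caller side would look something like:

    processed = 0;
    while ( processed < num_devs )
    {
        sysctl.u.pcitopoinfo.num_devs = num_devs - processed;
        /* Point the handle 'processed' elements into the buffer. */
        set_xen_guest_handle_raw(sysctl.u.pcitopoinfo.devs,
            (void *)HYPERCALL_BUFFER_AS_ARG(devs) +
            processed * sizeof(physdev_pci_device_t));
        if ( (ret = do_sysctl(xch, &sysctl)) != 0 )
            break;
        processed += sysctl.u.pcitopoinfo.num_devs;
    }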


  but would it be possible to
rearrange things such that libxc was bouncing/unbouncing only the slice
of the input which is needed for the invocation? Perhaps by pushing it
into whichever loop there is.
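
Roughly something like this, I imagine (sketch only: the batch size,
the use of DECLARE_NAMED_HYPERCALL_BOUNCE over a slice, and the
hypervisor writing the processed count back into num_devs are all
assumptions here):

    #define PCI_TOPO_BATCH 64 /* caller-chosen chunk size (assumed) */

    while ( processed < num_devs )
    {
        unsigned int chunk = num_devs - processed > PCI_TOPO_BATCH ?
                             PCI_TOPO_BATCH : num_devs - processed;
        /* Bounce only this slice of the caller's array. */
        DECLARE_NAMED_HYPERCALL_BOUNCE(slice, devs + processed,
                                       chunk * sizeof(*devs),
                                       XC_HYPERCALL_BUFFER_BOUNCE_BOTH);

        if ( xc_hypercall_bounce_pre(xch, slice) )
            return -1;
        sysctl.u.pcitopoinfo.num_devs = chunk;
        set_xen_guest_handle(sysctl.u.pcitopoinfo.devs, slice);
        ret = do_sysctl(xch, &sysctl);
        xc_hypercall_bounce_post(xch, slice);
        if ( ret )
            break;
        processed += sysctl.u.pcitopoinfo.num_devs;
    }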

Oh yes, that is certainly possible. I am just trying to avoid doing this because of the overhead of copying buffers back and forth many times. The worst part is that with the current bounce APIs I'd need to bounce the full remaining buffer on every retry, even though only a part of it is actually needed each time.

In other words, imagine we have 1024 elements in the buffer originally and the hypervisor only completes 256 elements at a time. I would then need to bounce elements [0-1024], [256-1024], [512-1024] and then [768-1024]. Not 4 chunks of 256 elements, as one would want.
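
(That is 1024 + 768 + 512 + 256 = 2560 elements bounced in total -- two and a half times the 1024 that actually need copying.)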

I looked at HYPERCALL_BOUNCE_SET_SIZE but I don't think it will help here, since we only know the "useful" (i.e. processed by the hypervisor) buffer size after the hypercall, i.e. on the unbounce path.


If not then I think set_xen_guest_handle_offset or perhaps
guest_handle_add_offset (in line with the xen side equivalent) would be
the way to go.

Under no circumstances should set_xen_guest_handle_raw be used.

But it can be used inside the new set_xen_guest_handle_offset(), right?
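
For illustration, the macro might then be little more than (sketch
only, not a committed implementation; a byte offset is assumed):

    /* Derive a guest handle pointing _off bytes into an existing
     * hypercall buffer, keeping the raw access in one place. */
    #define set_xen_guest_handle_offset(_hnd, _val, _off)          \
        set_xen_guest_handle_raw(_hnd,                             \
            (void *)HYPERCALL_BUFFER_AS_ARG(_val) + (_off))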

-boris


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

