
Re: [Xen-devel] [PATCH v3 1/2] x86/mem-sharing: Bulk mem-sharing entire domains

On Fri, Oct 16, 2015 at 12:46 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>> On 15.10.15 at 20:09, <tamas@xxxxxxxxxxxxx> wrote:
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -1293,6 +1293,42 @@ int relinquish_shared_pages(struct domain *d)
>     return rc;
> }
>
> +static int bulk_share(struct domain *d, struct domain *cd, unsigned long max,
> +                      struct mem_sharing_op_bulk *bulk)
> +{
> +    int rc = 0;
> +    shr_handle_t sh, ch;
> +
> +    while( bulk->start <= max )
> +    {
> +        rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
> +        if ( rc == -ENOMEM )
> +            break;
> +        else if ( rc )

Pointless else.

Pointless but harmless.
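Happy to drop it for v4 if that's preferred though, i.e. simply:

        rc = mem_sharing_nominate_page(d, bulk->start, 0, &sh);
        if ( rc == -ENOMEM )
            break;
        if ( rc )
            goto next;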

> +            goto next;
> +
> +        rc = mem_sharing_nominate_page(cd, bulk->start, 0, &ch);
> +        if ( rc == -ENOMEM )
> +            break;
> +        else if ( rc )
> +            goto next;
> +
> +        mem_sharing_share_pages(d, bulk->start, sh, cd, bulk->start, ch);
> +
> +next:

Labels indented by at least one space please.

Yes, I keep forgetting.
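So for v4 this becomes just:

 next:

with the one extra leading space.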

> @@ -1467,6 +1503,69 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>         }
>         break;
>
> +        case XENMEM_sharing_op_bulk_share:
> +        {
> +            unsigned long max_sgfn, max_cgfn;
> +            struct domain *cd;
> +
> +            rc = -EINVAL;
> +            if ( !mem_sharing_enabled(d) )
> +                goto out;
> +
> +            rc = rcu_lock_live_remote_domain_by_id(mso.u.bulk.client_domain,
> +                                                   &cd);
> +            if ( rc )
> +                goto out;
> +
> +            rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
> +            if ( rc )
> +            {
> +                rcu_unlock_domain(cd);
> +                goto out;
> +            }
> +
> +            if ( !mem_sharing_enabled(cd) )
> +            {
> +                rcu_unlock_domain(cd);
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            if ( !atomic_read(&d->pause_count) ||
> +                 !atomic_read(&cd->pause_count) )

As said in the reply to your respective inquiry - you want to look at
controller_pause_count here instead.

Thanks!
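So I take it the check here should turn into something along the lines of:

            /* Both domains must have been paused by the controller. */
            if ( !d->controller_pause_count ||
                 !cd->controller_pause_count )

assuming it's fine to read the plain counter directly at this point.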

> +            {
> +                rcu_unlock_domain(cd);

Considering how many times you do this, perhaps worth putting
a label at the (current) success path's unlock?

I'll take a look.
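Probably something along these lines, with each error path just setting rc and jumping to a single label (name purely illustrative) placed at the current success path's unlock:

                rc = -EINVAL;
                goto out_rcu;

            ...

 out_rcu:
            rcu_unlock_domain(cd);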

> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            max_sgfn = domain_get_maximum_gpfn(d);
> +            max_cgfn = domain_get_maximum_gpfn(cd);
> +
> +            if ( max_sgfn != max_cgfn || max_sgfn < mso.u.bulk.start )
> +            {
> +                rcu_unlock_domain(cd);
> +                rc = -EINVAL;
> +                goto out;
> +            }
> +
> +            rc = bulk_share(d, cd, max_sgfn, &mso.u.bulk);
> +            if ( rc > 0 )
> +            {
> +                if ( __copy_to_guest(arg, &mso, 1) )

Wouldn't copying just the one field you care about suffice here?

Sure.
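I take it something along these lines is what you have in mind, since only the updated start gfn matters for the continuation:

                if ( __copy_field_to_guest(arg, &mso, u.bulk.start) )
                    rc = -EFAULT;
                else
                    rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                                       "lh", XENMEM_sharing_op,
                                                       arg);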

> +                    rc = -EFAULT;
> +                else
> +                    rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
> +                                                       "lh", XENMEM_sharing_op,
> +                                                       arg);
> +            } else {

Coding style.

Coding style matches the rest of the file.

> +                mso.u.bulk.applied = atomic_read(&d->shr_pages);

Is there no possibility for something else to also modify d->shr_pages
in parallel, misguiding the caller when it looks at the output field? Also,
you're not copying this back to guest memory...

It shouldn't be an issue, as I don't really care about how many pages were shared by the operation itself; I care about the total number of pages shared on the domain. If there were already some pages shared before this operation, those should be counted here again. I could of course issue a separate getdomaininfo to get the same information, it's just more expedient to report it right away instead of having to do a separate call. By default shr_pages will be equivalent to the number of pages successfully shared during this operation. If someone decides to also do unsharing in parallel with this op (how would that even make sense?), well, that's not supported right now, so all bets are off from my perspective.

Also, we do copy it back to guest memory when the operation finishes, for the whole mso; that's not bulk specific and applies to all the !rc cases further down.
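For comparison, fetching the same number separately from the toolstack side would look roughly like this (libxc sketch; xch and domid being whatever the caller already has at hand):

    xc_dominfo_t info;

    /* An extra hypercall just to read back the domain's shr_pages count. */
    if ( xc_domain_getinfo(xch, domid, 1, &info) == 1 && info.domid == domid )
        printf("shared pages: %lu\n", info.nr_shared_pages);

which is exactly the extra round-trip I'd like to avoid.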

> @@ -482,7 +483,16 @@ struct xen_mem_sharing_op {
>             uint64_aligned_t client_gfn;    /* IN: the client gfn */
>             uint64_aligned_t client_handle; /* IN: handle to the client page */
>             domid_t  client_domain; /* IN: the client domain id */
> -        } share;
> +        } share;
> +        struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
> +            domid_t client_domain;           /* IN: the client domain id */

Explicit padding here please (albeit I see already from the context
that this isn't being done in e.g. the share sub-structure above
either).

Not sure what you mean by explicit padding here. The way it is right now pretty much matches what was already in place.
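If it's about making the alignment gap after the domid_t visible, is it something like this you have in mind (layout purely illustrative)?

        struct mem_sharing_op_bulk {         /* OP_BULK_SHARE */
            domid_t client_domain;           /* IN: the client domain id */
            uint16_t _pad[3];                /* Must be set to 0 */
            uint64_aligned_t start;          /* IN: start gfn */
            uint64_aligned_t applied;        /* OUT: the number of gfns shared */
        } bulk;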

Tamas