Re: [Xen-devel] [PATCH v2 1/3] x86/mem_access: Make the mem_access ops generic
>> -int mem_access_memop(struct domain *d, xen_mem_event_op_t *meo)
>> +int mem_access_memop(unsigned long cmd,
>> +                     XEN_GUEST_HANDLE_PARAM(xen_mem_access_op_t) arg)
>>  {
>> -    int rc;
>> +    long rc;
>> +    xen_mem_access_op_t mao;
>> +    struct domain *d;
>> +
>> +    if ( copy_from_guest(&mao, arg, 1) )
>> +        return -EFAULT;
>> +
>> +    rc = rcu_lock_live_remote_domain_by_id(mao.domid, &d);
>> +    if ( rc )
>> +        return rc;
>> +
>> +    if ( !is_hvm_domain(d) )
>> +        return -EINVAL;
>> +
>> +    rc = xsm_mem_event_op(XSM_TARGET, d, XENMEM_access_op);
>> +    if ( rc )
>> +        goto out;
>>
>>      if ( unlikely(!d->mem_event->access.ring_page) )
>>          return -ENODEV;
>
> This and the earlier return need to become "goto out".

Ah yes. I will fix that.

>> -    switch( meo->op )
>> +    switch ( mao.op )
>>      {
>>      case XENMEM_access_op_resume:
>>      {
>>          p2m_mem_access_resume(d);
>>          rc = 0;
>> +        break;
>> +    }
>
> Please remove the pointless opening brace rather than adding a closing
> one here.

Sorry, I should have caught that one.

>> +
>> +    case XENMEM_access_op_set_access:
>> +    {
>> +        unsigned long start_iter = cmd & ~MEMOP_CMD_MASK;
>> +
>> +        rc = -EINVAL;
>> +        if ( (mao.pfn != ~0ull) &&
>> +             (mao.nr < start_iter ||
>> +              ((mao.pfn + mao.nr - 1) < mao.pfn) ||
>> +              ((mao.pfn + mao.nr - 1) > domain_get_maximum_gpfn(d))) )
>> +            break;
>> +
>> +        rc = p2m_set_mem_access(d, mao.pfn, mao.nr, start_iter,
>> +                                MEMOP_CMD_MASK, mao.access);
>> +        if ( rc > 0 )
>> +        {
>> +            ASSERT(!(rc & MEMOP_CMD_MASK));
>> +            rc = hypercall_create_continuation(__HYPERVISOR_memory_op, "lh",
>> +                                               cmd | rc, arg);
>
> Either you need to mask cmd properly, or directly use
> XENMEM_access_op_set_access.

I will directly use XENMEM_access_op, as XENMEM_access_op_set_access is a
sub-op.

Thanks,
Aravindh
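
For readers following the first comment: rcu_lock_live_remote_domain_by_id()
takes a reference on the domain, so once it has succeeded every exit path
must drop that reference before returning. A minimal sketch of the corrected
exit paths, assuming a common "out" label that calls rcu_unlock_domain() as
in other Xen memops (illustrative only, not the actual v3 patch):

    /*
     * Illustrative sketch: once rcu_lock_live_remote_domain_by_id() has
     * taken a reference on d, every failure path must drop it, so the
     * early returns become jumps to a common exit label.
     */
    if ( !is_hvm_domain(d) )
    {
        rc = -EINVAL;
        goto out;
    }

    rc = xsm_mem_event_op(XSM_TARGET, d, XENMEM_access_op);
    if ( rc )
        goto out;

    if ( unlikely(!d->mem_event->access.ring_page) )
    {
        rc = -ENODEV;
        goto out;
    }

    /* ... switch ( mao.op ) ... */

 out:
    rcu_unlock_domain(d);
    return rc;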
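
The last comment concerns the continuation encoding: on re-entry, cmd already
carries the previous restart offset in its upper bits, so OR-ing the new
offset into the unmasked cmd merges the old and new values. Below is a small,
self-contained C demo of the failure mode and of the fix Aravindh describes;
MEMOP_EXTENT_SHIFT/MEMOP_CMD_MASK mirror Xen's memory_op encoding (low bits
select the sub-op, upper bits carry the restart point), and the sub-op number
used here is an assumed value for illustration only:

    #include <assert.h>
    #include <stdio.h>

    /* Constants mirror Xen's memop command encoding; the sub-op number
     * is assumed for illustration, not copied from the Xen tree. */
    #define MEMOP_EXTENT_SHIFT 6
    #define MEMOP_CMD_MASK     ((1UL << MEMOP_EXTENT_SHIFT) - 1)
    #define XENMEM_access_op   21UL   /* assumed value */

    int main(void)
    {
        /* First entry: cmd carries no restart offset yet. */
        unsigned long cmd = XENMEM_access_op;

        /* Suppose the op is preempted 3 pages in; rc comes back with the
         * offset already placed in the upper bits, as the ASSERT in the
         * patch expects. */
        unsigned long rc = 3UL << MEMOP_EXTENT_SHIFT;
        assert(!(rc & MEMOP_CMD_MASK));

        unsigned long cont = cmd | rc;        /* fine on first entry */
        printf("first continuation: offset=%lu\n",
               cont >> MEMOP_EXTENT_SHIFT);   /* 3, as expected */

        /*
         * On re-entry cmd is the continuation value, i.e. it still has
         * offset 3 in its upper bits.  OR-ing in a new offset (say 5)
         * merges the bit patterns: 3|5 == 7, so the hypercall would
         * restart at the wrong place.
         */
        unsigned long rc2   = 5UL << MEMOP_EXTENT_SHIFT;
        unsigned long buggy = cont | rc2;
        unsigned long fixed = XENMEM_access_op | rc2;  /* the fix */

        printf("buggy re-entry: offset=%lu (expected 5)\n",
               buggy >> MEMOP_EXTENT_SHIFT);
        printf("fixed re-entry: offset=%lu\n",
               fixed >> MEMOP_EXTENT_SHIFT);
        return 0;
    }

Using the constant sub-op (or masking cmd with MEMOP_CMD_MASK first) keeps
the sub-op bits clean, so only the freshly returned offset ends up in the
continuation.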