Re: [Xen-devel] [PATCH V5 1/3] xen/mem_access: Support for memory-content hiding
On 07/14/2015 03:22 PM, Jan Beulich wrote:
>>>> On 13.07.15 at 19:14, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -511,6 +511,8 @@ int vcpu_initialise(struct vcpu *v)
>>
>>  void vcpu_destroy(struct vcpu *v)
>>  {
>> +    xfree(v->arch.vm_event.emul_read_data);
>
> Is this still needed now that you have
> vm_event_cleanup_domain()?

I had thought that there might be a code path where
vm_event_cleanup_domain() doesn't get called and yet the domain is
being destroyed, but I can't find anything obvious in the code except
a comment in arch/x86/mm/shadow/common.c - shadow_final_teardown() -
stating that "It is possible for a domain that never got
domain_kill()ed to get here with its shadow allocation intact."

Since common/domain.c's domain_kill() seems to be the only caller of
vm_event_cleanup(), I took that comment to mean that it could be
possible to end up in vcpu_destroy() without vm_event_cleanup_domain()
having been called, so I took the better-safe-than-sorry approach.
That is also why I'm setting v->arch.vm_event.emul_read_data to NULL
in the vm_event domain cleanup function, so that this xfree() does not
double-free. But this xfree() is probably not needed, so if confirmed
I'll remove it.

>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -67,6 +67,27 @@ static int null_write(const struct hvm_io_handler *handler,
>>      return X86EMUL_OKAY;
>>  }
>>
>> +static int set_context_data(void *buffer, unsigned int bytes)
>
> I think in the context of this function alone, "size" would be better
> than "bytes".

Ack.

>> +{
>> +    struct vcpu *curr = current;
>> +
>> +    if ( !curr->arch.vm_event.emul_read_data )
>> +        return X86EMUL_UNHANDLEABLE;
>> +    else
>
> Else after the respective if() unconditionally returning is odd.
> Perhaps better to invert the if() condition...

I agree - I only used the else so that I wouldn't need to put
safe_bytes on the stack if I didn't have to. I'll invert the condition.

>> +    {
>> +        unsigned int safe_bytes =
>> +            min(bytes, curr->arch.vm_event.emul_read_data->size);
>> +
>> +        if ( safe_bytes )
>> +            memcpy(buffer, curr->arch.vm_event.emul_read_data->data,
>> +                   safe_bytes);
>
> So why did you still keep this conditional?

I thought a memcpy() call that ends up doing nothing (copying 0 bytes)
would still be expensive, so I tried to optimize the code by not
making the call when safe_bytes == 0. Since nobody else seems to think
it's worth it, I'll remove the conditional.

>> @@ -1005,6 +1043,36 @@ static int hvmemul_rep_ins(
>>                                 !!(ctxt->regs->eflags & X86_EFLAGS_DF), gpa);
>>  }
>>
>> +static int hvmemul_rep_outs_set_context(
>> +    enum x86_segment src_seg,
>> +    unsigned long src_offset,
>> +    uint16_t dst_port,
>> +    unsigned int bytes_per_rep,
>> +    unsigned long *reps,
>> +    struct x86_emulate_ctxt *ctxt)
>> +{
>> +    unsigned int bytes = *reps * bytes_per_rep;
>> +    char *buf;
>> +    int rc;
>> +
>> +    buf = xmalloc_array(char, bytes);
>> +
>> +    if ( buf == NULL )
>> +        return X86EMUL_UNHANDLEABLE;
>> +
>> +    rc = set_context_data(buf, bytes);
>> +
>> +    if ( rc != X86EMUL_OKAY )
>> +        goto out;
>> +
>> +    rc = hvmemul_do_pio_buffer(dst_port, bytes, IOREQ_WRITE, buf);
>> +
>> +out:
>
> Labels should be indented by at least one space. But realistically
> the code wouldn't look worse without any goto...

Understood, will remove the label completely.
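Just to double-check that I've understood all of the above correctly,
the two helpers would then end up looking roughly like this (untested
sketch only, with the "size" rename, the inverted condition, the
unconditional memcpy() and the goto dropped):

static int set_context_data(void *buffer, unsigned int size)
{
    struct vcpu *curr = current;

    if ( !curr->arch.vm_event.emul_read_data )
        return X86EMUL_UNHANDLEABLE;

    /* Copy at most what the monitor application actually provided. */
    memcpy(buffer, curr->arch.vm_event.emul_read_data->data,
           min(size, curr->arch.vm_event.emul_read_data->size));

    return X86EMUL_OKAY;
}

static int hvmemul_rep_outs_set_context(
    enum x86_segment src_seg,
    unsigned long src_offset,
    uint16_t dst_port,
    unsigned int bytes_per_rep,
    unsigned long *reps,
    struct x86_emulate_ctxt *ctxt)
{
    unsigned int bytes = *reps * bytes_per_rep;
    char *buf;
    int rc;

    buf = xmalloc_array(char, bytes);

    if ( buf == NULL )
        return X86EMUL_UNHANDLEABLE;

    rc = set_context_data(buf, bytes);

    /* Only perform the port write if the buffer was filled in. */
    if ( rc == X86EMUL_OKAY )
        rc = hvmemul_do_pio_buffer(dst_port, bytes, IOREQ_WRITE, buf);

    xfree(buf);

    return rc;
}

The unconditional xfree(buf) at the end replaces the label, so the
error path no longer needs its own cleanup.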
>> @@ -1133,7 +1205,20 @@ static int hvmemul_rep_movs(
>>       */
>>      rc = hvm_copy_from_guest_phys(buf, sgpa, bytes);
>>      if ( rc == HVMCOPY_okay )
>> +    {
>> +        if ( unlikely(hvmemul_ctxt->set_context) )
>> +        {
>> +            rc = set_context_data(buf, bytes);
>> +
>> +            if ( rc != X86EMUL_OKAY)
>> +            {
>> +                xfree(buf);
>> +                return rc;
>> +            }
>> +        }
>> +
>>          rc = hvm_copy_to_guest_phys(dgpa, buf, bytes);
>> +    }
>
> Why do you not bypass hvm_copy_from_guest_phys() here? This
> way it would - afaict - become consistent with the other cases.

You're right, it's unnecessary. Will remove the
hvm_copy_from_guest_phys().

>> --- a/xen/arch/x86/vm_event.c
>> +++ b/xen/arch/x86/vm_event.c
>> @@ -23,6 +23,36 @@
>>  #include <xen/sched.h>
>>  #include <asm/hvm/hvm.h>
>>
>> +int vm_event_init_domain(struct domain *d)
>> +{
>> +    struct vcpu *v;
>> +
>> +    for_each_vcpu( d, v )
>
> Either you consider for_each_vcpu a keyword-like thing, then this
> is missing a blank before the opening parenthesis, or you don't, in
> which case the blanks immediately inside the parentheses should
> be dropped.

Understood, will fix it.

>> +    {
>> +        if ( v->arch.vm_event.emul_read_data )
>> +            continue;
>> +
>> +        v->arch.vm_event.emul_read_data =
>> +            xzalloc(struct vm_event_emul_read_data);
>> +
>> +        if ( !v->arch.vm_event.emul_read_data )
>> +            return -ENOMEM;
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +void vm_event_cleanup_domain(struct domain *d)
>> +{
>> +    struct vcpu *v;
>> +
>> +    for_each_vcpu( d, v )
>> +    {
>> +        xfree(v->arch.vm_event.emul_read_data);
>> +        v->arch.vm_event.emul_read_data = NULL;
>> +    }
>> +}
>
> There not being a race here is attributed to vm_event_enable()
> being implicitly serialized by the domctl lock, and
> vm_event_disable(), beyond its domctl use, only being on domain
> cleanup paths? Would be nice to have a comment to that effect.

Indeed, that's how I understood it to happen. I'll add a comment.
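Something along these lines, perhaps - just a sketch of the comment
wording, shown on the cleanup function, and using the keyword-like
for_each_vcpu spacing of the two options you listed:

/*
 * No locking needed here: vm_event_enable() is implicitly serialized
 * by the domctl lock, and vm_event_disable(), beyond its domctl use,
 * is only reached from the domain cleanup path, so
 * vm_event_init_domain() / vm_event_cleanup_domain() cannot race.
 */
void vm_event_cleanup_domain(struct domain *d)
{
    struct vcpu *v;

    for_each_vcpu ( d, v )
    {
        xfree(v->arch.vm_event.emul_read_data);
        /* Make a possible later xfree() in vcpu_destroy() harmless. */
        v->arch.vm_event.emul_read_data = NULL;
    }
}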
Thanks,
Razvan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel