Re: [Xen-devel] [PATCH V5 1/3] xen/mem_access: Support for memory-content hiding
>>> On 13.07.15 at 19:14, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -511,6 +511,8 @@ int vcpu_initialise(struct vcpu *v)
> 
>  void vcpu_destroy(struct vcpu *v)
>  {
> +    xfree(v->arch.vm_event.emul_read_data);

Is this still needed now that you have vm_event_cleanup_domain()?

> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -67,6 +67,27 @@ static int null_write(const struct hvm_io_handler *handler,
>      return X86EMUL_OKAY;
>  }
> 
> +static int set_context_data(void *buffer, unsigned int bytes)

I think in the context of this function alone, "size" would be better
than "bytes".

> +{
> +    struct vcpu *curr = current;
> +
> +    if ( !curr->arch.vm_event.emul_read_data )
> +        return X86EMUL_UNHANDLEABLE;
> +    else

Else after the respective if() unconditionally returning is odd. Perhaps
better to invert the if() condition... (see the sketch at the end of
this mail)

> +    {
> +        unsigned int safe_bytes =
> +            min(bytes, curr->arch.vm_event.emul_read_data->size);
> +
> +        if ( safe_bytes )
> +            memcpy(buffer, curr->arch.vm_event.emul_read_data->data,
> +                   safe_bytes);

So why did you still keep this conditional?

> @@ -1005,6 +1043,36 @@ static int hvmemul_rep_ins(
>                             !!(ctxt->regs->eflags & X86_EFLAGS_DF), gpa);
>  }
> 
> +static int hvmemul_rep_outs_set_context(
> +    enum x86_segment src_seg,
> +    unsigned long src_offset,
> +    uint16_t dst_port,
> +    unsigned int bytes_per_rep,
> +    unsigned long *reps,
> +    struct x86_emulate_ctxt *ctxt)
> +{
> +    unsigned int bytes = *reps * bytes_per_rep;
> +    char *buf;
> +    int rc;
> +
> +    buf = xmalloc_array(char, bytes);
> +
> +    if ( buf == NULL )
> +        return X86EMUL_UNHANDLEABLE;
> +
> +    rc = set_context_data(buf, bytes);
> +
> +    if ( rc != X86EMUL_OKAY )
> +        goto out;
> +
> +    rc = hvmemul_do_pio_buffer(dst_port, bytes, IOREQ_WRITE, buf);
> +
> +out:

Labels should be indented by at least one space. But realistically the
code wouldn't look worse without any goto...

> @@ -1133,7 +1205,20 @@ static int hvmemul_rep_movs(
>       */
>      rc = hvm_copy_from_guest_phys(buf, sgpa, bytes);
>      if ( rc == HVMCOPY_okay )
> +    {
> +        if ( unlikely(hvmemul_ctxt->set_context) )
> +        {
> +            rc = set_context_data(buf, bytes);
> +
> +            if ( rc != X86EMUL_OKAY)
> +            {
> +                xfree(buf);
> +                return rc;
> +            }
> +        }
> +
>          rc = hvm_copy_to_guest_phys(dgpa, buf, bytes);
> +    }

Why do you not bypass hvm_copy_from_guest_phys() here? This way it would
- afaict - become consistent with the other cases.

> --- a/xen/arch/x86/vm_event.c
> +++ b/xen/arch/x86/vm_event.c
> @@ -23,6 +23,36 @@
>  #include <xen/sched.h>
>  #include <asm/hvm/hvm.h>
> 
> +int vm_event_init_domain(struct domain *d)
> +{
> +    struct vcpu *v;
> +
> +    for_each_vcpu( d, v )

Either you consider for_each_vcpu a keyword-like thing, then this is
missing a blank before the opening parenthesis, or you don't, in which
case the blanks immediately inside the parentheses should be dropped.

> +    {
> +        if ( v->arch.vm_event.emul_read_data )
> +            continue;
> +
> +        v->arch.vm_event.emul_read_data =
> +            xzalloc(struct vm_event_emul_read_data);
> +
> +        if ( !v->arch.vm_event.emul_read_data )
> +            return -ENOMEM;
> +    }
> +
> +    return 0;
> +}
> +
> +void vm_event_cleanup_domain(struct domain *d)
> +{
> +    struct vcpu *v;
> +
> +    for_each_vcpu( d, v )
> +    {
> +        xfree(v->arch.vm_event.emul_read_data);
> +        v->arch.vm_event.emul_read_data = NULL;
> +    }
> +}

There not being a race here is attributed to vm_event_enable() being
implicitly serialized by the domctl lock, and vm_event_disable(), beyond
its domctl use, only being on domain cleanup paths?
Would be nice to have a comment to that effect.
Jan
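For reference, a sketch of what set_context_data() could look like with
the if() condition inverted as suggested above; the "size"/"safe_size"
naming and the unconditional memcpy() are assumptions about how the
comments might be addressed, not the eventual patch:

static int set_context_data(void *buffer, unsigned int size)
{
    struct vcpu *curr = current;

    if ( curr->arch.vm_event.emul_read_data )
    {
        unsigned int safe_size =
            min(size, curr->arch.vm_event.emul_read_data->size);

        /* memcpy() with a zero length is a no-op, so no guard is needed. */
        memcpy(buffer, curr->arch.vm_event.emul_read_data->data, safe_size);
        return X86EMUL_OKAY;
    }

    return X86EMUL_UNHANDLEABLE;
}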