Re: [Xen-devel] [PATCH 2/5] xen/vm_access: Support for memory-content hiding
On Fri, May 8, 2015 at 6:49 PM, Razvan Cojocaru <rcojocaru@xxxxxxxxxxxxxxx> wrote:
> On 05/08/2015 07:07 PM, Jan Beulich wrote:
>>>>> On 06.05.15 at 19:12, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>>> --- a/xen/arch/x86/hvm/emulate.c
>>> +++ b/xen/arch/x86/hvm/emulate.c
>>> @@ -578,6 +578,25 @@ static int hvmemul_read(
>>>              container_of(ctxt, struct hvm_emulate_ctxt, ctxt));
>>>  }
>>>
>>> +static int hvmemul_read_set_context(
>>> +    enum x86_segment seg,
>>> +    unsigned long offset,
>>> +    void *p_data,
>>> +    unsigned int bytes,
>>> +    struct x86_emulate_ctxt *ctxt)
>>> +{
>>> +    struct vcpu *curr = current;
>>> +    unsigned int len =
>>> +        (bytes > curr->arch.vm_event.emul_read_data.size ?
>>> +            curr->arch.vm_event.emul_read_data.size : bytes);
>>
>> min()
>
> Ack.
>
>>> +    if ( len )
>>> +        memcpy(p_data, curr->arch.vm_event.emul_read_data.data,
>>> +               curr->arch.vm_event.emul_read_data.size);
>>
>> Not len?
>>
>> And what happens to the portion of the read beyond "bytes" (in
>> case that got reduced above)?
>
> Yes, it should be len. I've been porting these patches from an older
> version of Xen and somehow lost that in translation. I'll correct it in V2.
>
>>> @@ -970,20 +990,38 @@ static int hvmemul_rep_movs(
>>>      if ( df )
>>>          dgpa -= bytes - bytes_per_rep;
>>>
>>> -    /* Allocate temporary buffer. Fall back to slow emulation if this fails. */
>>> -    buf = xmalloc_bytes(bytes);
>>> -    if ( buf == NULL )
>>> -        return X86EMUL_UNHANDLEABLE;
>>> +    if ( unlikely(set_context) )
>>> +    {
>>> +        struct vcpu* curr = current;
>>
>> * and blank need to switch places.
>
> Ack.
>
>>> -    /*
>>> -     * We do a modicum of checking here, just for paranoia's sake and to
>>> -     * definitely avoid copying an unitialised buffer into guest address space.
>>> -     */
>>> -    rc = hvm_copy_from_guest_phys(buf, sgpa, bytes);
>>> -    if ( rc == HVMCOPY_okay )
>>> -        rc = hvm_copy_to_guest_phys(dgpa, buf, bytes);
>>> +        bytes = bytes < curr->arch.vm_event.emul_read_data.size ?
>>> +            bytes : curr->arch.vm_event.emul_read_data.size;
>>
>> min() again.
>
> Ack.
>
>>> @@ -1107,6 +1171,22 @@ static int hvmemul_rep_stos(
>>>          }
>>>      }
>>>
>>> +static int hvmemul_rep_stos_set_context(
>>> +    void *p_data,
>>> +    enum x86_segment seg,
>>> +    unsigned long offset,
>>> +    unsigned int bytes_per_rep,
>>> +    unsigned long *reps,
>>> +    struct x86_emulate_ctxt *ctxt)
>>> +{
>>> +    struct vcpu *curr = current;
>>> +    unsigned long local_reps = 1;
>>> +
>>> +    return hvmemul_rep_stos(curr->arch.vm_event.emul_read_data.data, seg,
>>> +                            offset, curr->arch.vm_event.emul_read_data.size,
>>> +                            &local_reps, ctxt);
>>> +}
>>
>> How come here you can get away by simply reducing *reps to 1? And
>> considering this patch is about supplying read values - why is fiddling
>> with STOS emulation necessary in the first place?
>
> I got away with it because *reps is always one in our test scenario -
> emulating instructions with the REP part disabled, as done here:
>
> http://lists.xen.org/archives/html/xen-devel/2014-11/msg00243.html
>
> But it's indeed not reasonable to assume that that is the case with all
> users, so I'll have to rethink that.
>
>>> +static const struct x86_emulate_ops hvm_emulate_ops_set_context = {
>>> +    .read = hvmemul_read_set_context,
>>> +    .insn_fetch = hvmemul_insn_fetch,
>>> +    .write = hvmemul_write,
>>> +    .cmpxchg = hvmemul_cmpxchg,
>>
>> How about this one?
>>
>>> +    .rep_ins = hvmemul_rep_ins,
>>> +    .rep_outs = hvmemul_rep_outs,
>>
>> And this?
>>
>>> +    .rep_movs = hvmemul_rep_movs_set_context,
>>> +    .rep_stos = hvmemul_rep_stos_set_context,
>>> +    .read_segment = hvmemul_read_segment,
>>> +    .write_segment = hvmemul_write_segment,
>>> +    .read_io = hvmemul_read_io,
>>
>> And this?
>
> Yes, those require attention too. I thought those would use the read
> function supplied, but I see now that I've been misled by a comment
> about __hvmemul_read() in hvmemul_cmpxchg().
>
>>> +void hvm_mem_access_emulate_one(enum emul_kind kind, unsigned int trapnr,
>>> +                                unsigned int errcode)
>>>  {
>>>      struct hvm_emulate_ctxt ctx = {{ 0 }};
>>> -    int rc;
>>> +    int rc = X86EMUL_UNHANDLEABLE;
>>>
>>>      hvm_emulate_prepare(&ctx, guest_cpu_user_regs());
>>>
>>> -    if ( nowrite )
>>> +    switch ( kind ) {
>>> +    case EMUL_KIND_NOWRITE:
>>>          rc = hvm_emulate_one_no_write(&ctx);
>>> -    else
>>> +        break;
>>> +    case EMUL_KIND_NORMAL:
>>>          rc = hvm_emulate_one(&ctx);
>>
>> Mind having "NORMAL" as the first case in the switch()?
>
> No problem, I'll change it.
>
>>> --- a/xen/include/asm-x86/domain.h
>>> +++ b/xen/include/asm-x86/domain.h
>>> @@ -512,6 +513,7 @@ struct arch_vcpu
>>>          uint32_t emulate_flags;
>>>          unsigned long gpa;
>>>          unsigned long eip;
>>> +        struct vm_event_emul_read_data emul_read_data;
>>
>> Considering the size of this structure I don't think this should be
>> there for all vCPU-s of all guests. IOW - please allocate this
>> dynamically only on domains where it's needed.
>
> Ack.
>
>>> @@ -193,6 +200,11 @@ struct vm_event_xsetbv {
>>>      uint64_t value;
>>>  };
>>>
>>> +struct vm_event_emul_read_data {
>>> +    uint32_t size;
>>> +    uint8_t data[164];
>>
>> This number needs an explanation.
>
> It's less than the size of the x86_regs and enough for all the cases
> we've tested so far...
>
>
> Thanks,
> Razvan

I feel like 164 bytes is really wasteful for all vm_events, considering
this would be useful only in a very specific subset of cases. I'm not sure
what the right way around that would be. Maybe another hypercall
(potentially under memop?) could place that buffer somewhere Xen can
access it before the vm_event response is processed? That would require
the monitoring domain to issue two hypercalls - one to place the buffer,
and one for the event channel notification telling Xen that the response
has been placed on the ring - but it might save space on the ring buffer
for all other cases/users.
Tamas

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel