Re: [Xen-devel] [PATCH v4 7/7] x86/HVM: reduce scope of pfec in hvm_emulate_init_per_insn()
> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Jan
> Beulich
> Sent: 31 January 2020 16:46
> To: xen-devel@xxxxxxxxxxxxxxxxxxxx
> Cc: George Dunlap <George.Dunlap@xxxxxxxxxxxxx>; Andrew Cooper
> <andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné <roger.pau@xxxxxxxxxx>; Wei
> Liu <wl@xxxxxxx>; Paul Durrant <paul@xxxxxxx>
> Subject: [Xen-devel] [PATCH v4 7/7] x86/HVM: reduce scope of pfec in
> hvm_emulate_init_per_insn()
>
> pfec needs calculating in only one of three cases. Re-structure the
> code a little such that the variable truly gets calculated only when we
> don't get any insn bytes from elsewhere, and hence need to (try to)
> fetch them. Also OR in PFEC_insn_fetch right in the initializer.
>
> While in this mood, restrict addr's scope as well.
>
> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
Reviewed-by: Paul Durrant <pdurrant@xxxxxxxxxx>
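For reference, the effect of the restructuring can be modelled in a minimal standalone sketch (not Xen code): the PFEC_* values below follow the architectural x86 page-fault error-code bits, and compute_fetch_pfec() is a hypothetical stand-in for the branch that, after this patch, only runs when instruction bytes actually have to be fetched from the guest:

```c
#include <stdbool.h>

/* x86 page-fault error-code bits (architectural layout). */
#define PFEC_page_present (1u << 0)
#define PFEC_user_mode    (1u << 2)
#define PFEC_insn_fetch   (1u << 4)

/*
 * Hypothetical stand-in for the restructured code: pfec starts out
 * with PFEC_insn_fetch already ORed into the initializer, and the
 * user-mode bit is added only when SS.DPL == 3 (modelled here by
 * the user_mode flag).
 */
static unsigned int compute_fetch_pfec(bool user_mode)
{
    unsigned int pfec = PFEC_page_present | PFEC_insn_fetch;

    if ( user_mode )
        pfec |= PFEC_user_mode;

    return pfec;
}
```

The point of the patch is simply that this computation (and the addr variable) now lives inside the fetch-only branch, instead of being done unconditionally on every emulation.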
> ---
> v4: New.
>
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -2762,8 +2762,6 @@ void hvm_emulate_init_per_insn(
> unsigned int insn_bytes)
> {
> struct vcpu *curr = current;
> - unsigned int pfec = PFEC_page_present;
> - unsigned long addr;
>
> hvmemul_ctxt->ctxt.lma = hvm_long_mode_active(curr);
>
> @@ -2778,14 +2776,23 @@ void hvm_emulate_init_per_insn(
> hvmemul_ctxt->seg_reg[x86_seg_ss].db ? 32 : 16;
> }
>
> - if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
> - pfec |= PFEC_user_mode;
> -
> hvmemul_ctxt->insn_buf_eip = hvmemul_ctxt->ctxt.regs->rip;
> - if ( !insn_bytes )
> +
> + if ( insn_bytes )
> {
> + hvmemul_ctxt->insn_buf_bytes = insn_bytes;
> + memcpy(hvmemul_ctxt->insn_buf, insn_buf, insn_bytes);
> + }
> + else if ( !(hvmemul_ctxt->insn_buf_bytes =
> + hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf)) )
> + {
> + unsigned int pfec = PFEC_page_present | PFEC_insn_fetch;
> + unsigned long addr;
> +
> + if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
> + pfec |= PFEC_user_mode;
> +
> hvmemul_ctxt->insn_buf_bytes =
> - hvm_get_insn_bytes(curr, hvmemul_ctxt->insn_buf) ?:
> (hvm_virtual_to_linear_addr(x86_seg_cs,
> &hvmemul_ctxt->seg_reg[x86_seg_cs],
> hvmemul_ctxt->insn_buf_eip,
> @@ -2795,15 +2802,9 @@ void hvm_emulate_init_per_insn(
> &addr) &&
> hvm_copy_from_guest_linear(hvmemul_ctxt->insn_buf, addr,
> sizeof(hvmemul_ctxt->insn_buf),
> - pfec | PFEC_insn_fetch,
> - NULL) == HVMTRANS_okay) ?
> + pfec, NULL) == HVMTRANS_okay) ?
> sizeof(hvmemul_ctxt->insn_buf) : 0;
> }
> - else
> - {
> - hvmemul_ctxt->insn_buf_bytes = insn_bytes;
> - memcpy(hvmemul_ctxt->insn_buf, insn_buf, insn_bytes);
> - }
> }
>
> void hvm_emulate_writeback(
>
>
> _______________________________________________
> Xen-devel mailing list
> Xen-devel@xxxxxxxxxxxxxxxxxxxx
> https://lists.xenproject.org/mailman/listinfo/xen-devel