
Re: [Xen-devel] [PATCH v6 02/16] x86/hvm: remove multiple open coded 'chunking' loops



>>> On 09.07.15 at 10:15, <Paul.Durrant@xxxxxxxxxx> wrote:
>>  -----Original Message-----
>> From: Jan Beulich [mailto:JBeulich@xxxxxxxx]
>> Sent: 09 July 2015 07:54
>> To: Andrew Cooper
>> Cc: Paul Durrant; xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
>> Subject: Re: [Xen-devel] [PATCH v6 02/16] x86/hvm: remove multiple open
>> coded 'chunking' loops
>> 
>> >>> On 08.07.15 at 18:43, <andrew.cooper3@xxxxxxxxxx> wrote:
>> > On 08/07/2015 16:57, Paul Durrant wrote:
>> >>> -----Original Message-----
>> >>> From: xen-devel-bounces@xxxxxxxxxxxxx [mailto:xen-devel-
>> >>> bounces@xxxxxxxxxxxxx] On Behalf Of Jan Beulich
>> >>> Sent: 08 July 2015 16:53
>> >>> To: Andrew Cooper; Paul Durrant
>> >>> Cc: xen-devel@xxxxxxxxxxxxxxxxxxxx; Keir (Xen.org)
>> >>> Subject: Re: [Xen-devel] [PATCH v6 02/16] x86/hvm: remove multiple
>> >>> open coded 'chunking' loops
>> >>>
>> >>>>>> On 03.07.15 at 18:25, <paul.durrant@xxxxxxxxxx> wrote:
>> >>>> +static int hvmemul_linear_mmio_access(
>> >>>> +    unsigned long gla, unsigned int size, uint8_t dir, uint8_t *buffer,
>> >>>> +    uint32_t pfec, struct hvm_emulate_ctxt *hvmemul_ctxt, bool_t known_gpfn)
>> >>>> +{
>> >>>> +    struct hvm_vcpu_io *vio = &current->arch.hvm_vcpu.hvm_io;
>> >>>> +    unsigned long offset = gla & ~PAGE_MASK;
>> >>>> +    unsigned int chunk;
>> >>>> +    paddr_t gpa;
>> >>>> +    unsigned long one_rep = 1;
>> >>>> +    int rc;
>> >>>> +
>> >>>> +    chunk = min_t(unsigned int, size, PAGE_SIZE - offset);
>> >>>> +
>> >>>> +    if ( known_gpfn )
>> >>>> +        gpa = pfn_to_paddr(vio->mmio_gpfn) | offset;
>> >>>> +    else
>> >>>> +    {
>> >>>> +        rc = hvmemul_linear_to_phys(gla, &gpa, chunk, &one_rep, pfec,
>> >>>> +                                    hvmemul_ctxt);
>> >>>> +        if ( rc != X86EMUL_OKAY )
>> >>>> +            return rc;
>> >>>> +    }
>> >>>> +
>> >>>> +    for ( ;; )
>> >>>> +    {
>> >>>> +        rc = hvmemul_phys_mmio_access(gpa, chunk, dir, buffer);
>> >>>> +        if ( rc != X86EMUL_OKAY )
>> >>>> +            break;
>> >>>> +
>> >>>> +        gla += chunk;
>> >>>> +        buffer += chunk;
>> >>>> +        size -= chunk;
>> >>>> +
>> >>>> +        if ( size == 0 )
>> >>>> +            break;
>> >>>> +
>> >>>> +        ASSERT((gla & ~PAGE_MASK) == 0);
>> >>> Does this really matter for the code below?
>> >>>
>> >>>> +        chunk = min_t(unsigned int, size, PAGE_SIZE);
>> >>> Iirc Andrew had asked for this, but I still don't see why: "size" is the
>> >>> width of an instruction operand, and hence won't even come close
>> >>> to PAGE_SIZE.
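
[For reference, the chunking arithmetic under discussion can be exercised in
isolation.  Below is a minimal standalone sketch of the same split; PAGE_SIZE
and PAGE_MASK are redefined locally and printf stands in for
hvmemul_phys_mmio_access(), so this is an illustration rather than the patch
code itself.]

/* Standalone sketch of the chunking arithmetic: split a linear access of
 * 'size' bytes starting at 'gla' into page-bounded chunks.  The real code
 * issues one hvmemul_phys_mmio_access() per chunk; printf stands in here. */
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

static void split_access(unsigned long gla, unsigned int size)
{
    unsigned long offset = gla & ~PAGE_MASK;
    unsigned int chunk = (size < PAGE_SIZE - offset) ? size
                                                     : PAGE_SIZE - offset;

    for ( ;; )
    {
        printf("chunk: gla %#lx, %u bytes\n", gla, chunk);

        gla += chunk;
        size -= chunk;

        if ( size == 0 )
            break;

        /* Every chunk after the first starts on a page boundary,
         * hence (gla & ~PAGE_MASK) == 0 at this point. */
        chunk = (size < PAGE_SIZE) ? size : PAGE_SIZE;
    }
}

int main(void)
{
    split_access(0x10ff8, 16); /* crosses a page boundary: 8 + 8 bytes */
    return 0;
}
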
>> >
>> > The original version of the code asserted that size was less than
>> > PAGE_SIZE around here.  This is not true in the general case, given a
>> > for loop like this and can in principle be hit if we ever got into a
>> > position of emulating an xsave instruction to an MMIO region.
>> 
>> That's a hypothetical future extension to XSAVE, isn't it? I.e. only
>> when AVX-1024 eventually arrives ...
>> 
>> > This specific example is not as far-fetched as it seems.  The VM
>> > introspection people are looking to emulate more and more instructions,
>> > while the GVT-g people are working on the mmio_write_dm side of things,
>> > which causes real RAM to be treated as MMIO from Xen's point of view.
>> 
>> ... and when we really think we need to support XSAVEing to MMIO
>> or MMIO-like memory (which I doubt we will).
>> 
>> > If we had some blanket sanity checks for size at the top of the
>> > emulation call tree it would be less of an issue, but we don't, and I
>> > have a nagging feeling that assumptions like this are going to bite us
>> > in an XSA kind of way.
>> 
>> Hence the ASSERT() against PAGE_SIZE that originally was there.
>> 
> 
> Is there any harm in leaving the function as-is so that it can cope with
> size > PAGE_SIZE in future?

Well, not a big problem, but I generally prefer any unneeded
conditionals to be removed. Considering that you and Andrew
agree with one another, I guess I'll give in.

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

