
Re: [Xen-devel] [PATCH v3 3/3] libxc/PV: save/restore data breakpoint extension registers



On Mon, Apr 14, 2014 at 8:50 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>>> On 11.04.14 at 18:37, <dunlapg@xxxxxxxxx> wrote:
>> On Mon, Apr 7, 2014 at 10:39 AM, Jan Beulich <JBeulich@xxxxxxxx> wrote:
>>> @@ -634,11 +640,42 @@ static int buffer_tail_pv(xc_interface *
>>>              ERROR("Error allocating VCPU ctxt tail buffer");
>>>              goto free_pfntab;
>>>          }
>>> +        alloc = 1;
>>>      }
>>>      // DPRINTF("Reading VCPUS: %d bytes\n", vcpulen);
>>> -    if ( RDEXACT(fd, buf->vcpubuf, vcpulen) ) {
>>> -        PERROR("Error when reading ctxt");
>>> -        goto free_vcpus;
>>> +    for ( total = i = 0, ptr.raw = buf->vcpubuf; ext_vcpucontext; ) {
>>
>> This isn't actually a for loop.  Would it really be so terrible to do
>> if(ext_vcpucontext) { while(1) {}}?
>
> Nothing really terrible, but it would increase indentation by one level
> for no real reason.

Making the intent of the code immediately understandable to anyone
looking at it is far from "no real reason".  If indentation makes the
code difficult to read, you can use a goto to skip the section.
Having this for-loop-that's-not-really-a-for-loop is *far* less
readable than code with an extra goto.
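Concretely, I mean something like this -- skeleton only, untested, body
elided and label name invented:

    if ( !ext_vcpucontext )
        goto vcpus_read;        /* nothing extended to parse; skip it all */

    while ( 1 )
    {
        /* ... read the next chunk and rescan the records, as in the
         * patch; the scan leaves the number of bytes still to be read
         * in vcpulen ... */
        if ( !vcpulen )
            break;
    }

 vcpus_read:
    /* ... whatever follows the loop today ... */

Now the reader sees at a glance that the structure is "skip entirely,
or loop until there's nothing left to read", instead of having to
notice that the for loop's condition never actually changes.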

>
>>> +        if ( RDEXACT(fd, buf->vcpubuf + total, vcpulen) ) {
>>> +            PERROR("Error when reading ctxt");
>>> +            goto free_vcpus;
>>> +        }
>>> +        total += vcpulen;
>>> +        for ( vcpulen = 0; i < buf->vcpucount; ++i ) {
>>> +            size_t msrlen;
>>> +
>>> +            if ( (const unsigned char *)(ptr.evc + 1) > buf->vcpubuf + total )
>>> +                break;
>>> +            if ( ptr.evc->size <
>>> +                 (offsetof(xen_domctl_ext_vcpucontext_t, msrs) +
>>> +                  sizeof(ptr.evc->msrs)) )
>>> +                ptr.evc->msr_count = 0;
>>> +            msrlen = ptr.evc->msr_count * sizeof(xen_domctl_ext_vcpu_msr_t);
>>> +            vcpulen += msrlen;
>>> +            ptr.raw += 128 + msrlen + vcpuextstate_size;
>>> +        }
>>> +        if ( !vcpulen ) {
>>> +            assert(i == buf->vcpucount);
>>> +            break;
>>> +        }
>>
>> If you're going to write such twisted logic, you absolutely must
comment it -- at the very least you need to give a high-level
description of what it's doing.  I've been staring at this for an hour
now, and I only have a vague idea what it's supposed to be doing.
Seriously, is this any better than spaghetti code?
>
> Now do you have any better suggestion? I was just trying to extend
> what is there with as little churn on the code as possible. I'm not even
> certain the model chosen is desirable - I could imagine that introducing
> another "extended info block", e.g. tagged "msrs", might be preferable.

It's not so much about the data layout, as that I can't actually
follow what this code does.  Having "for(vcpulen=0;...", then inside
the loop, before doing anything, having a conditional break statement,
then immediately after the loop breaking out of the while(1) if
vcpulen == 0?  Wouldn't it have been better to check for the condition
*before* going into the inner for loop at all?  (Or perhaps even
before going into the outer loop?)  Not to mention the fact that
"vcpulen" doesn't actually mean the length of a vcpu anymore -- and
all without any comments to help someone figure out what was going
on.

I started to try to re-write the loop in a way that would actually be
comprehensible, but at some point I realized that decoding your
uncommented post-optimized algorithm shouldn't be my job.  It should
at the very least be your job to explain it to a reasonable degree.
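For what it's worth, here is roughly where I got to before giving up --
untested, quite possibly not what you intended (which is rather the
point), with "pending" as my rename of the re-used vcpulen:

    /*
     * My guess at the intent: each extended record announces how many
     * MSR bytes follow it, and only the record itself knows.  So
     * alternate between "read the bytes we know about" and "scan the
     * complete records for more", until a scan turns up nothing new.
     */
    size_t pending = vcpulen;               /* bytes known to be unread */

    while ( ext_vcpucontext && pending )
    {
        if ( RDEXACT(fd, buf->vcpubuf + total, pending) ) {
            PERROR("Error when reading ctxt");
            goto free_vcpus;
        }
        total += pending;
        pending = 0;

        for ( ; i < buf->vcpucount; ++i ) {
            size_t msrlen;

            /* Stop at the first record we haven't fully read yet. */
            if ( (const unsigned char *)(ptr.evc + 1) > buf->vcpubuf + total )
                break;
            /* ... your existing msr_count sanity fixup here ... */
            msrlen = ptr.evc->msr_count * sizeof(xen_domctl_ext_vcpu_msr_t);
            pending += msrlen;
            ptr.raw += 128 + msrlen + vcpuextstate_size;
        }
    }

Even if I've guessed wrong about what the loop is doing, that only
reinforces the need for a comment.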

> In any event, this isn't code I'm feeling comfortable changing, so of all
> possible options I'd prefer to have this whole aspect of the intended
> functionality done by someone else who is. And the patch here then
> might serve as no more than a baseline reference, to be thrown away
> entirely if so desired.
>
> Perhaps the best thing really is to leave the save/restore aspect out
> for now, and get it implemented once Andrew's rewrite is in place.

That might make sense.

>
>>> @@ -1960,16 +1963,42 @@ int xc_domain_save(xc_interface *xch, in
>>>          domctl.domain = dom;
>>>          memset(&domctl.u, 0, sizeof(domctl.u));
>>>          domctl.u.ext_vcpucontext.vcpu = i;
>>> -        if ( xc_domctl(xch, &domctl) < 0 )
>>> +        frc = xc_domctl(xch, &domctl);
>>> +        if ( frc < 0 && errno == ENOBUFS &&
>>> +             domctl.u.ext_vcpucontext.size >=
>>> +             (offsetof(xen_domctl_ext_vcpucontext_t, msrs) +
>>> +              sizeof(domctl.u.ext_vcpucontext.msrs)) &&
>>> +             domctl.u.ext_vcpucontext.msr_count )
>>> +        {
>>> +            msrs = xc_hypercall_buffer_alloc(xch, msrs,
>>> +                                             domctl.u.ext_vcpucontext.msr_count *
>>> +                                             sizeof(*msrs));
>>> +            set_xen_guest_handle(domctl.u.ext_vcpucontext.msrs, msrs);
>>> +            frc = msrs ? xc_domctl(xch, &domctl) : -1;
>>> +            /* Don't save the actual pointer. */
>>> +            set_xen_guest_handle_raw(domctl.u.ext_vcpucontext.msrs, NULL);
>>> +        }
>>
>> I think this could use a comment as well.
>
> Not sure what you refer to here - the one non-obvious thing (clearing
> out the handle) is being commented. Are you trying to tell me that this
> isn't sufficient of an explanation, or are you referring to other parts of
> this code block?

In this case, I meant a comment pointing to the size change in 4.5.
Someone casually looking through the code shouldn't be required to
have the entire history of the migration protocol at the tips of their
fingers.
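E.g. something along these lines, above the ENOBUFS check -- purely as
an illustration of the kind of breadcrumb I mean, so adjust the wording
and details to taste:

    /*
     * Xen 4.5 grew xen_domctl_ext_vcpucontext_t: it can now carry MSR
     * state, and a hypervisor with MSRs to report fails the first
     * domctl with ENOBUFS, filling in msr_count so we know how large a
     * buffer to pass on the retry below.
     */

A few lines like that would save the next person the hour I just spent.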

 -George

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

