  
  
     On 12/15/13 12:22, Andrew Cooper wrote: 
     
    
      
      On 15/12/2013 17:19, Don Slutz wrote: 
       
      
        On 12/15/13 11:51, Andrew Cooper wrote: 
         
        
          On 15/12/2013 00:29, Don Slutz wrote: 
           
           
            I think I have corrected all the coding errors (please check
            again) and made all the requested changes.  I did add the
            Reviewed-by (not sure if I should, since this changes a large
            part of the patch, but the changes are all what Jan asked for).
             
            I have unit tested it and it appears to work the same as the
            previous version (as expected).  
             
            Here is the new version, also attached.  
             
            From e0e8f5246ba492b153884cea93bfe753f1b0782e Mon Sep 17 00:00:00 2001
            From: Don Slutz <dslutz@xxxxxxxxxxx>
            Date: Tue, 12 Nov 2013 08:22:53 -0500
            Subject: [PATCH v2 3/4] hvm_save_one: return correct data.
             
            It is possible that hvm_sr_handlers[typecode].save does not
            use all the provided room.  In that case, using:

               instance * hvm_sr_handlers[typecode].size

            does not select the correct instance.  Add code to search
            for the correct instance.
             
            Signed-off-by: Don Slutz <dslutz@xxxxxxxxxxx>
            Reviewed-by: Jan Beulich <jbeulich@xxxxxxxx>
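
            (As a sketch of what "search for the correct instance" amounts
            to: walk the flat save buffer record by record, using each
            record's own descriptor, instead of computing a fixed offset.
            Illustrative only, not the code in this patch; field names
            follow struct hvm_save_descriptor from the public save-format
            headers.)

                static int find_record(const uint8_t *buf, uint32_t len,
                                       uint16_t typecode, uint16_t instance,
                                       uint32_t *off_out)
                {
                    uint32_t off = 0;

                    while ( off + sizeof(struct hvm_save_descriptor) <= len )
                    {
                        const struct hvm_save_descriptor *desc =
                            (const struct hvm_save_descriptor *)(buf + off);

                        if ( desc->typecode == typecode &&
                             desc->instance == instance )
                        {
                            *off_out = off;  /* start of the matching record */
                            return 0;
                        }
                        /* Skip this record: descriptor header plus payload. */
                        off += sizeof(*desc) + desc->length;
                    }

                    return -ENOENT;  /* no record for this typecode/instance */
                }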
             
           
           
          but this fares no better at selecting the correct subset in
          the case that less data than hvm_sr_handlers[typecode].size is
          written by hvm_sr_handlers[typecode].save. 
           
         
        True, but the opposite is the case here; .save writes 'n' blocks
        of 'size' bytes.  From the loop above: 
         
            if ( hvm_sr_handlers[typecode].kind == HVMSR_PER_VCPU ) 
                for_each_vcpu(d, v) 
                    sz += hvm_sr_handlers[typecode].size; 
            else 
                sz = hvm_sr_handlers[typecode].size; 
         
        so sz is in multiples of 'size'.  Normally sz == ctxt.cur.  With
        some offline vcpus it writes fewer 'size' blocks. 
        
          It always increments by 'size' bytes, and will only copy the
          data back if the bytes under desc->instance happen to match
          the instance we are looking for. 
           
         
        The only time it does not find one is for an offline vcpu.  Try
        out the unit test code in patch #1 on an unchanged xen.  It
        should not display anything.  Then offline a cpu in a domU (echo
        0 > /sys/devices/system/cpu/cpu1/online).  With 3 vcpus, it
        will then report an error. 
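
        To make the arithmetic concrete, here is an illustrative layout
        (the numbers are hypothetical; the point is only that the records
        are packed):

            /* Suppose .size is 1032 bytes per record (1024-byte payload
             * plus the 8-byte descriptor) and vcpu 1 is offline, so .save
             * emits records for vcpus 0, 2 and 3 only:
             *
             *   offset    0: descriptor{instance 0} + payload
             *   offset 1032: descriptor{instance 2} + payload
             *   offset 2064: descriptor{instance 3} + payload
             *
             * The unpatched "instance * size" lookup for instance 2 reads
             * offset 2064, which is vcpu 3's record, and a lookup for
             * instance 3 runs off the end of the data actually written. */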
         
           -Don Slutz 
       
       
      Ah - so there are actually two problems.  I see now the one you
      are trying to solve, and would agree that your code does solve it. 
       
      However, some of the save handlers are themselves variable length,
      and will write records shorter than hvm_sr_handlers[typecode].size
      if they can get away with doing so.  In this case, the new logic
      still won't get the correct instance. 
       
     
    Not sure which one(s) you are referring to. 
     
    From the full dump: 
     
     xen-hvmctx 1 | grep -i entry 
    Entry 0: type 1 instance 0, length 24 
    Entry 1: type 2 instance 0, length 1024 
    Entry 2: type 2 instance 2, length 1024 
    Entry 3: type 2 instance 3, length 1024 
    Entry 4: type 2 instance 4, length 1024 
    Entry 5: type 2 instance 5, length 1024 
    Entry 6: type 2 instance 6, length 1024 
    Entry 7: type 2 instance 7, length 1024 
    Entry 8: type 3 instance 0, length 8 
    Entry 9: type 3 instance 1, length 8 
    Entry 10: type 4 instance 0, length 400 
    Entry 11: type 5 instance 0, length 24 
    Entry 12: type 5 instance 1, length 24 
    Entry 13: type 5 instance 2, length 24 
    Entry 14: type 5 instance 3, length 24 
    Entry 15: type 5 instance 4, length 24 
    Entry 16: type 5 instance 5, length 24 
    Entry 17: type 5 instance 6, length 24 
    Entry 18: type 5 instance 7, length 24 
    Entry 19: type 6 instance 0, length 1024 
    Entry 20: type 6 instance 1, length 1024 
    Entry 21: type 6 instance 2, length 1024 
    Entry 22: type 6 instance 3, length 1024 
    Entry 23: type 6 instance 4, length 1024 
    Entry 24: type 6 instance 5, length 1024 
    Entry 25: type 6 instance 6, length 1024 
    Entry 26: type 6 instance 7, length 1024 
    Entry 27: type 7 instance 0, length 16 
    Entry 28: type 8 instance 0, length 8 
    Entry 29: type 9 instance 0, length 8 
    Entry 30: type 10 instance 0, length 56 
    Entry 31: type 11 instance 0, length 16 
    Entry 32: type 12 instance 0, length 1048 
    Entry 33: type 13 instance 0, length 8 
    Entry 34: type 14 instance 0, length 240 
    Entry 35: type 14 instance 1, length 240 
    Entry 36: type 14 instance 2, length 240 
    Entry 37: type 14 instance 3, length 240 
    Entry 38: type 14 instance 4, length 240 
    Entry 39: type 14 instance 5, length 240 
    Entry 40: type 14 instance 6, length 240 
    Entry 41: type 14 instance 7, length 240 
    Entry 42: type 16 instance 0, length 856 
    Entry 43: type 16 instance 1, length 856 
    Entry 44: type 16 instance 2, length 856 
    Entry 45: type 16 instance 3, length 856 
    Entry 46: type 16 instance 4, length 856 
    Entry 47: type 16 instance 5, length 856 
    Entry 48: type 16 instance 6, length 856 
    Entry 49: type 16 instance 7, length 856 
    Entry 50: type 18 instance 0, length 24 
    Entry 51: type 18 instance 1, length 24 
    Entry 52: type 18 instance 2, length 24 
    Entry 53: type 18 instance 3, length 24 
    Entry 54: type 18 instance 4, length 24 
    Entry 55: type 18 instance 5, length 24 
    Entry 56: type 18 instance 6, length 24 
    Entry 57: type 18 instance 7, length 24 
    Entry 58: type 19 instance 0, length 8 
    Entry 59: type 19 instance 1, length 8 
    Entry 60: type 19 instance 2, length 8 
    Entry 61: type 19 instance 3, length 8 
    Entry 62: type 19 instance 4, length 8 
    Entry 63: type 19 instance 5, length 8 
    Entry 64: type 19 instance 6, length 8 
    Entry 65: type 19 instance 7, length 8 
    Entry 66: type 0 instance 0, length 0 
     
     All typecodes appear to save the same amount per instance. 
     
    Most use hvm_save_entry: 
     
    ... 
            _hvm_write_entry((_h), (_src), HVM_SAVE_LENGTH(_x));    \ 
     
    and 
     
     /* Syntactic sugar around that function: specify the max number of
      * saves, and this calculates the size of buffer needed */
     #define HVM_REGISTER_SAVE_RESTORE(_x, _save, _load, _num, _k)           \
     static int __init __hvm_register_##_x##_save_and_restore(void)          \
     {                                                                        \
         hvm_register_savevm(HVM_SAVE_CODE(_x),                               \
                             #_x,                                             \
                             &_save,                                          \
                             &_load,                                          \
                             (_num) * (HVM_SAVE_LENGTH(_x)                    \
                                       + sizeof (struct hvm_save_descriptor)),\
                             _k);                                             \
         return 0;                                                            \
     }                                                                        \
     
     I do not find any that call _hvm_write_entry directly. 
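
     For reference, the pattern behind hvm_save_entry is roughly the
     following (a paraphrased sketch, not the exact macro from
     xen/include/xen/hvm/save.h): the descriptor is initialised with the
     fixed HVM_SAVE_LENGTH(_x) and then exactly that many bytes are
     written, so every record of a given typecode ends up the same length.

         #define hvm_save_entry(_x, _i, _h, _src) ({                     \
             int r_ = _hvm_init_entry((_h), HVM_SAVE_CODE(_x), (_i),     \
                                      HVM_SAVE_LENGTH(_x));              \
             if ( r_ == 0 )                                              \
                 /* Always the full, fixed length for this typecode. */  \
                 _hvm_write_entry((_h), (_src), HVM_SAVE_LENGTH(_x));    \
             r_;                                                         \
         })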
     
     The only special one I found is CPU_XSAVE_CODE. 
      
     It still "writes" a full-sized entry: 
     
            if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, HVM_CPU_XSAVE_SIZE) )
                return 1;
            ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
            h->cur += HVM_CPU_XSAVE_SIZE;
            memset(ctxt, 0, HVM_CPU_XSAVE_SIZE);
     
     It then modifies the zeros conditionally: 
     
            if ( v->fpu_initialised )
                memcpy(&ctxt->save_area,
                       v->arch.xsave_area, xsave_cntxt_size);
     
     #define HVM_CPU_XSAVE_SIZE  (3 * sizeof(uint64_t) + xsave_cntxt_size)
      
     is part of this. 
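
     Putting those fragments together, the shape of the xsave save path is
     (condensed from the snippets above, not a verbatim copy):

         /* The entry is always reserved and zeroed at the full
          * HVM_CPU_XSAVE_SIZE; only the copy of the xsave area is
          * conditional on fpu_initialised, so the record length is the
          * same for every vcpu. */
         if ( _hvm_init_entry(h, CPU_XSAVE_CODE, v->vcpu_id, HVM_CPU_XSAVE_SIZE) )
             return 1;
         ctxt = (struct hvm_hw_cpu_xsave *)&h->data[h->cur];
         h->cur += HVM_CPU_XSAVE_SIZE;
         memset(ctxt, 0, HVM_CPU_XSAVE_SIZE);
         if ( v->fpu_initialised )
             memcpy(&ctxt->save_area, v->arch.xsave_area, xsave_cntxt_size);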
     
     /* We need variable length data chunk for xsave area, hence customized
      * declaration other than HVM_REGISTER_SAVE_RESTORE.
      */
    static int __init __hvm_register_CPU_XSAVE_save_and_restore(void) 
    { 
        hvm_register_savevm(CPU_XSAVE_CODE, 
                            "CPU_XSAVE", 
                            hvm_save_cpu_xsave_states, 
                            hvm_load_cpu_xsave_states, 
                            HVM_CPU_XSAVE_SIZE + sizeof (struct hvm_save_descriptor),
                            HVMSR_PER_VCPU); 
        return 0; 
    } 
    __initcall(__hvm_register_CPU_XSAVE_save_and_restore); 
     
    is the final part of this one.  So I do not find any code that does
    what you are wondering about. 
     
       -Don 
     
    
      ~Andrew  
     
  