
Re: [Xen-devel] [PATCH V7 for-4.5 4/4] xen: Handle resumed instruction based on previous mem_event reply



On 09/11/2014 04:35 PM, Jan Beulich wrote:
>>>> On 11.09.14 at 15:15, <rcojocaru@xxxxxxxxxxxxxxx> wrote:
>> @@ -1448,6 +1449,28 @@ bool_t p2m_mem_access_check(paddr_t gpa, unsigned long gla,
>>          }
>>      }
>>  
>> +    /* The previous mem_event reply does not match the current state. */
>> +    if ( v->arch.mem_event.gpa != gpa || v->arch.mem_event.eip != eip )
>> +    {
>> +        /* Don't emulate the current instruction, send a new mem_event. */
>> +        v->arch.mem_event.emulate_flags = 0;
>> +
>> +        /* Make sure to mark the current state to match it again against
>> +         * the new mem_event about to be sent. */
> 
> Coding style.

Thank you for the review. Just to confirm the proper way to write
multi-line comments in Xen: start with '/*' on a line of its own,
prefix each subsequent line with ' *', and close with '*/' on a line of
its own, is that correct?

/*
 * Multiline comment
 * example.
 */
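
If so, the comment flagged above would become something like this (just
re-wrapping the existing wording):

    /*
     * Make sure to mark the current state to match it again against
     * the new mem_event about to be sent.
     */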

>> +        if ( rsp.flags & MEM_EVENT_FLAG_EMULATE )
>> +        {
>> +            xenmem_access_t access;
>> +            bool_t violation = 1;
>> +
>> +            v->arch.mem_event.emulate_flags = 0;
> 
> Do you really need to write this once here and ...
> 
>> +
>> +            if ( p2m_get_mem_access(d, rsp.gfn, &access) == 0 )
>> +            {
>> +                switch ( access )
>> +                {
>> +                case XENMEM_access_n:
>> +                case XENMEM_access_n2rwx:
>> +                default:
>> +                    violation = rsp.access_r || rsp.access_w || rsp.access_x;
>> +                    break;
>> +
>> +                case XENMEM_access_r:
>> +                    violation = rsp.access_w || rsp.access_x;
>> +                    break;
>> +
>> +                case XENMEM_access_w:
>> +                    violation = rsp.access_r || rsp.access_x;
>> +                    break;
>> +
>> +                case XENMEM_access_x:
>> +                    violation = rsp.access_r || rsp.access_w;
>> +                    break;
>> +
>> +                case XENMEM_access_rx:
>> +                case XENMEM_access_rx2rw:
>> +                    violation = rsp.access_w;
>> +                    break;
>> +
>> +                case XENMEM_access_wx:
>> +                    violation = rsp.access_r;
>> +                    break;
>> +
>> +                case XENMEM_access_rw:
>> +                    violation = rsp.access_x;
>> +                    break;
>> +
>> +                case XENMEM_access_rwx:
>> +                    violation = 0;
>> +                    break;
>> +                }
>> +            }
>> +
>> +            if ( violation )
>> +                v->arch.mem_event.emulate_flags = rsp.flags;
> 
> ... a second time here (rather making this one simply a conditional
> expression)?

I'll assign to v->arch.mem_event.emulate_flags directly in the switch.
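
Something along these lines (just a sketch, the actual rework will be
in the next version of the patch):

        if ( rsp.flags & MEM_EVENT_FLAG_EMULATE )
        {
            xenmem_access_t access;

            /* Unknown access type: be conservative and emulate. */
            v->arch.mem_event.emulate_flags = rsp.flags;

            if ( p2m_get_mem_access(d, rsp.gfn, &access) == 0 )
            {
                switch ( access )
                {
                case XENMEM_access_r:
                    v->arch.mem_event.emulate_flags =
                        (rsp.access_w || rsp.access_x) ? rsp.flags : 0;
                    break;

                case XENMEM_access_rwx:
                    v->arch.mem_event.emulate_flags = 0;
                    break;

                /* ... the remaining access types follow the same pattern. */
                }
            }
        }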

> And I further wonder whether all the MEM_EVENT_FLAG_* values are
> really potentially useful in v->arch.mem_event.emulate_flags - right
> now it rather looks like this field could be a simple bool_t (likely with
> a different name), which would at once make the
> hvm_mem_event_emulate_one() a little better readable.

The value is checked here:

+        hvm_mem_event_emulate_one((v->arch.mem_event.emulate_flags &
+                                   MEM_EVENT_FLAG_EMULATE_NOWRITE) != 0,
+                                  TRAP_invalid_op,
+                                  HVM_DELIVER_NO_ERROR_CODE);

where it matters whether MEM_EVENT_FLAG_EMULATE_NOWRITE is set. Please
also bear in mind that the original series had a MEM_EVENT_FLAG_SKIP
flag as well, so keeping the full set of flags leaves room for more
ways of responding to a mem_event in the future.
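
In other words, the consumer of emulate_flags needs to tell plain
emulation apart from no-write emulation, which a plain bool_t could not
express. Roughly (a sketch of the caller side only, based on the hunk
quoted above):

    if ( v->arch.mem_event.emulate_flags )
    {
        hvm_mem_event_emulate_one((v->arch.mem_event.emulate_flags &
                                   MEM_EVENT_FLAG_EMULATE_NOWRITE) != 0,
                                  TRAP_invalid_op,
                                  HVM_DELIVER_NO_ERROR_CODE);

        /* Emulation done, drop the stored reply flags. */
        v->arch.mem_event.emulate_flags = 0;
    }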


Thanks,
Razvan Cojocaru
