
Re: [Xen-devel] [PATCH 2/2] events/fifo: don't corrupt queues if an old tail moves queues



On 11/11/13 16:46, Jan Beulich wrote:
>>>> On 11.11.13 at 17:03, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -98,6 +98,8 @@ struct evtchn
>>      } u;
>>      u8 priority;
>>      u8 pending:1;
>> +    u16 last_vcpu_id;
>> +    u8 last_priority;
> 
> So this adds up to 5 bytes now, whereas you apparently could do
> with 4. On ARM this means the structure is larger than needed; on
> 64-bit (ARM or x86) it means that further additions are going to
> be less obvious. I'd therefore suggest

ARM:    24 bytes (3 spare bytes), 128 evtchns per page.
x86_64: 32 bytes (3 spare bytes), 128 evtchns per page.

>     } u;
>     u16 last_vcpu_id;
>     u8 priority:4;
>     u8 last_priority:4;
>     u8 pending:1;

ARM:    24 bytes (4 spare bytes), 128 evtchns per page.
x86_64: 32 bytes (4 spare bytes), 128 evtchns per page.
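
For reference, a minimal standalone sketch comparing the two layouts
(the leading fields are collapsed into a placeholder, so this prints
24 bytes for both structs on any 64-bit build; the real struct's
union and per-arch fields account for the 32-byte x86_64 figure):

    #include <stdint.h>
    #include <stdio.h>

    struct evtchn_a {               /* layout from the patch */
        uint64_t placeholder[2];    /* stand-in for the leading fields */
        uint8_t  priority;
        uint8_t  pending:1;
        uint16_t last_vcpu_id;
        uint8_t  last_priority;     /* 5 bytes of fields + 3 padding */
    };

    struct evtchn_b {               /* layout suggested above */
        uint64_t placeholder[2];
        uint16_t last_vcpu_id;
        uint8_t  priority:4;
        uint8_t  last_priority:4;
        uint8_t  pending:1;         /* 4 bytes of fields + 4 padding */
    };

    int main(void)
    {
        printf("a: %zu bytes\n", sizeof(struct evtchn_a));  /* 24 */
        printf("b: %zu bytes\n", sizeof(struct evtchn_b));  /* 24 */
        return 0;
    }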

As of ea963e094a ("evtchn: allow many more evtchn objects to be
allocated per domain"), the number of evtchns per page is a power of
two, so there is no change to the number with either layout.

I had been avoiding bitfields for anything other than single-bit
flags. Is the extra spare byte preferable to the code size increase
from accessing the bitfields?

On my x86_64 build, the text size increases by 1684425 - 1684393 = 32
bytes.
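
(For a feel for where those bytes go, an illustrative fragment, not
actual Xen code: storing to a full u8 is a single byte store, whereas
storing to a 4-bit field forces a read-modify-write.)

    #include <stdint.h>

    struct plain   { uint8_t priority; };
    struct nibbles { uint8_t priority:4, last_priority:4; };

    void set_plain(struct plain *e, uint8_t p)
    {
        e->priority = p;    /* single byte store */
    }

    void set_nibbles(struct nibbles *e, uint8_t p)
    {
        /* load the byte, mask out the old nibble, OR in the new
         * value (assumed to fit in 4 bits), store the byte back */
        e->priority = p;
    }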

David
