
Re: [Xen-devel] [PATCH v2 2/2] xen/evtchn: optimize XSM ssid field



On 21/03/14 14:54, Daniel De Graaf wrote:
> On 03/21/2014 09:28 AM, David Vrabel wrote:
>> On 20/03/14 15:29, Daniel De Graaf wrote:
>>> When FLASK is the only enabled implementation of the XSM hooks in Xen,
>>> some of the abstractions required to handle multiple XSM providers are
>>> redundant and only produce unneeded overhead.  This patch reduces the
>>> memory overhead of enabling XSM on event channels by replacing the
>>> untyped ssid pointer from struct evtchn with a union containing the
>>> contents of the structure.  This avoids an additional heap allocation
>>> for every event channel, and on 64-bit systems, reduces the size of
>>> struct evtchn by 4 bytes.
>>
>> Without XSM the 64-bit structure is 29 bytes (so 32 including the
>> trailing padding).
>>
>> Adding either a 4 byte word or an 8 byte word results in a 40 byte
>> structure, which halves the number of evtchns per page (since each page
>> contains a power-of-two number of structs).
>>
>> I think you could swap the order of the fields in u.interdomain to get
>> better packing with the 4 byte flask_sid and end up with a 32 byte
>> struct evtchn (there's 6 bytes of padding between remote_port and
>> remote_dom).
>>
>> You may want to check the EVTCHNS_PER_BUCKET value before/after.
>>
>> David
> 
> Making this work correctly requires adding __packed to the interdomain
> structure (otherwise, the padding just remains at the end); with this,
> the structure is 24 bytes without XSM, 28 with FLASK only, and 32 with
> XSM_NEED_GENERIC_EVTCHN_SSID (all with one byte of padding remaining).
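
For illustration, here is a minimal sketch of the packing effect under
discussion, assuming a 2-byte remote_port and an 8-byte struct domain
pointer (consistent with the 6 bytes of padding mentioned above); the
simplified stand-in structures below are not the actual Xen definitions:

#include <stdio.h>
#include <stdint.h>

struct domain;  /* opaque here; only used through a pointer */

/* Current ordering: a 2-byte port followed by an 8-byte pointer leaves
 * 6 bytes of internal padding on a 64-bit build. */
struct interdomain_orig {
    uint16_t remote_port;
    struct domain *remote_dom;
};

/* Swapping the fields alone only moves that padding to the end of the
 * structure; marking it packed (GCC attribute syntax) removes it. */
struct interdomain_swapped {
    struct domain *remote_dom;
    uint16_t remote_port;
};

struct interdomain_packed {
    struct domain *remote_dom;
    uint16_t remote_port;
} __attribute__((__packed__));

int main(void)
{
    printf("original ordering : %zu bytes\n", sizeof(struct interdomain_orig));    /* typically 16 */
    printf("swapped only      : %zu bytes\n", sizeof(struct interdomain_swapped)); /* still 16 */
    printf("swapped + packed  : %zu bytes\n", sizeof(struct interdomain_packed));  /* 10 */
    return 0;
}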

Oh, now I remember why I didn't do this before.  Sorry for wasting your
time.

David
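
For context, a rough before-and-after sketch of the change described in
the quoted patch description; apart from flask_sid and
XSM_NEED_GENERIC_EVTCHN_SSID, which appear in this thread, the names
below are illustrative assumptions rather than the actual patch:

#include <stdint.h>

/* Before: every event channel carries an opaque pointer, and the XSM
 * module heap-allocates a per-channel security structure for it. */
struct evtchn_with_pointer {
    /* ... other event channel fields ... */
    void *ssid;                 /* 8 bytes on 64-bit, plus a heap object */
};

/* After: with FLASK as the only XSM provider, the 32-bit security ID is
 * stored inline, avoiding the per-channel allocation and shrinking the
 * field from 8 bytes to 4 on 64-bit builds. */
struct evtchn_with_union {
    /* ... other event channel fields ... */
    union {
        uint32_t flask_sid;     /* FLASK security identifier */
#ifdef XSM_NEED_GENERIC_EVTCHN_SSID
        void *generic_ssid;     /* fallback for other XSM implementations */
#endif
    } ssid;
};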

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

