
Re: [Xen-devel] SeaBIOS build issue



>>> On 23.08.13 at 10:18, Ian Campbell <Ian.Campbell@xxxxxxxxxx> wrote:
> On Thu, 2013-08-22 at 17:01 +0100, Jan Beulich wrote:
> 
>> And then to the build problem itself - the way that they put
>> together the binary image (via computing linker scripts listing the
>> sections in machine-adjusted order) makes it close to impossible to
>> find out where things go wrong. I decided to stop my attempts to
>> understand that logic after having wasted 2+ hours on this. I have
>> a vague feeling that less (or no) inlining may be representing part
>> of the problem.
> 
> This is an area of SeaBIOS which I don't really understand myself. I do
> know that it is very sensitive to compiler and binutils versions because
> of some of the magic it does, especially with older ones. IME it
> generally tests for those issues and aborts the build rather than
> building something bad -- but I guess being 256k isn't actually bad in
> isolation.

Correct. It is our glue code that's problematic (and once a few
more additions to the BIOS get done, more tool chains may end up
exceeding the pre-set 128k limit).
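
Just to illustrate the kind of ceiling involved (all names and
constants below are made up for the example, not taken from the
actual hvmloader sources):

#include <stdio.h>
#include <stdlib.h>

/* hypothetical pre-set ceiling on the embedded BIOS image */
#define BIOS_IMAGE_LIMIT (128 * 1024)

static void check_bios_image(size_t image_size)
{
    if (image_size > BIOS_IMAGE_LIMIT) {
        fprintf(stderr, "BIOS image (%zu bytes) exceeds the %d byte limit\n",
                image_size, BIOS_IMAGE_LIMIT);
        exit(1);  /* anything larger is simply rejected */
    }
}

int main(void)
{
    check_bios_image(256 * 1024);  /* a 256k SeaBIOS build trips this */
    return 0;
}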

Therefore I went and tried whether removing that limitation just
in hvmloader's glue code would work, and voilà - it does. I'll send
a patch once I've settled on a final shape for it.
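
The direction of the change, roughly (again only a sketch with
invented names; the actual patch will differ): size the reserved
region from the image actually in use, rather than from a fixed
constant:

#include <stdio.h>

#define PAGE_SIZE 4096

/* round the image size up to whole pages so the region stays aligned */
static unsigned int bios_reserved_size(unsigned int image_size)
{
    return (image_size + PAGE_SIZE - 1) & ~(unsigned int)(PAGE_SIZE - 1);
}

int main(void)
{
    /* a 256k image simply gets a 256k reservation instead of an error */
    printf("%u\n", bios_reserved_size(256 * 1024));
    return 0;
}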

> I've found the seabios@xxxxxxxxxxx list and Kevin in particular to be
> very helpful in answering these sorts of questions.

And I'll send there what I consider a desirable adjustment to the
SeaBIOS build process - rather than bumping anything beyond 128k
straight to 256k, it seems slightly better to do the bumps in 64k
increments (and perhaps even smaller steps could be used), thus
leaving a hole of at least 64k free even in the problem case here.
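
In other words (sketch only - this isn't the actual SeaBIOS build
logic, which lives in its Python build scripts):

#include <stdio.h>

#define STEP (64 * 1024)

/* round the payload up to the next 64k boundary instead of
 * jumping from 128k straight to 256k */
static unsigned int rom_size(unsigned int payload)
{
    return ((payload + STEP - 1) / STEP) * STEP;
}

int main(void)
{
    /* a 140k payload yields a 192k ROM, leaving a 64k hole free
     * rather than consuming the whole 256k */
    printf("%u\n", rom_size(140 * 1024));
    return 0;
}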

Jan
