
Re: [Xen-devel] [PATCH 00/07] HVM firmware passthrough



On Thu, 2012-04-05 at 21:06 +0100, Ross Philipson wrote:
> > I see. So you need to be able to find the individual tables so that
> > smbios_type_<N>_init can check for an overriding table <N> in the
> > passed-through tables; it seems reasonable to try to avoid needing to
> > parse a big lump of tables each time to see if the one you are
> > interested in is there.
> > 
> > I think this can work by having .../smbios/<N>/{address,etc} in
> > XenStore. You could also have .../smbios/OEM/{...} for the stuff for
> > smbios_type_vendor_oem_init, which I think could easily just be a single
> > lump?
> > 
> > I think you don't need the same thing for the ACPI side since there you
> > just provide secondary tables?
> 
> There could be quite a few SMBIOS tables being passed in, on the order
> of 20 - 30 of them, and they are pretty small. It seems a bit odd to
> break things up into lots of little chunks like this. There could be
> more than one ACPI table as well (multiple SSDTs and/or other static
> tables). Also the OEM SMBIOS tables are all discrete, so they could
> not go in as a single lump.

I'm not sure whether the points here are arguments for or against
having a single smbios blob vs. multiple ones?
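
Just to make the per-table idea concrete, the sort of layout I had in
mind looks roughly like this (the paths and key names below are purely
illustrative, nothing here is a settled interface):

    /local/domain/<domid>/hvmloader/smbios/0/address   = <gpa of table>
    /local/domain/<domid>/hvmloader/smbios/0/length    = <bytes>
    /local/domain/<domid>/hvmloader/smbios/17/address  = ...
    /local/domain/<domid>/hvmloader/smbios/17/length   = ...
    /local/domain/<domid>/hvmloader/smbios/oem/address = <gpa of OEM lump>
    /local/domain/<domid>/hvmloader/smbios/oem/length  = <bytes>

That way smbios_type_<N>_init only has to look up its own <N> key
rather than scan one big blob for the table it cares about.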

> > At the libxc layer you could have "struct xc_hvm_module *smbios". I
> > expect in the simple case this would just point into an array on the
> > caller's stack, but a caller could also do something more dynamic if
> > they wanted to. If you think it would be useful then a linked-list
> > thing here would also work.
> 
> Again assuming the xenstore approach, I guess it would probably have
> to be a linked list because there could be a fair number of them
> (mainly SMBIOS).

This is userspace so an array of, say, 100 items isn't really much of a
problem. But if a linked list works better for your use cases then
that's fine.
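
Purely as a sketch of the array case (the struct and the builder call
below are made up for illustration, none of this is an existing libxc
interface):

    /* Hypothetical module descriptor. */
    struct xc_hvm_module {
        const void *data;   /* table contents, caller-owned memory */
        size_t length;      /* size of the blob in bytes           */
    };

    /* Simple caller: a small array on the stack is plenty. */
    struct xc_hvm_module smbios[] = {
        { type0_blob,  type0_len  },
        { type17_blob, type17_len },
    };

    /* Hypothetical builder entry point taking the array and a count. */
    rc = xc_hvm_build_with_modules(xch, domid, &args,
                                   smbios, ARRAY_SIZE(smbios));

A linked-list version would just hang a next pointer off the struct
instead; either shape is fine by me.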

> > 
> > > Given all that, the only real issue I see at the moment is how to deal
> > > with multiple entries within a blob that don't readily specify their
> > > own lengths. I am in favor of a delimiting struct with possibly a
> > > terminating form (like a flag). Then all we put in xenstore is the gpe
> > > for each type.
> > >
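
On the delimiting struct: something like the below is what I would
picture, though the name and fields are only illustrative:

    /* Illustrative header prefixed to each entry within a blob. */
    struct hvm_fw_entry {
        uint32_t length;   /* bytes of data[] following this header */
        uint32_t flags;    /* e.g. a LAST flag terminating the list */
        uint8_t  data[];   /* the table itself                      */
    };

Then only the address of the first entry for each type needs to go
into xenstore and hvmloader can walk the rest itself.
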
> > > Finally I wanted to bring up again the idea of another helper
> > > component (lib maybe) that can be used to build and glom (I like that
> > > word) modules. Note that in many cases the passed-in firmware is
> > > going to be pulled from the host firmware. I already have bunches
> > > of code that does that. Perhaps I should start an RFC thread for
> > > that as a
> > > separate task? Thoughts on this?
> > 
> > You mean some sort of libacpi / libsmbios type things, helper routines
> > for parsing and manipulating tables etc? (such a thing doesn't already
> > exist somewhere?)
> > 
> > I suspect your use cases are the ones which would make the most extensive use
> > of such a thing but I also assume that at least some of it would be
> > useful to any caller which was passing in these tables. If you want to
> > maintain it within the Xen tree then that works for me.
> 
> Yeah, along those lines. I was thinking of maybe a libxenhvm that had
> utility routines for collecting the system firmware information that
> one wanted to pass to a guest. It could also provide an API that
> wrapped up the setting of the firmware values that hvmloader consumes
> including the bits that are on the shared info page you want to get
> rid of. The usefulness of this library is diminished if we go the
> all-xenstore route above, so I am going to shelve this until that is
> settled.

OK.

> Oh, and yeah, there are tools to get this information
> (acpidump/dmidecode) but they unfortunately do not come in library
> form or necessarily give you the desired output. And in newer kernels
> ACPI/DMI is accessible through sysfs.
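
Right, on recent enough kernels the raw tables are there to slurp
straight out of sysfs (/sys/firmware/acpi/tables/* for ACPI, and
/sys/firmware/dmi/entries/<type>-<instance>/raw with dmi-sysfs).
Something along these lines would do it, modulo error handling (just a
sketch, not tested):

    #include <stdio.h>
    #include <stdlib.h>

    /* Read one raw table file exposed by the kernel under sysfs. */
    static void *read_fw_table(const char *path, size_t *len_out)
    {
        FILE *f = fopen(path, "rb");
        size_t len = 0, cap = 4096, n;
        unsigned char *buf = malloc(cap);

        if ( !f || !buf )
            goto fail;

        /* Read incrementally rather than trusting a stat() size. */
        while ( (n = fread(buf + len, 1, cap - len, f)) > 0 )
        {
            len += n;
            if ( len == cap )
            {
                unsigned char *tmp = realloc(buf, cap *= 2);
                if ( !tmp )
                    goto fail;
                buf = tmp;
            }
        }

        fclose(f);
        *len_out = len;
        return buf;

     fail:
        if ( f )
            fclose(f);
        free(buf);
        return NULL;
    }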

Ian.


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

