
Re: [Xen-devel] [RFC] scf: SCF device tree and configuration documentation



Hello Julien,

Thank you for your comments.

> As you may have seen in the description of the "device_tree" option, it is complex to verify the partial device tree because of the libfdt design. So without fully auditing libfdt and fixing the holes, this suggestion would be an attack vector against the hypervisor.
I understand these concerns, but I am not sure we should be that afraid of an attack coming from a domain which is already privileged enough to run other domains. It seems to me that attacking the hypervisor through libfdt is one of the less valuable gains from a compromised dom0. As for system stability issues due to libfdt holes, I just had a crazy idea: let us put the FDT unflattening code into a generic EL0 application :)
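Just to illustrate the kind of up-front checking that is available today (a sketch of the idea only, not something from the series): recent libfdt does provide fdt_check_header() and fdt_check_full() for structural validation of an untrusted blob, but that only covers the flattened format itself, not what the later accessors do with the data, which is why the full audit you mention would still be needed. Something like:

/* Sketch only: structural validation of an untrusted partial DTB
 * with stock libfdt, before any unflattening. This checks the
 * flattened format, not the semantics of the nodes, and the later
 * libfdt accessors still walk guest-controlled data. */
#include <libfdt.h>

static int validate_partial_dtb(const void *blob, size_t size)
{
    int rc;

    rc = fdt_check_header(blob);
    if ( rc )
        return rc;                      /* bad magic, version or sizes */

    if ( fdt_totalsize(blob) > size )
        return -FDT_ERR_TRUNCATED;      /* blob claims more than was copied in */

    /* Full structural walk: offsets, string table, nesting, etc. */
    return fdt_check_full(blob, size);
}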

> Whilst I do agree that this could be an interface between the user and the toolstack, we shall look into introducing a series of DOMCTLs for the toolstack <-> hypervisor. What would be the issue with doing that?
There were two reasons that turned me towards using a device tree as the configuration format:
- I did not come up with a format clear and flexible enough to describe an SCF configuration. Any format that lets us describe a complex virtual coprocessor with several MMIOs and IRQs, and describe carving out several virtual coprocessors for a domain from one physical coprocessor, ends up looking like a device tree node or something similar (see the sketch below). This could somehow fit into the DomU configuration file, but not the Xen command line.
- The idea of reusing the same SCF configuration code for both Dom0 and DomU appealed to me.
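To make the first point concrete, here is a rough and purely hypothetical sketch of what a flat hypercall payload for a single virtual coprocessor might have to carry; the structure and field names below are made up for illustration and are not a proposal:

/* Hypothetical illustration only: even a "flat" DOMCTL-style payload
 * for one virtual coprocessor needs nested, variable-length lists of
 * MMIO ranges and IRQs, which is what a device tree node expresses
 * naturally. */
#include <stdint.h>

struct vcoproc_mmio {
    uint64_t base;
    uint64_t size;
};

struct vcoproc_irq {
    uint32_t irq;
    uint32_t flags;
};

struct vcoproc_config {
    char     compatible[64];        /* which physical coprocessor to share */
    uint32_t nr_mmios;
    uint32_t nr_irqs;
    struct vcoproc_mmio mmios[8];   /* arbitrary illustrative limits */
    struct vcoproc_irq  irqs[8];
};

/* And a domain may need several of these, carved out of the same
 * physical coprocessor. */
struct vcoproc_domain_config {
    uint32_t nr_vcoprocs;
    struct vcoproc_config vcoprocs[4];
};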

Everything said above still holds true for me, but I have recently become concerned that once we want to extend SCF to x86, or to systems configured through ACPI, we will have to reconsider the SCF configuration mechanism again.

--

*Andrii Anisov*


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

