
Re: [PATCH v2 4/6] xen_pvdev: Do not assume Dom0 when creating a directory



On 23 November 2023 11:43:35 GMT, Volodymyr Babchuk 
<Volodymyr_Babchuk@xxxxxxxx> wrote:
>
>Hi David,
>
>David Woodhouse <dwmw2@xxxxxxxxxxxxx> writes:
>
>> On Thu, 2023-11-23 at 09:28 +0000, Paul Durrant wrote:
>>> On 23/11/2023 00:07, Volodymyr Babchuk wrote:
>>> > 
>>> > Hi,
>>> > 
>>> > Volodymyr Babchuk <volodymyr_babchuk@xxxxxxxx> writes:
>>> > 
>>> > > Hi Stefano,
>>> > > 
>>> > > Stefano Stabellini <sstabellini@xxxxxxxxxx> writes:
>>> > > 
>>> > > > On Wed, 22 Nov 2023, David Woodhouse wrote:
>>> > > > > On Wed, 2023-11-22 at 15:09 -0800, Stefano Stabellini wrote:
>>> > > > > > On Wed, 22 Nov 2023, David Woodhouse wrote:
>>> > > > > > > On Wed, 2023-11-22 at 14:29 -0800, Stefano Stabellini wrote:
>>> > > > > > > > On Wed, 22 Nov 2023, Paul Durrant wrote:
>>> > > > > > > > > On 21/11/2023 22:10, Volodymyr Babchuk wrote:
>>> > > > > > > > > > From: Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>
>>> > > > > > > > > > 
>>> > > > > > > > > > Instead of forcing the owner to domid 0, use 
>>> > > > > > > > > > XS_PRESERVE_OWNER to
>>> > > > > > > > > > inherit the owner of the directory.
>>> > > > > > > > > 
>>> > > > > > > > > Ah... so that's why the previous patch is there.
>>> > > > > > > > > 
>>> > > > > > > > > This is not the right way to fix it. The QEMU Xen support 
>>> > > > > > > > > is *assuming* that
>>> > > > > > > > > QEMU is either running in, or emulating, dom0. In the 
>>> > > > > > > > > emulation case this is
>>> > > > > > > > probably fine, but in the 'real Xen' case it should be using 
>>> > > > > > > > > the correct domid
>>> > > > > > > > > for node creation. I guess this could either be supplied on 
>>> > > > > > > > > the command line
>>> > > > > > > > > or discerned by reading the local domain 'domid' node.
>>> > > > > > > > 
>>> > > > > > > > yes, it should be passed as command line option to QEMU
>>> > > > > > > 
>>> > > > > > > I'm not sure I like the idea of a command line option for 
>>> > > > > > > something
>>> > > > > > > which QEMU could discover for itself.
>>> > > > > > 
>>> > > > > > That's fine too. I meant to say "yes, as far as I know the 
>>> > > > > > toolstack
>>> > > > > > passes the domid to QEMU as a command line option today".
>>> > > > > 
>>> > > > > The -xen-domid argument on the QEMU command line today is the 
>>> > > > > *guest*
>>> > > > > domain ID, not the domain ID in which QEMU itself is running.
>>> > > > > 
>>> > > > > Or were you thinking of something different?
>>> > > > 
>>> > > > Oops, you are right and I understand your comment better now. The 
>>> > > > backend
>>> > > > domid is not on the command line but it should be discoverable (on
>>> > > > xenstore if I remember right).
>>> > > 
>>> > > Yes, it is just "~/domid". I'll add a function that reads it.
>>> > 
>>> > Just a quick question for the QEMU folks: is it better to add a global
>>> > variable where we will store our own Domain ID, or will it be okay to
>>> > read the domid from Xenstore every time we need it?
>>> > 
>>> > If the global variable variant is better, what is the preferred place to
>>> > define this variable? system/globals.c?
>>> > 
>>> 
>>> Actually... is it possible for QEMU just to use a relative path for the 
>>> backend nodes? That way it won't need to know its own domid, will it?
>>
>> That covers some of the use cases, but it may also need to know its own
>> domid for other purposes. Including writing the *absolute* path of the
>> backend to a frontend node?
>
>Is this case possible? As I understand it, QEMU writes frontend nodes only
>when it emulates Xen; otherwise this is done by the Xen toolstack. And in the
>case of Xen emulation, QEMU always assumes the role of Domain-0.


No, you can hotplug and unplug devices in QEMU even under real Xen. And if QEMU 
in a driver domain is given sufficient permission to write to its guest's 
frontend nodes, it should not need to be in dom0.
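
For illustration only (a rough sketch, not the actual patch): the kind of
thing being discussed, using plain libxenstore rather than QEMU's internal
wrappers. It reads the local domain's own domid via the relative "domid"
node (i.e. ~/domid) and then uses it to write the absolute backend path
into a guest's frontend area. The device type, guest domid and helper name
below are made up for the example.

/* Sketch only -- plain libxenstore, not QEMU internals. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <xenstore.h>

/* Read ~/domid; relative xenstore paths resolve under the caller's
 * /local/domain/<domid> home directory. */
static unsigned int read_own_domid(struct xs_handle *xsh)
{
    unsigned int len, domid = 0;
    char *val = xs_read(xsh, XBT_NULL, "domid", &len);

    if (val) {
        domid = (unsigned int)strtoul(val, NULL, 10);
        free(val);
    }
    return domid;    /* falls back to 0 (Dom0) if the node is missing */
}

int main(void)
{
    struct xs_handle *xsh = xs_open(0);
    unsigned int backend_domid;
    unsigned int frontend_domid = 1;    /* example guest, assumed */
    char be_path[128], fe_node[128];

    if (!xsh) {
        perror("xs_open");
        return 1;
    }

    backend_domid = read_own_domid(xsh);

    /* Absolute backend path, e.g. for a vkbd device 0. */
    snprintf(be_path, sizeof(be_path),
             "/local/domain/%u/backend/vkbd/%u/0",
             backend_domid, frontend_domid);

    /* The frontend's "backend" node has to carry this absolute path,
     * which is why the backend needs to know its own domid even if it
     * writes its own nodes via relative paths. */
    snprintf(fe_node, sizeof(fe_node),
             "/local/domain/%u/device/vkbd/0/backend", frontend_domid);

    if (!xs_write(xsh, XBT_NULL, fe_node, be_path, strlen(be_path))) {
        fprintf(stderr, "xs_write %s failed\n", fe_node);
    }

    xs_close(xsh);
    return 0;
}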




 

