
Re: [Xen-devel] [PATCH v2 4/5] libxl: call hotplug scripts from libxl for vif



On 18 Apr 2012, at 14:31, Ian Campbell wrote:

Ok, I will add an extra parameter to libxl__initiate_device_remove that turns the destruction of the frontend xenstore entries on or off. We will first call it without removing the frontend, and if that fails we will call libxl__initiate_device_remove from the callback, this time forcing the removal of the frontend.

If libxl__initiate_device_remove fails then you should be calling
libxl__initiate_device_destroy, I think, so no need for a param to
_remove?

Not really… This is how I think the removal path should work:

1- Wait for backend to turn to state 6 -----> if ok then execute hotplug and remove front/backend
2- If timeout: nuke frontend and wait for backend state 6 -----> if ok then execute hotplug and remove front/backend
3- If timeout: nuke front/backend

Hang on, can't you infer the type from the backend path? One should contain vif and the other something else (tap). Or is this because of the stupid sharing of the vif dir for both vif and tap from the hotplug scripts' point of view?

Nope, tap doesn't have a backend xenstore entry, only vifs do, so this is kind of a hack because I was marking a vif's path as tap...

Right, that's the stupid sharing I was referring to (which IIRC I added :-/)


It's probably too late in the 4.2 cycle to direct the tap hotplug script to a different backend dir, so I think the best thing to do for now is to put this key somewhere else so that it doesn't become a guest-visible API (which is what happens where you have put it). The same place as udev_disable would work fine.

Do these paths sound ok:

/libxl/devices/<domid>/nic/<devid>/udev_disable
/libxl/devices/<domid>/nic/<devid>/type

Works for me.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
