Re: [PATCH] libs/light: make it build without setresuid()
On Wed, Jan 27, 2021 at 04:03:04PM +0000, Ian Jackson wrote:
> Ian Jackson writes ("Re: [PATCH] libs/light: make it build without
> setresuid()"):
> > Manuel Bouyer writes ("Re: [PATCH] libs/light: make it build without
> > setresuid()"):
> > > On Wed, Jan 20, 2021 at 05:10:36PM +0000, Ian Jackson wrote:
> > > > My last mail had in it a thing that claims to be a proof that this is
> > > > not possible.
> > >
> > > This code:
> ...
> > > actually works on NetBSD. Processes running as uid 375 are killed, and
> > > the seteuid(0) call succeeds (showing that the saved user id is still 0).
>
> I guess I must have been wrong.
>
> > > What do you think ?
> > >
> > > As this is supported by Xen, I hope I can at last run qemu with a
> > > non-zero uid.
> >
> > The logic for deciding what user to run qemu as, and whether to kill
> > by uid or by pid, is in libxl_dm.c, in the function
> > libxl__domain_get_device_model_uid.
> >
> > The dm_restrict flag turns on various other things too.
>
> I think I have lost track of where we are with this patch. I would
> like to see all this properly sorted in Xen 4.15.
>
> How about I write a patch splitting the relevant part up into a
> version for systems with setresuid and systems without? Then you
> could fill in the missing part.

Yesterday I sent a v2 with the rewrite you suggested, but I'm fine with
you doing the rewrite.

> Should I expect the non-setresuid OS to provide effectively the whole
> of kill_device_model_uid_child, or just a replacement for the
> setresuid call and surrounding logging, something like
> kill_device_model_uid_child_setresuid

As far as I'm concerned, kill_device_model_uid_child_setresuid() is
enough. Unfortunately, I don't think I'll have time to work on dm
restriction for NetBSD before 4.15.

--
Manuel Bouyer <bouyer@xxxxxxxxxxxxxxx>
     NetBSD: 26 years of experience will always make the difference
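
[Editor's note: for readers following the seteuid() discussion above, here is a
minimal standalone sketch of the mechanism being described. It is not the code
elided as "..." in the quoted mail, nor libxl's kill_device_model_uid_child();
the target uid 375 is simply the uid mentioned in the test report, and the
claim that kill(-1, ...) reaches only that uid's processes is the NetBSD
behaviour Manuel reports, not something guaranteed everywhere.]

    /*
     * Hedged sketch only, assuming the process starts as root and that a
     * hypothetical device model was run as uid 375.  Do not run this on a
     * machine with processes you care about: kill(-1, SIGKILL) is real.
     */
    #include <errno.h>
    #include <signal.h>
    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        uid_t dm_uid = 375;          /* hypothetical device-model uid */

        /* Drop only the effective uid; seteuid() leaves the saved uid at 0. */
        if (seteuid(dm_uid) == -1) {
            perror("seteuid(dm_uid)");
            return 1;
        }

        /*
         * Signal every process we may still signal.  In the NetBSD test
         * quoted above this reportedly hits only the uid-375 processes; on
         * systems where the sender's real uid (still 0 here) also grants
         * kill permission, it would reach far more, which is why the
         * existing libxl code changes the real uid as well via setresuid().
         */
        if (kill(-1, SIGKILL) == -1 && errno != ESRCH)
            perror("kill");

        /* The saved uid is still 0, so euid 0 can be regained without
         * setresuid(). */
        if (seteuid(0) == -1) {
            perror("seteuid(0)");
            return 1;
        }

        printf("euid restored to %d\n", (int)geteuid());
        return 0;
    }

[If the split is made at the level Ian suggests, a non-setresuid variant of
kill_device_model_uid_child_setresuid() would presumably replace the
setresuid() call with a seteuid()/seteuid(0) pair along these lines; that is
an assumption about the eventual patch, not something settled in this thread.]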