
Re: [PATCH test-artifacts v3 03/13] Add debian rootfs artifact



On Wed, Apr 15, 2026 at 07:59:34PM +0200, Marek Marczykowski-Górecki wrote:
> On Wed, Apr 15, 2026 at 11:50:38AM +0000, Anthony PERARD wrote:
> > > diff --git a/scripts/debian-rootfs.sh b/scripts/debian-rootfs.sh
> > > new file mode 100755
> > > index 000000000000..7cb8a96e39c0
> > > --- /dev/null
> > > +++ b/scripts/debian-rootfs.sh
> > ...
> > > +PKGS=(
> > > +    # System
> > > +    bridge-utils
> > > +    dropbear
> > > +    udev
> > > +    systemd-sysv
> > > +    iproute2
> > > +    inetutils-ping
> > > +    util-linux
> > > +    cpio
> > 
> > Is `cpio` going to be used in dom0? The Alpine rootfs doesn't have it.
> 
> Alpine does have it, via busybox. That said, I don't see it used in any
> current test.

It turns out I'm actually using `cpio` on the Alpine rootfs, but only
to work around a very slow download of the rootfs via the machine's
UEFI firmware (and GRUB). I boot with a smaller rootfs; then, once
Linux has started, I download a rootfs that contains only the Xen
tools and extract it with `cpio`. (Loading the full rootfs over
netboot takes about 15 minutes on that machine.)
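
For reference, a minimal sketch of that extraction step (the URL and
archive name here are made up; adjust to wherever the artifact is
served):

    # Fetch the tools-only rootfs and unpack it over / :
    # -i extract, -d create leading dirs, -m keep mtimes, -u overwrite
    cd /
    wget -O - http://test-controller/xen-tools.cpio.gz \
        | gzip -dc | cpio -idmu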

> > > +# don't need persistent logging, avoid journal flush service
> > > +rmdir var/log/journal
> > 
> > I think this would better be done with:
> > 
> >     mkdir -p /etc/systemd/journald.conf.d
> >     cat > /etc/systemd/journald.conf.d/storage.conf <<EOF
> >     [Journal]
> >     Storage=volatile
> >     EOF
> > 
> > because I think systemd intends to change the default behavior in a
> > future release, and a config file is more explicit.
> 
> +1 
> 
> > > +# Create rootfs
> > > +cd /
> > > +{
> > > +    PATHS="bin etc home init lib lib64 mnt opt root sbin srv tmp usr var"
> > > +    find $PATHS -print0
> > > +    echo -ne "dev\0proc\0run\0sys\0"
> > > +} | cpio -0 -H newc -o | gzip > "${COPYDIR}/rootfs.cpio.gz"
> > 
> > You should add "-R0:0" to the `cpio` command, like we do for the alpine
> > rootfs.
> 
> Hm, I'm not sure that's a good idea. There are a few intentionally
> non-root files in Debian. Right now those are:
> 
> -rw-r-----   1 root     42            496 Apr  1 01:08 etc/gshadow
> -rw-r-----   1 root     42            564 Apr  1 01:08 etc/shadow
> -rw-r-----   1 root     42            444 Apr  1 01:08 etc/gshadow-
> -rw-r-----   1 root     42            565 Apr  1 01:08 etc/shadow-
> -rwxr-sr-x   1 root     42          31256 Apr 19  2025 usr/bin/expiry
> -rwxr-sr-x   1 root     42         113848 Apr 19  2025 usr/bin/chage
> -rwsr-xr--   1 root     printadm    51272 Mar  8  2025 usr/lib/dbus-1.0/dbus-daemon-launch-helper
> -rwxr-sr-x   1 root     42          43256 Jun 29  2025 usr/sbin/unix_chkpwd
> drwxr-xr-x   2 systemd- systemd-        0 Apr  1 01:08 var/lib/systemd/network
> drwxr-xr-x   2 42       root            0 Apr  1 01:07 var/lib/apt/lists/auxfiles
> drwx------   2 42       root            0 Apr  1 01:07 var/lib/apt/lists/partial
> drwxrwsr-x   2 root     mem             0 Sep  8  2025 var/mail
> -rw-rw-r--   1 root     43              0 Sep  8  2025 var/log/wtmp
> -rw-rw-r--   1 root     43              0 Sep  8  2025 var/log/lastlog
> -rw-rw----   1 root     43              0 Sep  8  2025 var/log/btmp
> -rw-r-----   1 root     adm         31508 Apr  1 01:08 var/log/apt/term.log
> drwx------   2 42       root            0 Apr  1 01:08 var/cache/apt/archives/partial
> 
> While it _might_ not explode right now if we reset it to root, it may
> cause issues in the future (for example, APT likes to run downloads as
> an unprivileged user, with write access only to
> /var/lib/apt/lists/partial).

Ah, right, "-R0:0" probably only makes sense when we build Xen as a
build user.
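
As a side note for readers of the archive: "-R0:0" makes cpio record
every entry as root:root, discarding the on-disk owner/group. A quick,
purely illustrative way to see the effect (run as root from /):

    # With the override, etc/shadow is listed as root/root; without it,
    # the on-disk root/shadow ownership (gid 42 above) is preserved.
    echo etc/shadow | cpio -o -H newc -R 0:0 | cpio -tv
    echo etc/shadow | cpio -o -H newc | cpio -tv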

So, with the change to journald config:
Reviewed-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>

Cheers,


--
Anthony Perard | Vates XCP-ng Developer

XCP-ng & Xen Orchestra - Vates solutions

web: https://vates.tech

 

