
[Xen-devel] Debian Sarge Root Raid + LVM + XEN install guide (LONG)


  • To: <xen-devel@xxxxxxxxxxxxxxxxxxxxx>
  • From: "Tom Hibbert" <tom@xxxxxxxxx>
  • Date: Thu, 13 Jan 2005 16:08:03 +1300
  • Delivery-date: Thu, 13 Jan 2005 03:12:48 +0000
  • List-id: List for Xen developers <xen-devel.lists.sourceforge.net>

Hello fellow xenophiles and happy new year!

I've documented the install procedure for a prototype server here since I
found no similar document anywhere on the net. It's a Sarge-based Domain0
on Linux root RAID from scratch, using LVM to store the data for the domU
mail server and its mailstore. I humbly submit my notes in the hope that
they are useful to some weary traveller.

Have fun!



Debian Sarge XEN dom0 with Linux Root Raid and LVM

Hardware: P4 3.2GHz LGA775
            Asus P5GD1-VM
            1GB DDR400 DRAM
            2x 80GB Seagate SATA disks

Reasons for using software RAID (over Intel ICH RAID or more expensive
SCSI RAID)
        1. Speed
           Bonnie++ shows Linux software RAID is MUCH faster than ICH5
           (at least under Linux)
        2. Reliability
           I have observed that frequent disk access with small files
           has destroyed ICH5 RAID arrays in the past (at least under
           Linux)
        3. Recovery
           I had a bad experience with the death of an Adaptec 3200S
           controller not long ago. The array was nonrecoverable because
           a replacement card could not be sourced in time. Additionally
           the firmware revision of the 3200S was unknown. (Recovery from
           controller death, if even possible, requires the same firmware
           revision as the original card; since that was not known we
           would have had to guess, which takes time, and time is money
           when you have a dead server.)
        4. Price
           Reduce the cost of hardware to the client because we aren't
           using expensive RAID controllers
        5. Prevalence
           It is much easier to source standard disks than it is to
           source SCSI disks (in the case of using SCSI RAID controllers).
           It is also much easier to source a standard SATA controller
           than it is to source a RAID controller

Reasons for using XEN
        1. Recovery
           Putting all network services inside XEN virtual machines that
           can be backed up makes disaster recovery a no-brainer
        2. Better utilisation of hardware
           Stacking virtual machines allows more efficient use of
           hardware (cost effectiveness)
        3. It's just cooler :)

Methodology
        1. Setting up the hardware - setting SATA to compatible mode
        2. Boot off Feather Linux USB key
        3. Partition primary drive
        4. Install base system
        5. Chroot into base system
        6. Install C/C++ development packages
        7. Install XEN packages
        8. Configure/build/install XEN Dom0 kernel
        9. Install GRUB
        10. Reboot to base system and set SATA to enhanced mode
        11. Migrate system into RAID1 and test
        12. Configure/build/install XEN DomU kernel
        13. Configure LVM
        14. Create DomU environment
      * 15. Install services into DomU
        16. Configure XEN to boot DomU automatically
      * 17. Testing
      * 18. Deployment

* Not covered by this document


1. Setting up the hardware
   -----------------------

Standard stuff here. Set the mode for SATA to Compatible so that
Feather's kernel is able to access the hard disks.

2. Boot off Feather Linux USB key
   ------------------------------

Feather is fantastic because it allows one to set up a Debian system
without having to boot from the now heavily outdated Woody install CD.
It supports more hardware, and it also allows easy installation to a
system without a CD-ROM drive in a build network without an 'evil'
segment (PXE boot). It also makes a convenient rescue platform.
http://featherlinux.berlios.de

3. Partition primary drive
   -----------------------

Feather Linux does not properly support the ICHx and it doesn't have the
administration tools for making RAID arrays. Therefore the setup method
we will use is to build the base system on a single disk and then
migrate it into RAID1. Trust me, this is much easier than it sounds!

I partitioned the primary drive as follows

   Device Boot      Start         End      Blocks   Id  System
/dev/hda1               1           3       24066   fd  Linux raid autodetect
/dev/hda2               4         501     4000185   fd  Linux raid autodetect
/dev/hda3             502        9605    73127880   fd  Linux raid autodetect
/dev/hda4            9606        9729      996030   fd  Linux raid autodetect

using hda2 for root and hda1 for boot with swap on hda4. hda3 is not
used yet.

Format and mount up the drive to /target:

# mkdir /target
# mkfs.ext3 /dev/hda1
# mkfs.ext3 /dev/hda2
# mount /dev/hda2 /target
# mkdir /target/boot
# mount /dev/hda1 /target/boot


4. Install the base system
   ----------------------

Set up Feather with APT and debootstrap:

# dpkg-get
# apt-get install debootstrap

Install the base system

# debootstrap sarge /target

Perform basic configuration

# vi /target/etc/fstab

/dev/sda2        /       ext3    defaults        0       1
/dev/sda1        /boot   ext3    defaults        0       2
proc             /proc   proc    defaults        0       0

You may be asking why I am putting sda here. The reason is that once I
set the ICH6 to Enhanced Mode and reboot into the fresh 2.6.9 xen0
kernel with SATA support compiled in, the drives appear as SCSI devices:
hda will be enumerated as /dev/sda.
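
If you ever need to confirm what the new kernel is calling the disks,
the kernel's own partition list settles it once you've rebooted in step
10 (just a convenience check, not part of the original procedure):

# cat /proc/partitions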

5. Chroot into base system
   -----------------------

# umount /dev/hda1
# cd /target
# chroot .
# su -
# mount /dev/hda1 /boot

Unmounting and remounting boot is important for configuring GRUB later.

Some more configuration needs to be done at this point:

# rm /etc/resolv.conf
# rm /etc/hostname
# echo xen0-test > /etc/hostname
# echo nameserver 210.55.13.3 > /etc/resolv.conf

6. Install C/C++ packages
   ----------------------

# apt-setup
# apt-get update
# dselect update
# tasksel
(Select C/C++ development packages)

7. Install XEN packages
   --------------------
  
Until Adam's packages get released I am using some homebrew packages
descended from Brian's original work.

# mkdir xen
# cd xen
# apt-get install wget
# wget -r http://cryptocracy.hn.org/xen/
# cd cryptocracy.hn.org/xen
# dpkg -i *.deb
# apt-get -f install

8. Configure/build/install XEN dom0 kernel
   ---------------------------------------

Since this is the first time configuring XEN on this hardware I am
building the kernel from scratch.
When we get more of these servers I will install a prebuilt debianised
kernel on them.

# cd /usr/src/
# tar -jxvf kernel-source-2.6.9.tar.bz2
# cd kernel-source-2.6.9
# export ARCH=xen
# cp ~/xen/cryptocracy.hn.org/xen/config.xen0 .config
# make menuconfig
(Make changes as appropriate for this hardware)
# make
# make modules_install
# cp vmlinuz /boot/vmlinuz-2.6.9-dom0

9. Configure GRUB
   --------------

# apt-get install grub
# grub-install /dev/hda
# update-grub

Now edit the grub menu.lst file and modify the kernel definition so it
looks like this:

title Xen 2.0.1 / Xenolinux 2.6.9
root (hd0,0)
kernel /xen.gz dom0_mem=131072
module /vmlinuz-2.6.9-dom0 root=/dev/sda2 ro console=tty0

10. Reboot to base system and revert SATA configuration to Enhanced mode
    --------------------------------------------------------------------

# reboot

Set the relevant option in the BIOS and we're good to go.

11. Migrate to RAID1 and test
    -------------------------

We've just built a complete Dom0 base system on the first disk. In order
to migrate this into RAID1, we will create a RAID array using the second
disk only, duplicate the data onto the second drive, reboot into it and
then re-add the first drive to the array. Sounds complex, but it isn't.
This is another advantage of Linux RAID over conventional RAID: it is
easy to migrate from a single disk to a RAID configuration.


First we need to partition the second disk exactly like the first:

# sfdisk -d /dev/sda > ~/partitions.sda

Having this data backed up is an incredibly good idea. I experienced a
catastrophic failure on one server once by enabling DMA with a buggy
OSB4 driver. The partition table was destroyed. Using the partition data
backed up in the manner above I was able to restore the partition table
and found that my data (an important IMAP store) was still intact.

Duplicating the partition table (or restoring from backup) is simple:

# sfdisk /dev/sdb < ~/partitions.sda

That's it. The two drives are now identically partitioned.
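
If you want to double-check that before building arrays on top of it,
one way (a convenience sketch, assuming bash for the process
substitution) is to dump the second disk too and diff the dumps with the
device names normalised:

# sfdisk -d /dev/sdb > ~/partitions.sdb
# diff <(sed 's/sda/sdX/g' ~/partitions.sda) <(sed 's/sdb/sdX/g' ~/partitions.sdb)

No output means the two tables match.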

Now we need to initialise the RAID on the second disk without destroying
the data on the first.

# apt-get install mdadm raidtools2

Begin by creating the raidtab. Mine looks like this:

raiddev /dev/md0
        raid-level 1
        nr-raid-disks 2
        persistent-superblock 1
        chunk-size 8
        
        device  /dev/sda1
        failed-disk 0
        device /dev/sdb1
        raid-disk 1

... repeated for each partition. Marking the partitions on sda - our
source drive - as failed BEFORE
creating the raid array is very important as it prevents them from being
overwritten by mkraid.
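
For illustration only (the partition-to-md mapping is your choice; this
sketch assumes md1 is the root array built from the second partitions,
which matches the fstab used later in this guide), the next stanza looks
much the same:

raiddev /dev/md1
        raid-level 1
        nr-raid-disks 2
        persistent-superblock 1
        chunk-size 8

        device  /dev/sda2
        failed-disk 0
        device /dev/sdb2
        raid-disk 1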

Create the RAID disks now.

# for i in `seq 0 3`; do mkraid /dev/md$i; done

Format and mount the root and boot partitions and initialise swap:

# mkfs.ext3 /dev/md0
# mkfs.ext3 /dev/md1
# mkswap /dev/md2
# mkdir /target
# mount /dev/md1 /target
# mkdir /target/boot
# mount /dev/md0 /target/boot

Copy the contents of our base system into the RAID we've just created:

# ls -1 / | grep -vE '^(proc|target)$' | while read line; do cp -afx /$line /target; done
# cp -afx /boot/* /target/boot

Modify the target's fstab and grub configuration as follows:

/target/etc/fstab now looks like this:

/dev/md1        /       ext3    defaults        0       1
/dev/md0        /boot   ext3    defaults        0       2
proc            /proc   proc    defaults        0       0
/dev/md2        none    swap    sw              0       0

And change the kernel definition in /target/boot/grub/menu.lst slightly:

module /vmlinuz-2.6.9-dom0 root=/dev/md1 ro console=tty0

Unmount /target/boot:

# umount /target/boot

Chroot into the target:

# cd /target
# chroot .
# su -

Remount boot and install grub:

# mount -a
# grub-install /dev/sdb
# update-grub
# exit
# logout

We're now ready to reboot into our new RAID! 

# reboot

Most modern boards these days (at least the ASUS ones, which are all I
use) have an option to select the boot device. On the P4 and P5 series
mainboards this is accessed through F8. As your system is booting hit F8
and choose the second drive. If your system does not support this you
can change the boot order in the BIOS, or if you prefer you can edit the
GRUB options by pressing 'e' at the prompt.

Once the system has rebooted you should now be inside your RAID setup.
It's time to import the first
drive into the array.

First edit the raidtab and mark sda as usable:

raiddev /dev/md0
        raid-level 1
        nr-raid-disks 2
        persistent-superblock 1
        chunk-size 8
        
        device  /dev/sda1
        raid-disk 0
        device /dev/sdb1
        raid-disk 1

... etc. Now add the partitions on sda as members using raidhotadd:

# raidhotadd /dev/md0 /dev/sda1

Rinse and repeat for each partition, or use a tricky bash one-liner like
the sketch below :)
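
One possible one-liner (a sketch only; it reads each array's existing
sdb member out of /proc/mdstat and hot-adds the matching sda partition,
which works because both disks are partitioned identically):

# for i in `seq 0 3`; do raidhotadd /dev/md$i /dev/sda`grep "^md$i " /proc/mdstat | sed 's/.*sdb\([0-9]\).*/\1/'`; done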

The mirror is now syncing each partition in sequence. You can check the
status of this process by periodically cat-ing /proc/mdstat.
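
If you'd rather watch the resync than poll it by hand, watch (from the
procps package) does the job:

# watch -n 5 cat /proc/mdstat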

Once each partition is synced your mirror is complete and you can
reboot, remove and shuffle drives about to your heart's content, or at
least until you're satisfied that the root RAID is working correctly.

12. Configure/build/install XEN domU kernel
    ---------------------------------------

There's no point in building the domU kernel until you're ready to use
it. If I were using a prebuilt kernel package I would have included the
domU kernel, so this step would be avoided.

# cd /usr/src/kernel-source-2.6.9
# make clean
# export ARCH=xen
# cp ~/xen/cryptocracy.hn.org/xen/config.xenU .config
# make menuconfig
(Make changes as appropriate)
# make
# make modules_install
# cp vmlinuz /boot/vmlinuz-2.6.9-domU

13. Configure LVM
    -------------

I use LVM (via devmapper) to store the domU VBDs, including their data.
This allows for easy resizing of partitions/images as required by
services.
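
For example, growing the mailstore later is just a matter of extending
the logical volume and then the filesystem on top of it (an illustrative
sketch; the +10G is arbitrary, and resize_reiserfs from reiserfsprogs
grows the filesystem to fill its device):

# lvextend -L+10G /dev/xen/store
# resize_reiserfs /dev/xen/store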

# apt-get install lvm10 lvm2

Initialise the partition as a physical volume:

# pvcreate /dev/md3

Create a volume group for xen:

# vgcreate xen /dev/md3

14. Create domU environment
    -----------------------

Create logical volumes for the service domU and its mailstore:

# lvcreate -L4096M -n mail xen
# lvcreate -L65000M -n store xen

Format and mount the domU VBD:

# mkfs.ext3 /dev/xen/mail
# mount /dev/xen/mail /target

Install the base system on the domU:

# export ARCH=i386
# apt-get install debootstrap
# debootstrap sarge /target

Configure the target:

# cd /target
# chroot .
# su -
# rm /etc/hostname
# rm /etc/resolv.conf
# echo mail > /etc/hostname
# echo nameserver 210.55.13.3 > /etc/resolv.conf
# apt-setup

Edit /etc/fstab:

/dev/hda1       /       ext3    errors=remount-ro       0       1
/dev/hdb1       /store  reiserfs defaults               0       2
proc            /proc   proc    defaults                0       0

Edit /etc/network/interfaces:

auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

# exit
# logout

Create the config file for the new domain

# cp /etc/xen/xmexample1 /etc/xen/mail

Edit the file and change the name and disk parameters:

name = "mail"
disk = [ 'phy:xen/mail,hda1,w', 'phy:xen/store,hdb1,w' ]
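
Depending on what your copy of xmexample1 has for its kernel line, you
will probably also want to point it at the domU kernel built in step 12.
A hedged example of how the relevant lines might end up (the memory
value here is illustrative only, size it to taste):

kernel = "/boot/vmlinuz-2.6.9-domU"
memory = 128
name = "mail"
disk = [ 'phy:xen/mail,hda1,w', 'phy:xen/store,hdb1,w' ]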

Unmount the target and format the store partition:

# umount /target
# apt-get install reiserfsprogs
# mkfs.reiserfs /dev/xen/store

Fire up your new xenU domain!

# /etc/init.d/xend start
# xm create -f /etc/xen/mail
# xm console mail

Have a play; to return to xen0, hit ctrl-].

16. Configure xen to start up the domain automatically
    --------------------------------------------------

# ln -s /etc/init.d/xend /etc/rc2.d/S20xen
# ln -s /etc/init.d/xendomains /etc/rc2.d/S21xendomains
# mkdir -p /etc/xen/auto
# mv /etc/xen/mail /etc/xen/auto/
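
After the next reboot you can sanity-check that both Domain-0 and the
mail domain have come up:

# xm list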

That's it! :) Enjoy your fresh new server.

