Re: [Xen-users] Re: Re: Redundant server setup
Have you got any hints for others to set up the nagios server? And how about a CFS<>NFS bridge? Why is that necessary?

----- Original Message -----
From: "Paul M." <paul@xxxxxxxxxx>
To: "Per Andreas Buer" <per.buer@xxxxxxxxx>
Cc: "Matthew Wild" <M.Wild@xxxxxxxx>; <xen-users@xxxxxxxxxxxxxxxxxxx>
Sent: Tuesday, May 23, 2006 6:21 PM
Subject: Re: [Xen-users] Re: Re: Redundant server setup

On 5/22/06, Per Andreas Buer <per.buer@xxxxxxxxx> wrote:
> On 12. mai. 2006, at 15:31, Matthew Wild wrote:
>
> > But as far as I can see, DRBD only provides twin machine shared
> > storage. I have a few servers I want to use as dom0s and since they
> > are generally simple 1U boxes I want them as clean from local storage
> > as possible. The NAS box is bigger and generally better protected with
> > redundant PSU, hardware RAID with hotswap spares etc. If I could pair
> > two machines like that and then offer storage to the dom0s from that
> > redundant pair that would be better.
>
> Buy two servers, use DRBD between them and share the storage with
> gnbd. gnbd seems to be quite stable and easy to manage. Besides - it
> does not exhibit any of the strange issues people see with iSCSI when
> under load.

If you have a few machines to devote to file storage, then a cluster file
system is the best way to go. Having dom0 on each of the machines provide a
CFS<-->NFS bridge for the domUs, and a Nagios server running to monitor
services/domUs and restart them on other machines if needed, would give you
a system where the network is the only single point of failure. And that
could be taken care of too....

-Paul

_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users
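To make Per's DRBD+gnbd suggestion concrete, here is a rough sketch of two storage boxes mirroring one volume with DRBD and exporting it with gnbd. Hostnames, devices, addresses and the export name are all placeholders; adjust them to your own setup.

    # /etc/drbd.conf on both storage boxes
    resource xenstore {
      protocol C;                    # synchronous replication
      on store1 {
        device    /dev/drbd0;
        disk      /dev/sda3;         # backing partition
        address   10.0.0.1:7788;
        meta-disk internal;
      }
      on store2 {
        device    /dev/drbd0;
        disk      /dev/sda3;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }

    # on the current DRBD primary: start the gnbd server and export the device
    gnbd_serv
    gnbd_export -d /dev/drbd0 -e xenstore

    # on each dom0: import it as a local block device
    gnbd_import -i store1

Failing the DRBD primary role (and the gnbd export with it) over to the other box is normally left to heartbeat or a similar cluster manager; that part is not shown here.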
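On the CFS<-->NFS bridge question: the point is that only the dom0s join the cluster and mount the cluster file system (GFS, OCFS2, ...) directly, while the domUs just see plain NFS from their local dom0, which keeps cluster membership and the lock manager out of every guest. A rough sketch, with paths and addresses invented for the example:

    # dom0: mount the cluster file system sitting on the shared storage
    mount -t gfs /dev/gnbd/xenstore /cluster

    # dom0: /etc/exports: re-export it to the domUs; a fixed fsid= is
    # commonly used so every dom0 presents the same export identity
    /cluster  192.168.1.0/24(rw,sync,no_root_squash,fsid=1)

    # domU: /etc/fstab: mount from the dom0 it is running on
    dom0-gw:/cluster  /data  nfs  defaults  0  0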
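For the Nagios side, one common approach is to hang an event handler off the check that watches each domU, so that when a check goes hard-critical Nagios starts that domU on a spare dom0. A rough sketch; the object names, paths and the restart-domu helper are all invented for illustration, and the domU config files have to live on the shared storage so any dom0 can boot them:

    # nagios service and command definitions
    define service {
        use                  generic-service
        host_name            domu-web1
        service_description  PING
        check_command        check_ping!500.0,20%!1000.0,60%
        event_handler        restart-domu!domu-web1!dom0-spare
    }

    define command {
        command_name  restart-domu
        command_line  /usr/local/nagios/libexec/eventhandlers/restart-domu $SERVICESTATE$ $SERVICESTATETYPE$ $ARG1$ $ARG2$
    }

    #!/bin/sh
    # /usr/local/nagios/libexec/eventhandlers/restart-domu
    STATE=$1; TYPE=$2; DOMU=$3; SPARE=$4
    if [ "$STATE" = "CRITICAL" ] && [ "$TYPE" = "HARD" ]; then
        # act only once the failure is confirmed as a hard state
        ssh root@"$SPARE" xm create "/etc/xen/$DOMU.cfg"
    fi

In practice the handler should also make sure the old instance is really gone before starting a second copy; two domUs writing to the same disk image would be worse than the outage itself.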