
Re: [Xen-users] How do *you* go about sizing dom0 RAM?



I use Alpine Linux as dom0 with 256M of RAM.
According to the collectd graphs (which actually run from dom0 itself) it only consumes about 128M of that, but tuning it down to 128M prevents it from booting, sooo that's how *I* did it :-p
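For reference, dom0's memory is normally pinned with the `dom0_mem` Xen boot parameter rather than left to the balloon driver. A minimal sketch of what that might look like on a GRUB-based system (the 256M figure, the GRUB paths, and the variable name are illustrative assumptions, not details from this thread):

```shell
# /etc/default/grub -- assumed GRUB-based setup; pin dom0 to a fixed
# 256 MB and cap it there so ballooning can't shrink or grow it:
GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=256M,max:256M"

# Then regenerate the boot configuration and reboot, e.g.:
#   update-grub                                   # Debian/Ubuntu
#   grub2-mkconfig -o /boot/grub2/grub.cfg        # Fedora/RHEL
```

Other bootloaders (extlinux, as commonly used on Alpine) take the same `dom0_mem=` option on the Xen line of their own config file.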

On Sat, May 23, 2015 at 7:17 AM, Miguel Clara <miguelmclara@xxxxxxxxx> wrote:
On Fri, May 22, 2015 at 8:46 PM, Nuno Magalhães <nunomagalhaes@xxxxxxxxx> wrote:
> On Fri, May 22, 2015 at 12:42 PM, Andy Smith <andy@xxxxxxxxxxxxxx> wrote:
>> Hello,
>>
>> I'm just going through the process of a hardware refresh and I was
>> wondering if it might be time to re-evaluate how much memory is
>> dedicated to dom0. That got me wondering how other people approach
>> this.
>
>
> I'm no expert whatsoever, just a desktop hobbyist, but I like to keep
> dom0's RAM to a bare minimum and run a minimalist distro with few
> services (ssh via LAN... not much else). Less to hit the fans.
>

Same here; it depends on the dom0. I have a laptop running Xen (for
testing) where I assigned 2G out of 8GB, but for servers, as little as
possible.

I have a NetBSD dom0 (home server) with 128M assigned, or actually
256M now, because I tend to do some testing with HVM guests and I
decided to give it a bit more (it was doing fine with 128M though, and
I've seen reports of people using 64M with only PV guests).

I also have a couple of Linux hosts with 512M or even 1024M, but they
run one or more HVM guests.

> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxx
> http://lists.xen.org/xen-users
