
Re: [Xen-API] xapi VDI.getPhysicalUtilisation not equals .vhd file size on disk in NFS



Hi NDKK,

I have a problem.
We use XCP 1.1 and added an NFS SR to it. The NFS volume is exported by an
NFS server running CentOS 5.6.
After provisioning many VMs on this SR, we found that the sum of all VDIs'
physical utilisation (as reported by xapi) in this SR is greater than the
SR's physical utilisation.

Then I SSHed to the NFS server to check the VHD file size.
With the command: ls -lh VHD_FILE_NAME
it shows:

-rw-r--r-- 1 root root 4.9G Aug  8 02:38 e77405a8-cb42-449c-b20e-912fe2a46ac2.vhd

With the command: du -sh VHD_FILE_NAME
it shows:

3.0G  e77405a8-cb42-449c-b20e-912fe2a46ac2.vhd

Why are they different?

I think it has something to do with sparse files (http://en.wikipedia.org/wiki/Sparse_file). "ls -l" reports the file's apparent (logical) size, while "du" reports the blocks actually allocated on disk, which is smaller for a sparse file.
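For what it's worth, here is a quick way to see the effect from Python; the path below is just a throwaway placeholder, and this is only a minimal sketch:

import os

# Create a 100 MiB sparse file: seeking past the end leaves a "hole",
# so only the final block is actually allocated on disk.
path = "/tmp/sparse_demo.img"  # placeholder path
with open(path, "wb") as f:
    f.seek(100 * 1024 * 1024 - 1)
    f.write(b"\0")

st = os.stat(path)
# st_size is the apparent size (what "ls -l" shows);
# st_blocks counts allocated 512-byte blocks (what "du" shows).
print("apparent size : %d bytes" % st.st_size)
print("allocated size: %d bytes" % (st.st_blocks * 512))

os.unlink(path)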

The sum of all VDIs' physical utilisation obtained from xapi's
VDI.getPhysicalUtilisation matches the sum of "ls -lh VHD_FILE_NAME",
while the SR's physical utilisation obtained from xapi's
SR.getPhysicalUtilisation matches the sum of "du -sh VHD_FILE_NAME".

I don't really know which one should be displayed... Up to now we have only been using LVM-based VHDs here. With sparse files one can overcommit physical disk space, I guess. I don't know what happens when an SR is really full and a VDI asks for an extra block.
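If I had to guess, the failure mode would look like the following; this is a speculative sketch, not something I have tested on XCP:

import errno

# Assumption: filling a previously unallocated "hole" in a sparse VHD on a
# full filesystem fails with ENOSPC, which the guest would presumably see
# as an I/O error on that block.
def fill_hole(path, offset, length):
    try:
        with open(path, "r+b") as f:
            f.seek(offset)
            f.write(b"\0" * length)
            f.flush()
    except OSError as e:
        if e.errno == errno.ENOSPC:
            print("no space left: the sparse block could not be allocated")
        else:
            raise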

Cheers,

Denis

Is this a XAPI bug?







--
Denis Cardon
Tranquil IT Systems
44 bvd des pas enchantés
44230 Saint Sébastien sur Loire
tel : +33 (0) 2.40.97.57.57
http://www.tranquil-it-systems.fr


_______________________________________________
Xen-api mailing list
Xen-api@xxxxxxxxxxxxx
http://lists.xen.org/cgi-bin/mailman/listinfo/xen-api


 

