Re: [Xen-devel] [PATCH v2] libxl: Implement basic video device selection
On 04.04.2014 12:34, Ian Campbell wrote:
> On Fri, 2014-04-04 at 12:31 +0200, Stefan Bader wrote:
>> On 04.04.2014 11:48, Ian Campbell wrote:
>>> On Fri, 2014-04-04 at 11:36 +0200, Stefan Bader wrote:
>>>> +    /*
>>>> +     * Take the first defined video device (graphics card) to display
>>>> +     * on the first graphics device (display).
>>>> +     * Right now only type and vram info is used and anything beside
>>>> +     * type xen and vga is mapped to cirrus.
>>>> +     */
>>>> +    if (def->nvideos) {
>>>> +        unsigned int min_vram = 8 * 1024;
>>>> +
>>>> +        switch (def->videos[0]->type) {
>>>> +        case VIR_DOMAIN_VIDEO_TYPE_VGA:
>>>> +        case VIR_DOMAIN_VIDEO_TYPE_XEN:
>>>> +            b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
>>>> +            /*
>>>> +             * Libxl enforces a minimal VRAM size of 8M when using
>>>> +             * LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL or
>>>> +             * 16M for LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN.
>>>> +             * Avoid build failures and go with the minimum if less
>>>> +             * is specified.
>>>
>>> Build failures? Do you mean "domain build" rather than "libvirt build"?
>>>
>>> I'm not sure about this fixing up the GIGO from the user side, but
>>> that's a libvirt policy decision I suppose? If it were me I would just
>>> let libxl provide the error and expect people to fix their domain config
>>> rather than silently giving them something other than what they asked
>>> for. What if increasing the VRAM causes a cascading failure due e.g. to
>>> lack of memory? That's going to be tricky to debug I think!
>>
>> In the end it is about starting a domain with such a config, which seems
>> to be what I would end up with in my testing with an admittedly older
>> version of virt-manager connecting to a libvirtd running an initial
>> version of this patch without that part.
>> The error seen on the front-end was something along the lines of "failed
>> to get enough memory to start the guest" (the libxl log on the other
>> side had the better error message). And the GUI always reduces the
>> memory below the minimum for both options (VGA and "Xen").
>> That is the reason I went for "meh, go for the minimum anyways".
>
> Does the libvirt protocol require the client to provide all the sizes
> etc with no provision for asking the server side to pick a sane default?

I think libvirt itself would be ok with a config that does not enforce a
certain amount of VRAM (libvirt devs, correct me if I am talking nonsense
here). But together with a virt-manager front-end that tries to be
"helpful" halfway, I get an incorrect setting which I cannot change from
that version of the GUI. Again, newer versions might be better; practically
I expect some crazy people (like me *cough*) to try this from an older
desktop release...

>
>> And btw, it is already confusing enough: with cirrus I get 9M by default,
>> which is passed on to qemu on the command line and then ignored by it,
>> and one gets 32M anyway...
>
> Lovely!
>
> Ian.
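For reference, a minimal sketch of the clamping that the quoted comment
describes, continuing the hunk that is cut off above. The identifiers it
reuses (def->videos, b_info->u.hvm.vga.kind, the VIR_DOMAIN_VIDEO_TYPE_*
and LIBXL_* constants) come from the quoted patch and the public libvirt
and libxl headers; selecting the minimum by checking
b_info->device_model_version is an assumption about how the 8M/16M cases
could be told apart, not necessarily what the final patch does:

    /*
     * Sketch only: clamp the requested VRAM to libxl's minimum for the
     * chosen device model, instead of letting the domain build fail.
     * Sizes are in KiB, matching libvirt's vram and libxl's video_memkb.
     */
    if (def->nvideos) {
        unsigned int min_vram = 8 * 1024;   /* qemu-xen-traditional minimum */

        switch (def->videos[0]->type) {
        case VIR_DOMAIN_VIDEO_TYPE_VGA:
        case VIR_DOMAIN_VIDEO_TYPE_XEN:
            b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
            /* Assumption: the device model has already been chosen here. */
            if (b_info->device_model_version ==
                LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN)
                min_vram = 16 * 1024;       /* qemu-xen (upstream) minimum */
            b_info->video_memkb = (def->videos[0]->vram < min_vram)
                                  ? min_vram : def->videos[0]->vram;
            break;
        default:
            /* Anything else is mapped to cirrus, as the patch comment says. */
            b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
            b_info->video_memkb = def->videos[0]->vram;
            break;
        }
    }

Whether to clamp silently like this or to let libxl report the error, as
Ian suggests, is the policy question the thread is about; the sketch only
shows the mechanics of the clamping option.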