
Re: [Xen-devel] [Qemu-devel] xen/disk: don't leak stack data via response ring



On Sun, 24 Sep 2017, Michael Tokarev wrote:
> 23.09.2017 19:05, Michael Tokarev wrote:
> > 28.06.2017 01:04, Stefano Stabellini wrote:
> >> Rather than constructing a local structure instance on the stack, fill
> >> the fields directly on the shared ring, just like other (Linux)
> >> backends do. Build on the fact that all response structure flavors are
> >> actually identical (aside from alignment and padding at the end).
> >>
> >> This is XSA-216.
> >>
> >> Reported by: Anthony Perard <anthony.perard@xxxxxxxxxx>
> >> Signed-off-by: Jan Beulich <jbeulich@xxxxxxxx>
> >> Signed-off-by: Stefano Stabellini <sstabellini@xxxxxxxxxx>
> >> Acked-by: Anthony PERARD <anthony.perard@xxxxxxxxxx>
> > 
> > Reportedly, after this patch, HVM DomUs running with qemu-system-i386
> > (note i386, not x86_64) are leaking memory and the host runs out of
> > memory rather fast.  See for example https://bugs.debian.org/871702
> 
> Looks like this is a false alarm; the problem is actually with
> 04bf2526ce87f21b32c9acba1c5518708c243ad0 (exec: use qemu_ram_ptr_length
> to access guest ram) applied without f5aa69bdc3418773f26747ca282c291519626ece
> (exec: Add lock parameter to qemu_ram_ptr_length).
> 
> I applied only 04bf2526ce87f to 2.8, without realizing that we also
> need f5aa69bdc3418.
> 
> Now, when I try to backport f5aa69bdc3418 to 2.8 (on top of 04bf2526ce87f),
> I run into some interesting logic unless I also apply 1ff7c5986a515d2d936eba0
> (xen/mapcache: store dma information in revmapcache entries for debugging):
> the arguments for xen_map_cache() in qemu_ram_ptr_length() in these two
> patches are quite fun.. :)
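To make the quoted XSA-216 description more concrete, here is a minimal sketch
with made-up names (not the actual QEMU xen_disk code) of the two patterns it
contrasts: building the response in a stack-local structure and copying the
whole thing, padding included, onto the shared ring, versus assigning the
fields directly in the guest-visible ring slot.

#include <stdint.h>
#include <string.h>

/*
 * Illustrative response layout; the real blkif response flavors differ in
 * alignment and tail padding, which is exactly what made the old code leak.
 */
struct demo_response {
    uint64_t id;
    uint8_t  operation;
    int16_t  status;
    /* the compiler adds padding here; its content is whatever was on the stack */
};

/*
 * Old pattern: build the response in a stack-local variable and copy the
 * whole structure, padding included, onto the guest-visible shared ring.
 */
static void push_response_leaky(volatile struct demo_response *ring_slot,
                                uint64_t id, int16_t status)
{
    struct demo_response rsp;     /* padding bytes start out as stale stack data */

    rsp.id = id;
    rsp.operation = 0;
    rsp.status = status;
    memcpy((void *)ring_slot, &rsp, sizeof(rsp));  /* stale bytes reach the guest */
}

/*
 * XSA-216 pattern: assign each field directly in the shared-ring slot, so no
 * uninitialized stack bytes are ever copied to guest-visible memory.
 */
static void push_response_fixed(volatile struct demo_response *ring_slot,
                                uint64_t id, int16_t status)
{
    ring_slot->id = id;
    ring_slot->operation = 0;
    ring_slot->status = status;
}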

Sorry about that. Let me know if you need any help with those backports.
Thank you!

- Stefano
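As a side note on the commits discussed above: judging from its title,
f5aa69bdc3418 adds a lock parameter to qemu_ram_ptr_length(), presumably so
that callers can request mappings that are not pinned in the Xen mapcache.
Below is a rough, hypothetical sketch (made-up names, not QEMU's actual
mapcache code) of why such a flag matters: if every ordinary guest-RAM access
asks for a pinned mapping and never releases it, cached entries accumulate and
memory usage grows, which is the kind of behaviour the Debian bug describes.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

struct demo_mapping {
    uint64_t addr;
    void *ptr;
    unsigned locks;                 /* entry stays pinned while non-zero */
    struct demo_mapping *next;
};

static struct demo_mapping *demo_cache;

/* Look up or create a mapping; lock=true pins it until a matching unlock. */
static void *demo_map(uint64_t addr, size_t size, bool lock)
{
    struct demo_mapping *m;

    for (m = demo_cache; m; m = m->next) {
        if (m->addr == addr) {
            break;
        }
    }
    if (!m) {
        m = calloc(1, sizeof(*m));
        m->addr = addr;
        m->ptr = malloc(size);      /* stand-in for actually mapping guest RAM */
        m->next = demo_cache;
        demo_cache = m;
    }
    if (lock) {
        m->locks++;                 /* never dropped unless the caller unlocks */
    }
    return m->ptr;
}

/*
 * If every read/write path calls demo_map(addr, size, true) and nothing ever
 * unlocks the entries, pinned mappings pile up and are never reclaimed; a
 * lock=false path avoids pinning for transient accesses.
 */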

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

