
Re: [Xen-devel] QEMU commit 04bf2526ce breaks use of xen-mapcache




----- Original Message -----
> From: "Stefano Stabellini" <sstabellini@xxxxxxxxxx>
> To: "Paolo Bonzini" <pbonzini@xxxxxxxxxx>
> Cc: "Anthony PERARD" <anthony.perard@xxxxxxxxxx>, "Stefano Stabellini" 
> <sstabellini@xxxxxxxxxx>,
> xen-devel@xxxxxxxxxxxxx, qemu-devel@xxxxxxxxxx
> Sent: Tuesday, July 25, 2017 8:08:21 PM
> Subject: Re: QEMU commit 04bf2526ce breaks use of xen-mapcache
> 
> On Tue, 25 Jul 2017, Paolo Bonzini wrote:
> > > Hi,
> > > 
> > > Commit 04bf2526ce (exec: use qemu_ram_ptr_length to access guest ram)
> > > started using qemu_ram_ptr_length() instead of qemu_map_ram_ptr().
> > > That results in calling xen_map_cache() with lock=true, but the
> > > mapping is never invalidated.
> > > So QEMU uses more and more RAM until it stops working one way or
> > > another (it crashes if the host has little RAM, otherwise it stops
> > > emulating without crashing).
> > > 
> > > I don't know whether calling xen_invalidate_map_cache_entry() in
> > > address_space_read_continue() and address_space_write_continue() is
> > > the right answer.  Is there something better to do?
> > 
> > I think it's correct for dma to be true... maybe add a lock argument to
> > qemu_ram_ptr_length, so that address_space_{read,write}_continue can
> > pass 0 and everyone else passes 1?
> 
> I think that is a great suggestion. That way, the difference between
> locked mappings and unlocked mappings would be explicit, rather than
> relying on callers to use qemu_map_ram_ptr for unlocked mappings and
> qemu_ram_ptr_length for locked mappings. And there aren't that many
> callers of qemu_ram_ptr_length, so adding a parameter wouldn't be an
> issue.

Thanks. However, after re-reading xen-mapcache.c, dma needs to be false
for unlocked mappings: the dma flag is only recorded for locked
mappings (it is stored in the reverse-map entry that xen_map_cache()
creates when lock is set), so it is meaningless for unlocked ones.
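
The relevant logic, paraphrased and simplified from
xen_map_cache_unlocked() in xen-mapcache.c (not a verbatim quote):

    /* A reverse-map entry, which is what remembers the dma flag,
     * is only created for locked mappings; an unlocked mapping
     * leaves no record that could later be invalidated. */
    if (lock) {
        MapCacheRev *reventry = g_malloc0(sizeof(MapCacheRev));
        entry->lock++;
        reventry->dma = dma;
        reventry->vaddr_req = mapcache->last_entry->vaddr_base
            + address_offset;
        reventry->paddr_index = mapcache->last_entry->paddr_index;
        reventry->size = entry->size;
        QTAILQ_INSERT_HEAD(&mapcache->locked_entries, reventry, next);
    }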
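
For concreteness, a minimal sketch of the change we are discussing, in
exec.c (the parameter plumbing is my assumption, not a committed patch;
only xen_map_cache()'s lock/dma arguments are taken from the tree):

    /* Sketch: thread a "lock" flag from the callers of
     * qemu_ram_ptr_length() down to the Xen mapcache.  Passing
     * "lock" for the dma argument as well means that unlocked
     * mappings automatically get dma=false. */
    static void *qemu_ram_ptr_length(RAMBlock *block, ram_addr_t addr,
                                     hwaddr *size, bool lock)
    {
        if (*size == 0) {
            return NULL;
        }

        *size = MIN(*size, block->max_length - addr);

        if (xen_enabled() && block->host == NULL) {
            /* For the main guest RAM block (offset 0), map only the
             * requested area instead of the whole thing; other,
             * smaller blocks are mapped in full and cached. */
            if (block->offset == 0) {
                return xen_map_cache(addr, *size, lock, lock /* dma */);
            }
            block->host = xen_map_cache(block->offset, block->max_length,
                                        1, lock /* dma */);
        }

        return ramblock_ptr(block, addr);
    }

address_space_{read,write}_continue would then pass lock=false, so
their transient mappings are neither locked nor flagged as DMA, while
address_space_map-style users keep lock=true.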

Paolo

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

