
Re: [PATCH V5 12/12] net: netvsc: Add Isolation VM support for netvsc driver



On 9/16/2021 12:21 AM, Michael Kelley wrote:
> I think you are proposing this approach to allocating memory for the send
> and receive buffers so that you can avoid having two virtual mappings for
> the memory, per comments from Christoph Hellwig.  But overall, the approach
> seems a bit complex and I wonder if it is worth it.  If allocating large
> contiguous chunks of physical memory is successful, then there is some
> memory savings in that the data structures needed to keep track of the
> physical pages are smaller than the equivalent page tables might be.  But
> if you have to revert to allocating individual pages, then the memory
> savings is reduced.


Yes, this version follows the idea from Christoph in the previous discussion (https://lkml.org/lkml/2021/9/2/112). This patch shows the implementation so we can check whether this is the right direction.
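For reference, here is a rough sketch of the fallback allocation used here (a hypothetical helper, not the patch itself): try a high-order contiguous chunk first and step the order down when allocation fails, so the bookkeeping only grows when contiguous memory is unavailable.

#include <linux/gfp.h>
#include <linux/mm.h>

/* Sketch only: allocate the largest chunk available at or below *order. */
static struct page *netvsc_alloc_chunk(unsigned int *order)
{
	struct page *page;

	for (;;) {
		/* Don't warn or retry hard for the opportunistic high orders. */
		page = alloc_pages(GFP_KERNEL | __GFP_ZERO |
				   (*order ? __GFP_NOWARN | __GFP_NORETRY : 0),
				   *order);
		if (page)
			return page;
		if (!*order)
			return NULL;	/* even single pages failed */
		(*order)--;		/* fall back to a smaller chunk */
	}
}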

> Ultimately, the list of actual PFNs has to be kept somewhere.  Another approach
> would be to do the reverse of what hv_map_memory() from the v4 patch
> series does.  I.e., you could do virt_to_phys() on each virtual address that
> maps above VTOM, and subtract out the shared_gpa_boundary to get the
> list of actual PFNs that need to be freed.

virt_to_phys() doesn't work for virtual addresses returned by vmap()/vmap_pfn(), just as it doesn't work for addresses returned by vmalloc(). The PFNs above vTOM have no struct page backing, and vmap_pfn() populates them directly in the PTEs (please see vmap_pfn_apply()). So it's not easy to convert such a virtual address back to a physical address.
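To illustrate, this is roughly how the buffer is remapped above vTOM, along the lines of netvsc_remap_buf() from the v4 series (a simplified sketch; the hypothetical remap_buf_above_vtom() below uses PAGE_SIZE where the real code uses the Hyper-V page-size macros):

#include <linux/vmalloc.h>
#include <linux/slab.h>
#include <asm/mshyperv.h>

/* Sketch only: map an existing buffer a second time, above vTOM. */
static void *remap_buf_above_vtom(void *buf, unsigned long size)
{
	unsigned long *pfns;
	void *vaddr;
	int i;

	pfns = kcalloc(size / PAGE_SIZE, sizeof(*pfns), GFP_KERNEL);
	if (!pfns)
		return NULL;

	/*
	 * Each PFN is offset by shared_gpa_boundary, so it has no
	 * struct page behind it.  vmap_pfn() writes these PFNs straight
	 * into the PTEs (see vmap_pfn_apply() in mm/vmalloc.c), which is
	 * why virt_to_phys()/vmalloc_to_page() cannot recover them from
	 * the returned virtual address later.
	 */
	for (i = 0; i < size / PAGE_SIZE; i++)
		pfns[i] = virt_to_hvpfn(buf + i * PAGE_SIZE) +
			  (ms_hyperv.shared_gpa_boundary >> PAGE_SHIFT);

	vaddr = vmap_pfn(pfns, size / PAGE_SIZE, PAGE_KERNEL_IO);
	kfree(pfns);

	return vaddr;
}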

> This way you don't have two copies
> of the list of PFNs -- one with and one without the shared_gpa_boundary added.
> But it comes at the cost of additional code so that may not be a great idea.
>
> I think what you have here works, and I don't have a clearly better solution
> at the moment except perhaps to revert to the v4 solution and just have two
> virtual mappings.  I'll keep thinking about it.  Maybe Christoph has other
> thoughts.
