
Re: [Xen-devel] Ongoing/future speculative mitigation work



>>> On 26.10.18 at 13:43, <george.dunlap@xxxxxxxxxx> wrote:
> On 10/26/2018 12:33 PM, Jan Beulich wrote:
>>>>> On 26.10.18 at 13:24, <george.dunlap@xxxxxxxxxx> wrote:
>>> On 10/26/2018 12:20 PM, Jan Beulich wrote:
>>>>>>> On 26.10.18 at 12:51, <george.dunlap@xxxxxxxxxx> wrote:
>>>>> The basic solution involves having a xenheap virtual address mapping
>>>>> area not tied to the physical layout of the memory.  domheap and xenheap
>>>>> memory would have to come from the same pool, but xenheap would need to
>>>>> be mapped into the xenheap virtual memory region before being returned.
>>>>
>>>> Wouldn't this most easily be done by making alloc_xenheap_pages()
>>>> call alloc_domheap_pages() and then vmap() the result? Of course
>>>> we may need to grow the vmap area in that case.
>>>
>>> I couldn't answer that question without a lot more digging. :-)  I'd
>>> always assumed that the original reason for having the xenheap
>>> direct-mapped on 32-bit was something to do with early-boot
>>> allocation; if there is something tricky there, we'd need to
>>> special-case the early-boot allocation somehow.
>> 
>> The reason for the split on 32-bit was simply the lack of sufficient
>> VA space.
> 
> That tells me why the domheap was *not* direct-mapped; but it doesn't
> tell me why the xenheap *was*.  Was it perhaps just something that
> evolved from what we inherited from Linux?

Presumably, but I'm really the wrong one to ask there. When I joined,
things had long been that way.

Jan
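
For concreteness, here is a minimal sketch of the vmap()-based
alloc_xenheap_pages()/free_xenheap_pages() pair discussed above. It
assumes Xen's existing alloc_domheap_pages()/free_domheap_pages() and
vmap()/vunmap() (the latter taking an array of MFNs), plus a
vmap_to_page()-style reverse lookup; if no such lookup is available,
the allocation's page_info pointer would have to be stashed alongside
the mapping instead. Error handling and the possible early-boot
special case are elided, and the on-stack MFN array is for
illustration only:

    /* Sketch only, not the actual implementation. */
    void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
    {
        struct page_info *pg = alloc_domheap_pages(NULL, order, memflags);
        mfn_t mfns[1u << order]; /* illustration only: a real version
                                    wouldn't put this on the stack */
        unsigned int i;
        void *va;

        if ( !pg )
            return NULL;

        for ( i = 0; i < (1u << order); i++ )
            mfns[i] = mfn_add(page_to_mfn(pg), i);

        /* Map into the vmap area rather than relying on a directmap. */
        va = vmap(mfns, 1u << order);
        if ( !va )
            free_domheap_pages(pg, order);

        return va;
    }

    void free_xenheap_pages(void *v, unsigned int order)
    {
        struct page_info *pg;

        if ( !v )
            return;

        pg = vmap_to_page(v); /* look up before the mapping goes away */
        vunmap(v);
        free_domheap_pages(pg, order);
    }

As Jan notes, the vmap area would also need to grow, since every live
xenheap allocation would then consume vmap space rather than directmap
space.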


