Re: [Xen-devel] [PATCH 05/10] kexec: extend hypercall with improved load/unload ops
>>> On 25.06.13 at 16:30, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
> On 25/06/13 09:31, Jan Beulich wrote:
>>>>> On 24.06.13 at 19:42, David Vrabel <david.vrabel@xxxxxxxxxx> wrote:
>>>
>>> +static int init_transition_pgtable(struct kexec_image *image, l4_pgentry_t *l4)
> [...]
>>> + l3 = __map_domain_page(l3_page);
>>> + l3 += l3_table_offset(vaddr);
>>> + if ( !(l3e_get_flags(*l3) & _PAGE_PRESENT) )
>>> + {
>>> + l2_page = kimage_alloc_control_page(image, 0);
>>> + if ( !l2_page )
>>> + goto out;
>>> + l3e_write(l3, l3e_from_page(l2_page, __PAGE_HYPERVISOR));
>>> + }
>>> + else
>>> + l2_page = l3e_get_page(*l3);
>>
>> Afaict you're done using "l3" here, so you should unmap it in order
>> to reduce the pressure on the domain page mapping resources.
>
> The unmaps are grouped at the end to make the error paths simpler and I
> would prefer to keep it like this. This is only using 4 entries. Are
> we really that short?
4 entries are fine, as long as the calling code doesn't also (now or
in the future) want to keep mappings live across this call.
>>> +static int build_reloc_page_table(struct kexec_image *image)
>>> +{
>>> + struct page_info *l4_page;
>>> + l4_pgentry_t *l4;
>>> + int result;
>>> +
>>> + l4_page = kimage_alloc_control_page(image, 0);
>>> + if ( !l4_page )
>>> + return -ENOMEM;
>>> + l4 = __map_domain_page(l4_page);
>>> +
>>> + result = init_level4_page(image, l4, 0, max_page << PAGE_SHIFT);
>>
>> What about holes in the physical address space - not just the
>> MMIO hole below 4Gb is a problem here, but also discontiguous
>> physical memory.
>
> I don't see a problem with creating mappings for non-RAM regions.
You absolutely must not map regions you don't know anything about
with the WB attribute, or else side effects of prefetches can create
very hard-to-debug issues.
>>> --- /dev/null
>>> +++ b/xen/arch/x86/x86_64/kexec_reloc.S
> [...]
>> And just 8 here.
>
> I seem to recall reading that some processors needed 16 byte alignment
> for the GDT. I may be misremembering or this was for an older processor
> that Xen no longer supports.
I'm unaware of such a requirement, and e.g. trampoline_gdt is
currently not even 8-byte aligned. But if you can point to
documentation saying so, I certainly won't object to playing by
their rules.
>>> +compat_mode_gdt:
>>> + .quad 0x0000000000000000 /* null */
>>> + .quad 0x00cf92000000ffff /* 0x0008 ring 0 data */
>>> + .quad 0x00cf9a000000ffff /* 0x0010 ring 0 code, compatibility */
> [...]
>>> + /*
>>> + * 16 words of stack are more than enough.
>>> + */
>>> + .fill 16,8,0
>>> +reloc_stack:
>>
>> And now you don't care for the stack being mis-aligned?
>
I find review comments phrased as a question, like this one, rather
ambiguous. I guess I don't care? But now I'm not sure whether I
should.
I was just puzzled by the earlier over-aligning and the complete
lack of alignment here.
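If alignment were wanted here too, a GAS fragment along these lines would do (illustrative, not part of the patch; the 64-bit ABI expects 16-byte stack alignment at function entry):

```asm
        /* Illustrative only: align the relocation stack so %rsp
         * starts 16-byte aligned, as the 64-bit ABI expects. */
        .align 16
        .fill 16,8,0
reloc_stack:
```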
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel