
Re: [Xen-devel] [PATCH v2 02/19] xen/arm: Remove vwfi while setting HCR_EL2 in init_traps



On 2017/3/31 6:29, Stefano Stabellini wrote:
> On Thu, 30 Mar 2017, Julien Grall wrote:
>> Hi Wei,
>>
>> On 30/03/17 10:13, Wei Chen wrote:
>>> We will set HCR_EL2 for each domain individually at the place where each
>>> domain is created. vwfi will affect the behavior of VM traps. Initializing
>>> HCR_EL2 in init_traps is a generic setting for AT translations while
>>> creating dom0.
>>
>> This statement looks wrong. Any AT S1S2 translation (i.e. on behalf of the
>> guest) should be done after a call to p2m_restore_state, which restores
>> HCR_EL2, SCTLR_EL1 and VTTBR_EL2. If that is not the case, then there is an
>> error.
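
For reference, the ordering described above could look roughly like the sketch
below. It is only an illustration, not Xen's actual code: guest_va_to_ipa_par()
is a made-up name, and the Xen helpers p2m_restore_state(), READ_SYSREG64() and
isb() are assumed to be available.

    /*
     * Illustrative only: any AT stage 1+2 lookup done on behalf of the guest
     * must run with the guest's HCR_EL2/SCTLR_EL1/VTTBR_EL2 in place, i.e.
     * after p2m_restore_state() has been called for that vCPU.
     */
    static uint64_t guest_va_to_ipa_par(struct vcpu *v, vaddr_t va)
    {
        uint64_t par;

        p2m_restore_state(v);            /* loads HCR_EL2, SCTLR_EL1, VTTBR_EL2 */

        asm volatile("at s12e1r, %0" : : "r" (va));   /* stage 1+2, EL1, read */
        isb();
        par = READ_SYSREG64(PAR_EL1);    /* walk result, or fault information */

        return par;
    }
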
>>
>>> After dom0 has been created, HCR_EL2 will use the value saved in dom0's
>>> context, so checking vwfi in init_traps is pointless.
>>>
>>> Signed-off-by: Wei Chen <Wei.Chen@xxxxxxx>
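
As a rough illustration of "use the value saved in dom0's context": the idea is
that each domain gets its own HCR_EL2 value at creation time and that value is
written back on every context switch. The sketch below uses hypothetical names
(get_domain_hcr_flags(), vcpu_hcr_init(), ctxt_switch_to_hcr(), v->arch.hcr_el2,
vwfi_native); it is not necessarily what patch #4 ends up doing.

    extern bool vwfi_native;    /* hypothetical: set by a "vwfi=native" handler */

    /* Hypothetical sketch: per-vCPU HCR_EL2 kept in the arch context. */
    static unsigned long get_domain_hcr_flags(void)
    {
        unsigned long hcr = HCR_VM | HCR_AMO | HCR_IMO | HCR_FMO |
                            HCR_PTW | HCR_BSU_INNER | HCR_FB | HCR_TSC;

        if ( !vwfi_native )          /* "vwfi=native" skips the WFI/WFE traps */
            hcr |= HCR_TWI | HCR_TWE;

        return hcr;
    }

    void vcpu_hcr_init(struct vcpu *v)       /* when the domain is created */
    {
        v->arch.hcr_el2 = get_domain_hcr_flags();
    }

    void ctxt_switch_to_hcr(struct vcpu *v)  /* on the context-switch path */
    {
        WRITE_SYSREG(v->arch.hcr_el2, HCR_EL2);
    }
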
>>
>> This is a NACK from me. We don't remove a feature, even temporarily, without
>> a strong reason. It makes it more difficult to track the history of the
>> feature, and it invites forgetting to restore it correctly later on, or
>> removing the feature again if we ever decide to revert the patch which adds
>> it back (see patch #4).
>>
>> I would prefer to see patches #2 and #3 squashed into #4; the resulting patch
>> will not be that big (50 lines) and avoids dropping a feature temporarily.
>
> Yes, please.

OK, I understand. I will squash #2 and #3 into #4 and use the helper.

>
>
>> But I am not convinced about your reasons.
>>> ---
>>> I have tried to remove the HCR_EL2 setting from init_traps, but then Xen
>>> hangs at the point of creating domain0. The console hot keys still work, so
>>> Xen is alive, not panicking.
>>
>> With the limited description you provided, it is hard to know what's going on.
>> In the future, please try to provide meaningful details (platform used,
>> debugging you have done...). Anyway, I have done some debugging and I don't
>> come to the same conclusion as you.
>>
>> I tried on different boards, and it gets stuck when initializing stage-2 page
>> table configuration on each CPU (see setup_virt_paging). The secondary CPUs
>> don't receive the SGI.
>>
>> Reading the ARM ARM a bit more (D7.2.33 in ARM DDI 0487A.k_iss10775), HCR_EL2
>> is UNKNOWN at reset. Whilst I already knew that, I would have expected it to
>> have no impact on EL2 (at least in the non-VHE case). However, the value of
>> the {A,I,F}MO bits will affect the routing of physical IRQs to Xen.
>>
>> I have only gone quickly through the spec, so there may be other necessary
>> bits. It might be good to keep the initialization here until someone sits in
>> front of the spec for a few hours and checks everything.
>>
>> So in this case I would prefer to keep the helper, to avoid declaring the
>> same flags twice. Stefano, any opinions?
>
> I don't particularly care whether we keep the helper or not. From what
> you say we need to set HCR_EL2 in init_traps, but given that it's only
> temporary, we don't necessarily need to write the same set of flags: for
> example, HCR_TWI|HCR_TWE are useless at this stage. A minimal set would
> suffice. Either way, it's fine by me.
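
A hedged sketch of what such a minimal set in init_traps() could look like,
based on the points above: the {A,I,F}MO bits so that physical SError/IRQ/FIQ
are routed to Xen at EL2, and no WFI/WFE traps since those are a per-domain
decision. This is an illustration only, not a claim about which exact bits the
spec requires.

    void init_traps(void)
    {
        /*
         * HCR_EL2 is UNKNOWN out of reset, so never rely on its boot value.
         * A minimal setting here only needs the bits that matter before any
         * domain runs; guest-facing traps such as HCR_TWI/HCR_TWE can be left
         * to the per-domain value written later.
         */
        WRITE_SYSREG(HCR_AMO | HCR_IMO | HCR_FMO, HCR_EL2);
        isb();

        /* ... the rest of the trap setup (HSTR_EL2, CPTR_EL2, MDCR_EL2) ... */
    }
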
>
>
>> Also, whilst I was debugging the problem I noticed that the vwfi option does
>> not work properly on CPU0. Indeed, the command line has not yet been parsed
>> when init_traps is called on CPU0. This should be fixed by patch #4 in this
>> series, but I don't think we want to backport it.
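
An illustration of that ordering issue, simplified and with hypothetical names:
boot_cpu_init_sketch() is not Xen's real start_xen() path, and vwfi_native is a
made-up variable name; the sketch only shows why a vwfi check inside
init_traps() cannot see the option on CPU0.

    static bool vwfi_native;     /* hypothetical: set by a "vwfi=native" handler */

    static void boot_cpu_init_sketch(const char *cmdline)
    {
        init_traps();            /* CPU0: runs before the command line is parsed,
                                  * so any vwfi check in here sees the default   */
        cmdline_parse(cmdline);  /* "vwfi=..." only takes effect from this point;
                                  * secondary CPUs are brought up later and do
                                  * see the parsed value                         */
    }
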
>
> No, we don't want to backport patch #4.
>
>
>> Stefano, would you be up to write a patch for this?
>
> I am thinking that the patch should only fix the backports, given that
> unstable and 4.9 will be fixed by this series (and given that it's pretty
> ugly). I'll send it separately shortly.
>


-- 
Regards,
Wei Chen

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

