Re: [Xen-devel] [PATCH] x86/MCE: Implement clearbank callback for AMD
>>> On 25.10.12 at 11:29, Christoph Egger <Christoph.Egger@xxxxxxx> wrote:
> On 10/25/12 10:55, Jan Beulich wrote:
>>>>> On 25.10.12 at 10:18, Christoph Egger <Christoph.Egger@xxxxxxx> wrote:
>>> On 10/24/12 14:18, Jan Beulich wrote:
>>>>>>> On 12.10.12 at 16:46, Christoph Egger <Christoph.Egger@xxxxxxx> wrote:
>>>>> +static int k8_need_clearbank_scan(enum mca_source who, uint64_t status)
>>>>> +{
>>>>> +    switch (who) {
>>>>> +    case MCA_MCE_SCAN:
>>>>> +    case MCA_MCE_HANDLER:
>>>>> +        break;
>>>>> +    default:
>>>>> +        return 1;
>>>>> +    }
>>>>> +
>>>>> +    /* For a fatal error, the bank shouldn't be cleared, so that the
>>>>> +     * sticky bank has a chance to be handled after reboot by polling.
>>>>> +     */
>>>>> +    if ( (status & MCi_STATUS_UC) && (status & MCi_STATUS_PCC) )
>>>>> +        return 0;
>>>>> +    /* Spurious errors need the bank cleared. */
>>>>> +    if ( !(status & MCi_STATUS_OVER)
>>>>> +         && (status & MCi_STATUS_UC) && !(status & MCi_STATUS_EN) )
>>>>> +        return 1;
>>>>> +
>>>>> +    return 1;
>>>>> }
>>>>
>>>> So what's the purpose of first conditionally returning 1, and then
>>>> also doing so unconditionally? Do you anticipate inserting code between
>>>> the two parts in the very near future? Otherwise I'd drop the
>>>> whole if() construct.
>>>
>>> This function is derived from intel_need_clearbank_scan().
>>> I just took over the relevant parts for AMD.
>>
>> But that would suggest that the final return be "return 0" rather
>> than "return 1".
>>
>> Further, the Intel code does no extra checking for the
>> MCA_MCE_HANDLER case, so I'd like you to confirm that this is
>> indeed to be different for your CPUs.
>
> I just noticed that MCA_MCE_HANDLER is not used at all and can be
> removed entirely. Will do.

But how about the "return"-related question above?

Jan
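For reference, the question Jan leaves open is whether the spurious-error check should be kept with the final return changed to "return 0" (matching the structure of intel_need_clearbank_scan(), which the function was derived from), or dropped entirely with the final return left at 1. A minimal sketch of the first reading, with the unused MCA_MCE_HANDLER case already removed as Christoph proposes; the identifiers come from the quoted patch, and this is an illustration of the suggested structure, not the committed code:

    static int k8_need_clearbank_scan(enum mca_source who, uint64_t status)
    {
        /* Only the periodic-scan path needs a decision; all other
         * callers get the bank cleared unconditionally. */
        if ( who != MCA_MCE_SCAN )
            return 1;

        /* For a fatal error, leave the bank uncleared so the sticky
         * bank can still be found by polling after the reboot. */
        if ( (status & MCi_STATUS_UC) && (status & MCi_STATUS_PCC) )
            return 0;

        /* A spurious entry (uncorrected, but neither enabled nor
         * overflowed) does need its bank cleared. */
        if ( !(status & MCi_STATUS_OVER)
             && (status & MCi_STATUS_UC) && !(status & MCi_STATUS_EN) )
            return 1;

        return 0;
    }

With the final "return 0", the spurious-error check becomes meaningful; whether that is the intended semantics for AMD CPUs is exactly what Jan asks to have confirmed.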