Re: [Xen-devel] [PATCH v4 05/18] x86/mem_sharing: don't try to unshare twice during page fault
On 16.01.2020 16:59, Tamas K Lengyel wrote:
> On Thu, Jan 16, 2020 at 7:55 AM Jan Beulich <jbeulich@xxxxxxxx> wrote:
>>
>> On 08.01.2020 18:14, Tamas K Lengyel wrote:
>>> @@ -1702,11 +1703,14 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>>>      struct domain *currd = curr->domain;
>>>      struct p2m_domain *p2m, *hostp2m;
>>>      int rc, fall_through = 0, paged = 0;
>>> -    int sharing_enomem = 0;
>>>      vm_event_request_t *req_ptr = NULL;
>>>      bool sync = false;
>>>      unsigned int page_order;
>>>
>>> +#ifdef CONFIG_MEM_SHARING
>>> +    bool sharing_enomem = false;
>>> +#endif
>>
>> To reduce #ifdef-ary, could you leave this alone (or convert to
>> bool in place, without #ifdef) and ...
>>
>>> @@ -1955,19 +1961,21 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>>>       */
>>>      if ( paged )
>>>          p2m_mem_paging_populate(currd, gfn);
>>> +
>>> +#ifdef CONFIG_MEM_SHARING
>>>      if ( sharing_enomem )
>>>      {
>>> -        int rv;
>>> -
>>> -        if ( (rv = mem_sharing_notify_enomem(currd, gfn, true)) < 0 )
>>> +        if ( !vm_event_check_ring(currd->vm_event_share) )
>>>          {
>>> -            gdprintk(XENLOG_ERR, "Domain %hu attempt to unshare "
>>> -                     "gfn %lx, ENOMEM and no helper (rc %d)\n",
>>> -                     currd->domain_id, gfn, rv);
>>> +            gprintk(XENLOG_ERR, "Domain %pd attempt to unshare "
>>> +                    "gfn %lx, ENOMEM and no helper\n",
>>> +                    currd, gfn);
>>>              /* Crash the domain */
>>>              rc = 0;
>>>          }
>>>      }
>>> +#endif
>>
>> ... move the #ifdef inside the braces here? With this
>> Acked-by: Jan Beulich <jbeulich@xxxxxxxx>
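The shape being suggested is roughly the following sketch (an illustration of the review comment as understood here, not necessarily the code as finally committed): the declaration stays unconditional, and only the mem-sharing handling inside the braces is compiled conditionally.

    bool sharing_enomem = false;    /* declared unconditionally, no #ifdef */

    /* ... set earlier when an unshare attempt fails with ENOMEM ... */

    if ( sharing_enomem )
    {
#ifdef CONFIG_MEM_SHARING
        if ( !vm_event_check_ring(currd->vm_event_share) )
        {
            gprintk(XENLOG_ERR, "Domain %pd attempt to unshare "
                    "gfn %lx, ENOMEM and no helper\n",
                    currd, gfn);
            /* Crash the domain */
            rc = 0;
        }
#endif
    }

With CONFIG_MEM_SHARING disabled the body compiles to nothing, which is what the exchange below about the compiler removing the variable refers to.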
>
> SGTM, I assume you are counting on the compiler to just get rid of the
> variable when it sees it's never used?
Yes (and for un-optimized code it doesn't matter).
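To illustrate the point being relied on here with a standalone toy (not Xen code; FEATURE and handle() are made-up stand-ins for CONFIG_MEM_SHARING and the fault handler): when the option is off, the flag's only reader guards an empty block, so an optimizing compiler eliminates both the test and the variable, while an unoptimized build merely keeps a dead stack slot.

/*
 * Toy illustration only -- FEATURE is a hypothetical stand-in for
 * CONFIG_MEM_SHARING, handle() for hvm_hap_nested_page_fault().
 * Build: cc -O2 [-DFEATURE] toy.c
 */
#include <stdbool.h>
#include <stdio.h>

static int handle(int input)
{
    int rc = 1;
    bool flag = false;          /* declared unconditionally, as in the patch */

    if ( input < 0 )            /* set on some error path, like sharing_enomem */
        flag = true;

    if ( flag )
    {
#ifdef FEATURE
        /* Sole consumer of 'flag'.  With FEATURE undefined this block is
         * empty, and dead-code elimination drops 'flag' and the test. */
        fprintf(stderr, "handling feature-specific failure\n");
        rc = 0;
#endif
    }

    return rc;
}

int main(void)
{
    return handle(-1);
}

Disassembling the -O2 build with FEATURE undefined is expected to show no trace of flag; at -O0 it only costs an initialized stack slot (the "doesn't matter" case above).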
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel