[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

Re: [Xen-devel] [PATCH 2/2] Xen/mem_event: Prevent underflow of vcpu pause counts

  • To: Andres Lagar Cavilla <andres@xxxxxxxxxxxxxxxx>
  • From: "Aravindh Puthiyaparambil (aravindp)" <aravindp@xxxxxxxxx>
  • Date: Thu, 17 Jul 2014 18:57:33 +0000
  • Accept-language: en-US
  • Cc: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Tim Deegan <tim@xxxxxxx>, Jan Beulich <JBeulich@xxxxxxxx>, Xen-devel <xen-devel@xxxxxxxxxxxxx>
  • Delivery-date: Thu, 17 Jul 2014 18:57:45 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xen.org>
  • Thread-index: AQHPoe+kA8ia8o/QIU60jJK2IdTsdpukm31ggABViQD//6yPMA==
  • Thread-topic: [Xen-devel] [PATCH 2/2] Xen/mem_event: Prevent underflow of vcpu pause counts

>> +void mem_event_vcpu_unpause(struct vcpu *v) {

>> +    if ( test_and_clear_bool(v->paused_for_mem_event) )
>And now that we consider more than one mem event piling up to pause a
>vcpu, this has to become an atomic counter, which unpauses on zero, and
>takes care of underflow.

Very true. I have seen this event pile-up occur in practice in our product.

The problem then becomes how to tell real event responses, which should decrement the pause count, apart from spurious responses from the toolstack. IOW, how to avoid unpausing the vcpu when the count reaches zero because of bad responses. I think the answer is: you can't; if the toolstack is evil, behavior is undefined and we have bigger fish to fry.


Would that be a problem? AFAIK, you can have only one mem_event listener per domain at a time.
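The counter-based scheme discussed above could be sketched roughly as below. This is only an illustration, not the actual Xen implementation: the struct and field names are hypothetical, and it uses standalone C11 atomics instead of Xen's own atomic helpers and vcpu_pause/vcpu_unpause machinery (stubbed out in comments).

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <assert.h>

/* Hypothetical per-vcpu state: count of outstanding mem_event pauses. */
struct vcpu_sketch {
    atomic_int mem_event_pause_count;
};

/* Each mem_event pause request bumps the counter. */
static void mem_event_vcpu_pause(struct vcpu_sketch *v)
{
    atomic_fetch_add(&v->mem_event_pause_count, 1);
    /* In the hypervisor, vcpu_pause_nosync(v) would go here. */
}

/*
 * Unpause only decrements when the count is positive, so a spurious
 * response from the toolstack cannot underflow it.  Returns true when
 * the count hits zero, i.e. when the vcpu would actually be unpaused.
 */
static bool mem_event_vcpu_unpause(struct vcpu_sketch *v)
{
    int old = atomic_load(&v->mem_event_pause_count);

    do {
        if ( old == 0 )
            return false;                    /* spurious response: ignore */
    } while ( !atomic_compare_exchange_weak(&v->mem_event_pause_count,
                                            &old, old - 1) );

    if ( old == 1 )
    {
        /* Count reached zero: vcpu_unpause(v) would go here. */
        return true;
    }
    return false;                            /* still paused by others */
}
```

The compare-and-exchange loop is what closes the race between two responders decrementing at once: a plain load/store pair could drop a decrement or go below zero, whereas here the decrement only lands if the count is still the value we observed.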



Xen-devel mailing list