Re: [Xen-devel] PoD code killing domain before it really gets started
>>> On 07.08.12 at 17:08, George Dunlap <george.dunlap@xxxxxxxxxxxxx> wrote:
> On 07/08/12 14:29, Jan Beulich wrote:
>> Hmm, that would look more like a hack to me.
>>
>> How about doing the initial check as suggested earlier
>>
>>     if ( p2md->pod.entry_count == 0 && d->tot_pages > 0 )
>>         goto out;
>>
>> and the latter check in a similar way
>>
>>     if ( pod_target > p2md->pod.entry_count && d->tot_pages > 0 )
>>         pod_target = p2md->pod.entry_count;
>>
>> (which would still take care of any ballooning activity)? Or are
>> there any other traps to fall into?
> The "d->tot_pages > 0" seems more like a hack to me. :-)  What's hackish
> about having an interface like this?
>  * allocate_pod_mem()
>  * for() { populate_pod_mem() }
>  * [Boot VM]
>  * set_pod_target()
>
> Right now set_pod_mem() is used both for initial allocation and for
> adjustments.  But it seems like there's good reason to make a
> distinction.

So that one didn't take hypercall preemption into account. While
adjusting this to use "d->tot_pages > p2md->pod.count", which in turn
I converted to simply "populated > 0" (moving the calculation of
"populated" earlier), I noticed that there's quite a bit of type
inconsistency: I first stumbled across p2m_pod_set_mem_target()'s
"pod_target" being "unsigned" instead of "unsigned long", and then
noticed that none of the fields in struct p2m_domain's pod
sub-structure are "long", even though at least the GFNs ought to be
(and the counts should be too, not only for consistency, but also
because they're signed, and hence overflow earlier).

As a consequence, the PoD code itself would also need careful review,
as there are a number of questionable uses of plain "int" (and, quite
the opposite, two cases of "order" pointlessly being "unsigned long").

Jan

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel
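[Editorial sketch] The two checks under discussion, combined with the
preemption-safe "populated > 0" form Jan describes, might look roughly
like the following. This is a standalone toy model, not the actual Xen
code: the struct and function names are made up (loosely based on the
identifiers quoted in the thread), and the counts use "unsigned long"
per the type remarks above.

```c
#include <assert.h>

/* Hypothetical, simplified stand-ins for the Xen structures discussed
 * in the thread -- names and layout are assumptions, not real Xen code. */
struct pod_state {
    unsigned long entry_count;   /* outstanding PoD entries */
    unsigned long count;         /* pages currently held in the PoD cache */
};

struct domain_model {
    unsigned long tot_pages;     /* pages owned by the domain */
    struct pod_state pod;
};

/*
 * Model of the proposed target adjustment.  Returns 0 when the early
 * "goto out" path is taken (non-PoD domain that already has populated
 * pages), 1 otherwise; *pod_target may be clamped on the way.
 *
 * "populated" is computed as tot_pages minus the PoD cache size, so a
 * preempted hypercall that has already moved pages into the cache does
 * not make the domain look populated -- the refinement over a plain
 * "d->tot_pages > 0" test.
 */
static int set_pod_target(struct domain_model *d, unsigned long *pod_target)
{
    unsigned long populated = d->tot_pages - d->pod.count;

    /* Initial check: nothing to do for a non-PoD domain that is
     * already running. */
    if ( d->pod.entry_count == 0 && populated > 0 )
        return 0;   /* "goto out" in the original suggestion */

    /* Once the domain has populated pages, never raise the target
     * above the number of outstanding PoD entries (the ballooning
     * case mentioned in the thread). */
    if ( *pod_target > d->pod.entry_count && populated > 0 )
        *pod_target = d->pod.entry_count;

    return 1;
}
```

In this model, a freshly built PoD domain (all pages still in the cache,
nothing populated) passes both checks untouched, while a running non-PoD
domain bails out early and a running PoD domain gets its target clamped.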