Re: [Xen-devel] [PATCH v2 07/10] vm_event: Add vm_event_ng interface
On 17.07.2019 16:41, Petre Ovidiu PIRCALABU wrote:
> On Wed, 2019-07-17 at 10:06 +0000, Jan Beulich wrote:
>> On 16.07.2019 19:06, Petre Pircalabu wrote:
>>> +static void vm_event_channels_free_buffer(struct vm_event_channels_domain *impl)
>>> {
>>> -    vm_event_ring_resume(to_ring(v->domain->vm_event_monitor));
>>> +    int i;
>>> +
>>> +    vunmap(impl->slots);
>>> +    impl->slots = NULL;
>>> +
>>> +    for ( i = 0; i < impl->nr_frames; i++ )
>>> +        free_domheap_page(mfn_to_page(impl->mfn[i]));
>>> }
>>
>> What guarantees that there are no mappings left of the pages you free
>> here? Sharing pages with guests is (so far) an (almost) irreversible
>> action, i.e. they may generally only be freed upon domain
>> destruction.
>> See gnttab_unpopulate_status_frames() for a case where we actually
>> make an attempt at freeing such pages (but where we fail the request
>> in case references are left in place).
>>
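For illustration, a minimal sketch of the kind of check being referred to above, loosely modelled on what gnttab_unpopulate_status_frames() does: refuse to free the shared buffer while any reference beyond the allocation one is still held. The helper name and the exact count_info test are illustrative only (and the locking a real implementation would need is ignored here); this is not the actual Xen or patch code.

    /*
     * Sketch: only free the pages if no reference besides the
     * allocation one remains on any of them; otherwise fail the
     * request, as gnttab_unpopulate_status_frames() does.
     */
    static int vm_event_channels_try_free_buffer(struct vm_event_channels_domain *impl)
    {
        unsigned int i;

        for ( i = 0; i < impl->nr_frames; i++ )
        {
            const struct page_info *pg = mfn_to_page(impl->mfn[i]);

            /* Anything besides the allocation reference left? */
            if ( (pg->count_info & PGC_count_mask) > 1 )
                return -EBUSY;
        }

        vunmap(impl->slots);
        impl->slots = NULL;

        for ( i = 0; i < impl->nr_frames; i++ )
            free_domheap_page(mfn_to_page(impl->mfn[i]));

        return 0;
    }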
> I've manually tested two cases and both work (no crashes):
> 1: introspected domain starts -> monitor starts -> monitor stops ->
> domain stops
> 2: introspected domain starts -> monitor starts -> domain stops ->
> monitor stops.
Well, testing is important, but doing tests like you describe won't
catch any misbehaving or malicious monitor domain.
> However, I will take a closer look at gnttab_unpopulate_status_frames
> and post a follow up email.
Thanks.
>> Furthermore, is there any guarantee that the pages you free here
>> were actually allocated? ->nr_frames gets set ahead of the actual
>> allocation.
>>
> vm_event_channels_free_buffer is called only from
> vm_event_channels_disable. The latter is called only if vm_event_check
> succeeds (from vm_event_cleanup and vm_event_domctl/VM_EVENT_DISABLE),
> and vm_event_check will only return true if vm_event_enable succeeded.
Hmm, looks like I was misled into believing the freeing function
would be called by vm_event_channels_enable(), which does not itself
free what it might have allocated upon error. So perhaps the bug is
there, not where I thought it would be.
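For illustration, one possible shape of the enable-side fix hinted at here: have the allocation path unwind only what it actually allocated on error, so the separate free routine is never asked to free pages that were never obtained. The function signature and field names are hypothetical; alloc_domheap_page()/vmap()/free_domheap_page() are the existing Xen primitives the patch already uses.

    /*
     * Hypothetical allocation path with explicit unwinding: on
     * failure, free only the frames allocated so far.
     */
    static int vm_event_channels_alloc_buffer(struct domain *d,
                                              struct vm_event_channels_domain *impl)
    {
        int i;

        for ( i = 0; i < impl->nr_frames; i++ )
        {
            struct page_info *page = alloc_domheap_page(d, 0);

            if ( !page )
                goto err;
            impl->mfn[i] = page_to_mfn(page);
        }

        impl->slots = vmap(impl->mfn, impl->nr_frames);
        if ( impl->slots )
            return 0;

     err:
        /* Free only what was allocated up to this point, in reverse order. */
        while ( --i >= 0 )
            free_domheap_page(mfn_to_page(impl->mfn[i]));
        return -ENOMEM;
    }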
>>> +int vm_event_ng_get_frames(struct domain *d, unsigned int id,
>>> +                           unsigned long frame, unsigned int nr_frames,
>>> +                           xen_pfn_t mfn_list[])
>>> +{
>>> +    struct vm_event_domain *ved;
>>> +    int i;
>>> +
>>> +    switch ( id )
>>> +    {
>>> +    case XEN_VM_EVENT_TYPE_MONITOR:
>>> +        ved = d->vm_event_monitor;
>>> +        break;
>>> +
>>> +    default:
>>> +        return -ENOSYS;
>>
>> Various other error codes might be fine here, but ENOSYS is not
>> (despite pre-existing misuse elsewhere in the tree).
>
> vm_event_domctl also returns -ENOSYS if the type is neither
> XEN_VM_EVENT_TYPE_PAGING, XEN_VM_EVENT_TYPE_MONITOR, nor
> XEN_VM_EVENT_TYPE_SHARING. I've just followed the existing convention.
Right - that's one of the pre-existing misuses that I was referring
to.
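A sketch of the alternative being asked for: keep the switch, but return something other than -ENOSYS for an unrecognised type, since -ENOSYS is meant for unimplemented hypercalls. The exact code chosen (-EOPNOTSUPP vs -EINVAL) is a judgement call, not something mandated here.

    switch ( id )
    {
    case XEN_VM_EVENT_TYPE_MONITOR:
        ved = d->vm_event_monitor;
        break;

    default:
        /* Unknown type: not an unimplemented hypercall, so not -ENOSYS. */
        return -EOPNOTSUPP;
    }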
>>> +    }
>>> +
>>> +    if ( !vm_event_check(ved) )
>>> +        return -EINVAL;
>>> +
>>> +    if ( frame != 0 || nr_frames != to_channels(ved)->nr_frames )
>>> +        return -EINVAL;
>>
>> Is there a particular reason for this all-or-nothing model?
>
> I've added this extra check because of the way the acquire_resource
> interface is designed. In our case, the memory is allocated at
> XEN_VM_EVENT_ENABLE time and the size is known beforehand: the number
> of pages needed to store (vcpus_count * sizeof vm_event_slot) bytes.
> However, acquire_resource needs a "nr_frames" parameter, which is
> computed in a similar manner in the libxc wrapper.
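For illustration, the size computation described above, written with Xen's PFN_UP() rounding; vcpus_count and struct vm_event_slot are taken from the patch/description, and the exact form used in the libxc wrapper may differ.

    /* Whole pages needed to hold one vm_event_slot per vCPU. */
    nr_frames = PFN_UP(vcpus_count * sizeof(struct vm_event_slot));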
Hmm, maybe I'm not up to date here: Paul, is the general resource
obtaining/mapping logic indeed meant to be an all-or-nothing thing
only?
Jan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel