Re: [Xen-devel] [PATCH v3 08/12] livepatch: Add support for inline asm hotpatching expectations
On 9/23/19 8:03 AM, Wieczorkiewicz, Pawel wrote:
> On 19. Sep 2019, at 18:06, Ross Lagerwall <ross.lagerwall@xxxxxxxxxx> wrote:
>> On 9/16/19 11:59 AM, Pawel Wieczorkiewicz wrote:
>>> This is the initial implementation of the expectations enhancement
>>> to improve inline asm hotpatching.
>>>
>>> Expectations are designed as an optional feature, since their main
>>> use is planned for inline asm hotpatching. The enabled flag allows
>>> controlling the expectation state.
>>> Each expectation has data and len fields that describe the data
>>> expected to be found at a given patching (old_addr) location. The
>>> len must not exceed the data array size. The data array size
>>> follows the size of the opaque array, since the opaque array holds
>>> the original data and therefore must match what is specified in the
>>> expectation (if enabled).
>>> The payload structure is modified, as each expectation structure is
>>> part of the livepatch_func structure and hence extends the payload.
>>>
>>> Each expectation is checked prior to the apply action (i.e. as late
>>> as possible, to check against the most current state of the code).
>>>
>>> For the replace action a new payload's expectations are checked
>>> AFTER all applied payloads are successfully reverted, but BEFORE
>>> the new payload is applied. That breaks the replace action's
>>> atomicity and, in case of an expectation check failure, would leave
>>> the system with all payloads reverted. That is obviously insecure.
>>> Use it with caution and act upon replace errors!
>>
>> [snip]
>>
>>>   * Lookup specified section and when exists assign its address to a specified hook.
>>>   * Perform section pointer and size validation: single hook sections must contain a
>>> @@ -1345,6 +1400,20 @@ static void livepatch_do_action(void)
>>>
>>>          if ( rc == 0 )
>>>          {
>>> +            /*
>>> +             * Make sure all expectation requirements are met.
>>> +             * Beware all the payloads are reverted at this point.
>>> +             * If expectations are not met the system is left in a
>>> +             * completely UNPATCHED state!
>>> +             */
>>> +            rc = livepatch_check_expectations(data);
>>> +            if ( rc )
>>> +            {
>>> +                printk(XENLOG_ERR LIVEPATCH "%s: SYSTEM MIGHT BE INSECURE: "
>>> +                       "Replace action has been aborted after reverting ALL payloads!\n",
>>> +                       data->name);
>>> +                break;
>>> +            }
>>> +
>>>              if ( is_hook_enabled(data->hooks.apply.action) )
>>>              {
>>>                  printk(XENLOG_INFO LIVEPATCH "%s: Calling apply action hook function\n", data->name);
>>> @@ -1798,6 +1867,11 @@ static int livepatch_action(struct xen_sysctl_livepatch_action *action)
>>>              break;
>>>          }
>>>
>>> +        /* Make sure all expectation requirements are met. */
>>> +        rc = livepatch_check_expectations(data);
>>> +        if ( rc )
>>> +            break;
>>> +
>>>          if ( is_hook_enabled(data->hooks.apply.pre) )
>>>          {
>>>              printk(XENLOG_INFO LIVEPATCH "%s: Calling pre-apply hook function\n", data->name);
>>
>> I wonder if this should be done in the critical region for consistency
>> with the replace code and to minimize the chance of something going
>> wrong between calling the sysctl and the patching actually happening.
>> Thoughts?
>
> I would not do it. At least not at the moment.
>
> The intention behind the expectation feature is to prevent an attempt
> to load a livepatch module with some inline asm patching on a machine
> whose memory content of the particular .text* section does not match.
> I wanted to do it as early as possible, before any mutating action of
> a pre-apply hook (and for inline asm patching the pre-apply hook can
> often be (ab)used) kicks in.
> Also, it would be good to have the expectation check always there,
> even with replaced default apply/revert action hooks.
>
> Apart from the replace action situation, the memory content of the
> affected .text* section is unlikely to change, so it should be good
> enough for the general case.
> When it comes to the replace action, I think we should discuss it
> further and maybe devise a solution on top of the current one.

OK, that makes sense.

Reviewed-by: Ross Lagerwall <ross.lagerwall@xxxxxxxxxx>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel
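As context for the discussion above, a minimal sketch of what an
expectation record and its pre-apply check could look like, based only
on the quoted commit message (an enabled flag, a len field bounded by
the data array, and a data array sized like the opaque buffer). The
names livepatch_expectation, LIVEPATCH_OPAQUE_SIZE and
livepatch_verify_expectation are illustrative assumptions, not
necessarily the identifiers used in the actual patch:

/*
 * Illustrative sketch only -- field and function names are assumptions
 * derived from the commit message, not a copy of the Xen patch.
 */
#include <stdint.h>
#include <string.h>
#include <errno.h>

#define LIVEPATCH_OPAQUE_SIZE 31             /* assumed opaque[] buffer size */

struct livepatch_expectation {
    uint8_t enabled;                         /* expectations are optional */
    uint8_t len;                             /* must not exceed sizeof(data) */
    uint8_t data[LIVEPATCH_OPAQUE_SIZE];     /* bytes expected at old_addr */
};

/* Compare the live bytes at old_addr with the expected ones. */
int livepatch_verify_expectation(const struct livepatch_expectation *exp,
                                 const void *old_addr)
{
    if ( !exp->enabled )
        return 0;                            /* nothing to verify */

    if ( !exp->len || exp->len > sizeof(exp->data) )
        return -EINVAL;                      /* malformed expectation */

    if ( memcmp(old_addr, exp->data, exp->len) )
        return -EINVAL;                      /* code at old_addr does not match */

    return 0;
}

In the patch, livepatch_check_expectations(data) presumably runs a
check of this kind for every function in the payload just before
patching; on the replace path, as the quoted hunk shows, it runs only
after all applied payloads have already been reverted, which is why a
failure there leaves the system fully unpatched.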