[PATCH 02/10] x86/ucode: Delete the microcode_init() initcall
The comment highlights just how bogus this really is. By the time an
initcall runs, the boot allocator is long gone, and bootstrap_unmap() is a
no-op.
The fact that there is nothing to do should be a giant red flag about the
validity of the mappings "being freed". Indeed, they both constitute
use-after-frees.
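
As an aside for readers unfamiliar with the pattern, here is a minimal
standalone C sketch (illustrative only, not Xen code) of what a dangling
static data/size pair looks like: the allocation is gone, but the fields
still advertise it, so any consumer gating on a non-zero size is working
with freed memory.

/* Minimal sketch, not Xen code: the use-after-free pattern described
 * above.  A static {data, size} pair keeps advertising an allocation
 * after it has been returned to the allocator, so any later consumer
 * gating on "size != 0" is working with freed memory. */
#include <stdio.h>
#include <stdlib.h>

static struct {
    void *data;
    size_t size;
} blob;

static void early_alloc(void)      /* stand-in for the boot-time mapping */
{
    blob.size = 16;
    blob.data = malloc(blob.size);
}

static void late_free(void)        /* stand-in for the teardown */
{
    free(blob.data);               /* memory handed back ... */
    /* ... but data/size are not cleared, leaving them dangling. */
}

int main(void)
{
    early_alloc();
    late_free();

    if ( blob.size )               /* still non-zero, so the check passes */
        printf("stale blob: %zu bytes at %p\n", blob.size, blob.data);

    return 0;
}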
We could move the size/data/end clobbering into microcode_init_cache(),
which is the final consumer of the information, but we intend to delete
these static variables entirely, and it's less churn to leave them dangling
for a few patches.
No functional change.
Signed-off-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
---
CC: Jan Beulich <JBeulich@xxxxxxxx>
CC: Roger Pau Monné <roger.pau@xxxxxxxxxx>
CC: Daniel P. Smith <dpsmith@xxxxxxxxxxxxxxxxxxxx>
After the $N'th rearranging, this could actually be merged into "x86/ucode:
Drop ucode_mod and ucode_blob" with no effect on the intermediate patches.
---
xen/arch/x86/cpu/microcode/core.c | 22 ----------------------
1 file changed, 22 deletions(-)
diff --git a/xen/arch/x86/cpu/microcode/core.c b/xen/arch/x86/cpu/microcode/core.c
index 9a2cc631d2aa..3e815f1880af 100644
--- a/xen/arch/x86/cpu/microcode/core.c
+++ b/xen/arch/x86/cpu/microcode/core.c
@@ -758,28 +758,6 @@ int microcode_update(XEN_GUEST_HANDLE(const_void) buf,
     return continue_hypercall_on_cpu(0, microcode_update_helper, buffer);
 }
 
-static int __init cf_check microcode_init(void)
-{
-    /*
-     * At this point, all CPUs should have updated their microcode
-     * via the early_microcode_* paths so free the microcode blob.
-     */
-    if ( ucode_blob.size )
-    {
-        bootstrap_unmap();
-        ucode_blob.size = 0;
-        ucode_blob.data = NULL;
-    }
-    else if ( ucode_mod.mod_end )
-    {
-        bootstrap_unmap();
-        ucode_mod.mod_end = 0;
-    }
-
-    return 0;
-}
-__initcall(microcode_init);
-
 /* Load a cached update to current cpu */
 int microcode_update_one(void)
 {
--
2.39.5