
Re: [PATCH 3/4] x86/vmx: cleanup vmx.c


  • To: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>
  • From: Jan Beulich <jbeulich@xxxxxxxx>
  • Date: Tue, 21 Feb 2023 14:15:44 +0100
  • Cc: Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, xen-devel@xxxxxxxxxxxxxxxxxxxx
  • Delivery-date: Tue, 21 Feb 2023 13:15:52 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On 21.02.2023 12:35, Xenia Ragiadakou wrote:
> On 2/21/23 13:26, Jan Beulich wrote:
>> On 17.02.2023 19:48, Xenia Ragiadakou wrote:
>>> Do not include the headers:
>>>    asm/hvm/vpic.h
>>>    asm/hvm/vpt.h
>>>    asm/io.h
>>>    asm/mce.h
>>>    asm/mem_sharing.h
>>>    asm/regs.h
>>>    public/arch-x86/cpuid.h
>>>    public/hvm/save.h
>>> because none of the declarations and macro definitions in them is used.
>>> Sort the remaining headers alphabetically.
>>>
>>> Rearrange the code to replace all forward declarations with the function
>>> definitions.
>>>
>>> Replace double new lines with one.
>>>
>>> Reduce scope of nvmx_enqueue_n2_exceptions() to static because it is used
>>> only in this file.
>>>
>>> Move vmx_update_debug_state() to vmcs.c because it is used only in that
>>> file, and limit its scope to that file by declaring it static and removing
>>> its declaration from vmx.h.
>>>
>>> Take the opportunity to remove all trailing spaces in vmx.c.
>>>
>>> No functional change intended.
>>>
>>> Signed-off-by: Xenia Ragiadakou <burzalodowa@xxxxxxxxx>
>>> ---
>>>   xen/arch/x86/hvm/vmx/vmcs.c            |   12 +
>>>   xen/arch/x86/hvm/vmx/vmx.c             | 5844 ++++++++++++------------
>>>   xen/arch/x86/include/asm/hvm/vmx/vmx.h |    1 -
>>>   3 files changed, 2913 insertions(+), 2944 deletions(-)
>>
>> I'm afraid this is close to unreviewable and hence absolutely needs
>> splitting. With this massive amount of re-arrangement (it's half of
>> vmx.c, after all) I'm also concerned about losing "git blame"-ability
>> for fair parts of the code there.
> 
> I understand. Let me split out the other changes from the one that 
> rearranges the code. Do you agree in principle, or would you expect 
> something else?

Well, the large amount of code movement wants at least one other party
(e.g. Kevin, Andrew, or Roger) agreeing with your approach. As said, I
for one don't like this disruption of otherwise fairly easy history
determination (which can be particularly helpful, e.g. when you want to
find the commit to reference in a Fixes: tag).
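
To illustrate the kind of lookup meant here, a minimal sketch using git's
line-history tracking; the function name and file path are taken from the
diffstat above purely as an example:

    # Show the commits that touched a particular function, e.g. to pick
    # the right commit for a Fixes: tag.
    git log -L :vmx_update_debug_state:xen/arch/x86/hvm/vmx/vmx.c

    # Or blame the file while ignoring whitespace and detecting moved or
    # copied lines (-w -M -C).
    git blame -w -M -C xen/arch/x86/hvm/vmx/vmx.c

Large-scale rearrangement within a file means such queries tend to stop at
the rearranging commit rather than at the change which actually introduced
the code.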

Jan
