
Re: HVM performance once again


  • To: Marek Marczykowski-Górecki <marmarek@xxxxxxxxxxxxxxxxxxxxxx>
  • From: Roger Pau Monné <roger.pau@xxxxxxxxxx>
  • Date: Wed, 31 May 2023 10:24:44 +0200
  • Cc: xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>
  • Delivery-date: Wed, 31 May 2023 08:25:20 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>

On Wed, May 31, 2023 at 09:49:32AM +0200, Marek Marczykowski-Górecki wrote:
> Hi,
> 
> I returned to HVM performance once again, this time looking at the
> impact of PCI passthrough on network throughput.
> The setup:
>  - Xen 4.17
>  - Linux 6.3.2 in all domUs
>  - iperf -c running in a PVH domU (call it "client")
>  - iperf -s running in an HVM domU (call it "server")
>  - the client's netfront has its backend directly in the server
>  - the frontend's "trusted" property is set to 0
>  - the HVM has qemu in a stubdomain in all cases
>  - no intentional differences between the HVM configurations besides
>    the presence of a PCI device (it is a network card, but it was not
>    involved in the traffic)
> 
> And now the results:
>  - server is a plain HVM: ~6Gbps
>  - server is an HVM with a PCI device passed through: ~3Gbps
> 
> Any idea why there is such a huge difference?

Just a wild guess: when a domain has a PCI device assigned, the cache
types requested by the guest are enforced; otherwise everything
defaults to write-back (see epte_get_entry_emt()).
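
To illustrate the idea, here is a minimal, hypothetical sketch of that
decision (written from memory, not copied from the tree; the real
epte_get_entry_emt() in xen/arch/x86/mm/p2m-ept.c handles many more
cases, such as direct MMIO and IOMMU snoop control, and
effective_cache_type() below is a made-up placeholder):

    /* Simplified sketch of EPT effective memory type (EMT) selection. */
    int sketch_get_entry_emt(const struct domain *d, bool *ipat)
    {
        if ( !has_arch_pdevs(d) )
        {
            /*
             * No PCI device assigned: the guest cannot do DMA, so its
             * cache attributes can safely be ignored and everything is
             * mapped write-back.
             */
            *ipat = true;       /* ignore guest PAT */
            return X86_MT_WB;
        }

        /*
         * Device assigned: honour whatever effective cache type the
         * guest configured via MTRRs/PAT, which may be UC and slow.
         */
        *ipat = false;
        return effective_cache_type(d);   /* placeholder */
    }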

If you are not using the PCI device, you might want to play with
epte_get_entry_emt() and see if that makes a difference.
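
As a quick experiment (for measurement only: with a device actually
doing DMA this is unsafe, since it may rely on UC/WC mappings, and the
exact signature of the function may differ between Xen versions), you
could short-circuit the function to always return write-back:

    /* xen/arch/x86/mm/p2m-ept.c -- DEBUG HACK, not for production. */
    int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
                           unsigned int order, bool *ipat, p2m_type_t type)
    {
        /* Pretend no device is assigned: force write-back everywhere. */
        *ipat = true;
        return X86_MT_WB;

        /* ... original function body below becomes unreachable ... */
    }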

Do you see the same performance regression when testing on AMD?

Thanks, Roger.



 

