Re: [Xen-devel] Xen / EC2 release criteria proposal
On Sat, 2019-08-10 at 17:01 +0300, Matt Wilson wrote:
> On Fri, Aug 09, 2019 at 05:56:11PM -0700, Adam Williamson wrote:
> [...]
> > So it seems like this would also be a good opportunity to revisit and
> > nail down more specifically exactly what our cloud requirements are.
> > bcotton suggested that we require two sample instance types to be
> > tested, c5.large (KVM) and t3.large (Xen). (I've also mailed Thomas
> > Cameron, ex-of Red Hat, now of Amazon, for his opinion, as it seemed
> > like it might be worthwhile - he's promised to get back to me).
> >
> > So, for now, let me propose this as a trial balloon: we rewrite the
> > above criterion to say:
> >
> > "Release-blocking cloud disk images must be published to Amazon EC2 as
> > AMIs, and these must boot successfully and meet other relevant release
> > criteria on c5.large and t3.large instance types."
>
> Hi Adam,
>
> Thanks for bringing this up. It's good to revisit things from time to
> time as the world changes.

Thanks for the feedback, Matt!

> Of the two instances that you propose, neither runs on Xen. The T2
> instances run on Xen, but T3 uses the KVM-based Nitro hypervisor.

That'll teach me to trust Ben...;)

> To ensure that a Linux based AMI functions across all of the devices
> and operating modes of EC2, you need to cover:
>
> x86 platforms
> -------------
> * Xen domU with only PV interfaces (e.g., M3 instances)
> * Xen domU with Intel 82599 virtual functions for Enhanced Networking
>   (e.g., C3 instances running in a VPC)
> * Xen domU with Enhanced Networking Adapter (e.g., R4 instances)
> * Xen domU with NVMe local instance storage (e.g., virtualized I3
>   instances)
> * Xen domU with more than 32 vCPUs (e.g., c4.8xlarge)
> * Xen domU with four NUMA nodes (e.g., x1.32xlarge)
> * Xen domU with maximum RAM available in EC2 (x1e.32xlarge)
> * KVM guest with consistent performance (e.g., c5.large)
> * KVM guest with burstable performance (e.g., t3.large)
> * KVM guest with local NVMe storage (e.g., c5d.large)
> * KVM guest with 100 Gbps networking and Elastic Fabric Adapter
>   (c5n.18xlarge)
> * KVM guest on AMD processors (e.g., m5a.large)
> * KVM guest on AMD processors with maximum NUMA nodes (e.g.,
>   m5a.24xlarge)
> * Bare metal Broadwell (i3.metal)
> * Bare metal Skylake (m5.metal)
> * Bare metal Cascade Lake (c5.metal)
>
> Arm platforms
> -------------
> * KVM guest on Arm with 1 CPU cluster (a1.xlarge)
> * KVM guest on Arm with 2 CPU clusters (a1.2xlarge)
> * KVM guest on Arm with 4 CPU clusters (a1.4xlarge)
>
> Not all of these are going to cause an image to fail to boot up to the
> point where a customer can SSH in. But a good number of these have
> caused early boot problems in the past (e.g., >32 vCPUs on PVHVM Xen).

Thanks a lot for the list, it's very helpful. It's also very long,
though. :P Still, we can certainly use it as a base.
--
Adam Williamson
Fedora QA Community Monkey
IRC: adamw | Twitter: AdamW_Fedora | identi.ca: adamwfedora
http://www.happyassassin.net
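
To make the coverage matrix above a little more concrete, below is a
minimal boto3 sketch of one way the "does it boot" part of that matrix
could be exercised automatically. Everything specific in it is an
assumption for illustration only: the AMI ID, the key pair name, and the
particular subset of instance types are placeholders rather than anything
agreed in this thread, and a real run would also need region/subnet
handling plus a separate aarch64 image for the a1 Arm types.

# Rough sketch (not an existing Fedora QA tool): launch an AMI on a
# representative subset of the instance types listed above, wait for the
# EC2 reachability checks, then clean up. AMI_ID and KEY_NAME are
# placeholders; the a1.* Arm types would need a separate aarch64 image
# and are left out, and some families may need extra launch parameters
# (e.g. a specific subnet) that are omitted here.
import boto3

AMI_ID = "ami-0123456789abcdef0"   # placeholder x86_64 image under test
KEY_NAME = "qa-smoke-test"         # hypothetical key pair name

INSTANCE_TYPES = [
    "m3.large",      # Xen domU, PV-only interfaces
    "c4.8xlarge",    # Xen domU with more than 32 vCPUs
    "c5.large",      # KVM (Nitro), consistent performance
    "t3.large",      # KVM (Nitro), burstable performance
    "c5d.large",     # KVM with local NVMe storage
    "m5a.large",     # KVM on AMD processors
]

ec2 = boto3.client("ec2")

for itype in INSTANCE_TYPES:
    resp = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType=itype,
        KeyName=KEY_NAME,
        MinCount=1,
        MaxCount=1,
    )
    instance_id = resp["Instances"][0]["InstanceId"]
    print(f"{itype}: launched {instance_id}, waiting for status checks")

    # "instance_status_ok" means both the system and instance reachability
    # checks passed -- a rough proxy for "boots far enough to SSH in".
    waiter = ec2.get_waiter("instance_status_ok")
    try:
        waiter.wait(InstanceIds=[instance_id])
        print(f"{itype}: PASS")
    except Exception as exc:
        print(f"{itype}: FAIL ({exc})")
    finally:
        ec2.terminate_instances(InstanceIds=[instance_id])

The EC2 instance status checks are used here as a stand-in for Matt's
"boots up to the point where a customer can SSH in" bar; an actual
release-validation job would go on to SSH in and run the relevant
release criteria checks as well.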