Re: [PATCH v3 0/4] Yocto Gitlab CI
Hi Michal,

> On 10 Nov 2022, at 07:34, Michal Orzel <michal.orzel@xxxxxxx> wrote:
>
> Hi Stefano,
>
> On 10/11/2022 01:18, Stefano Stabellini wrote:
>>
>> On Mon, 7 Nov 2022, Michal Orzel wrote:
>>> Hi Bertrand and Stefano,
>>>
>>> On 31/10/2022 16:00, Bertrand Marquis wrote:
>>>>
>>>> Hi Michal,
>>>>
>>>>> On 31 Oct 2022, at 14:39, Michal Orzel <michal.orzel@xxxxxxx> wrote:
>>>>>
>>>>> Hi Bertrand,
>>>>>
>>>>> On 31/10/2022 15:00, Bertrand Marquis wrote:
>>>>>>
>>>>>> This patch series is a first attempt to check if we could use Yocto in
>>>>>> gitlab ci to build and run xen on qemu for arm, arm64 and x86.
>>>>>>
>>>>>> The first patch creates a container with all elements required to
>>>>>> build Yocto, a checkout of the required yocto layers, and a helper
>>>>>> script to build and run xen on qemu with yocto.
>>>>>>
>>>>>> The second patch creates containers with a first build of yocto already
>>>>>> done, so that subsequent builds with those containers only rebuild what
>>>>>> was changed and take the rest from the cache.
>>>>>>
>>>>>> The third patch adds a way to easily clean locally created
>>>>>> containers.
>>>>>>
>>>>>> This is mainly for discussion and sharing, as there are still some
>>>>>> issues/problems to solve:
>>>>>> - building the qemu* containers can take several hours depending on the
>>>>>>   network bandwidth and computing power of the machine where they are
>>>>>>   created
>>>>> This is not really an issue as the build of the containers occurs on the
>>>>> local machines before pushing them to the registry. Also, building the
>>>>> containers will only be required for new Yocto releases.
>>>>>
>>>>>> - the produced containers containing the cache have a size between 8 and
>>>>>>   12GB depending on the architecture. We might need to store the build
>>>>>>   cache somewhere else to reduce the size. If we choose to have one
>>>>>>   single image, the needed size is around 20GB and we need up to 40GB
>>>>>>   during the build, which is why I split them.
>>>>>> - during the build and run, we use a bit more than 20GB of disk, which is
>>>>>>   over the allowed size in gitlab
>>>>> As we could see during v2 testing, we do not have any space restrictions
>>>>> on the Xen GitLab and I think we already decided to have the Yocto
>>>>> integrated into our CI.
>>>>
>>>> Right, I should have modified this chapter to be coherent with your latest
>>>> tests.
>>>> Sorry for that.
>>>>
>>>>> I will do some testing and get back to you with results + review.
>>>
>>> I did some testing and here are the results:
>>>
>>> In its current form this series will fail when running CI, because the Yocto
>>> containers are based on "FROM ubuntu:22.04" (there is no platform prefix),
>>> which means that the containers are built for the host architecture (in my
>>> case, and in 99% of the cases of a local build, that will be x86). In GitLab
>>> we have 2 runners (arm64 and x86_64). This means that all the test jobs would
>>> need to specify x86_64 as a tag to keep the current behavior.
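(For reference: a minimal sketch of what a "platform prefix" or an explicit platform selection could look like when building such a container; the image tag and dockerfile name below are only illustrative, not the ones used by this series.)

```
# Illustrative only: two ways to pin the container's target architecture
# instead of inheriting the build host's architecture.
#
# (a) platform-prefixed base image inside the dockerfile:
#       FROM arm64v8/ubuntu:22.04
#
# (b) selecting the platform at build time (cross builds rely on
#     qemu-user-static binfmt emulation, which is what makes them slow):
docker build --platform linux/arm64 -t yocto-qemuarm64 -f yocto.dockerfile .
```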
>>> After I built all the containers on my x86 machine, I pushed them to the
>>> registry and the pipeline was successful:
>>> https://gitlab.com/xen-project/people/morzel/xen-orzelmichal/-/pipelines/686853939
>>
>> When I tested the previous version of this series I built the
>> containers natively on ARM64, so that is also an option.
>>
>>> Here is the diff on patch no. 3 to make the series work (using the x86 tag
>>> and a small improvement to include needs: []):
>>> ```
>>> diff --git a/automation/gitlab-ci/test.yaml b/automation/gitlab-ci/test.yaml
>>> index 5c620fefce59..52cccec6f904 100644
>>> --- a/automation/gitlab-ci/test.yaml
>>> +++ b/automation/gitlab-ci/test.yaml
>>> @@ -65,6 +65,9 @@
>>>      paths:
>>>        - 'logs/*'
>>>      when: always
>>> +  needs: []
>>> +  tags:
>>> +    - x86_64
>>>
>>>  # Test jobs
>>>  build-each-commit-gcc:
>>> @@ -206,19 +209,13 @@ yocto-qemuarm64:
>>>    extends: .yocto-test
>>>    variables:
>>>      YOCTO_BOARD: qemuarm64
>>> -  tags:
>>> -    - arm64
>>>
>>>  yocto-qemuarm:
>>>    extends: .yocto-test
>>>    variables:
>>>      YOCTO_BOARD: qemuarm
>>> -  tags:
>>> -    - arm32
>>>
>>>  yocto-qemux86-64:
>>>    extends: .yocto-test
>>>    variables:
>>>      YOCTO_BOARD: qemux86-64
>>> -  tags:
>>> -    - x86_64
>>> ```
>>>
>>> Now, the logical way would be to build the x86 yocto container for x86, the
>>> arm64 one for arm64, and arm32 on arm64 or x86.
>>> I tried building the qemuarm64 container specifying target arm64 on x86.
>>> After 15h, only 70% of the Yocto build was completed and there was an error
>>> with glibc (the local build of the container for the host arch takes at most
>>> 2h on my machine).
>>> This enormous amount of time is due to the qemu docker emulation that
>>> happens behind the scenes (I checked on 2 different machines).
>>>
>>> So we have 3 solutions:
>>> 1) Build and run these containers for/on x86_64:
>>>  - local users can build the containers on local machines that are almost
>>>    always x86 based, in a short period of time,
>>>  - "everyone" can build/push the containers once there is a new Yocto release
>>>  - slightly slower CI build time
>>> 2) Build and run these containers for specific architectures:
>>>  - almost a no-go for local users on x86 machines (unless using more than 16
>>>    threads (which I used) and willing to wait 2 days for the build)
>>>  - faster CI build time (the arm64 runner is faster than the x86 one)
>>>  - someone with an arm64 based machine (not that common) would have to build
>>>    and push the containers
>>> 3) Try to use CI to build and push the containers to the registry
>>>  - it could be possible, but what about local users?
>>
>> From a gitlab-ci perspective, given the runners we currently have, we
>> have to go with option 2). We don't have enough resources available on
>> the x86 runner to run the Yocto jobs on x86.
>>
> That is what I reckon too. Running the Yocto build/test on CI using the x86
> runner will always be slower.
> So, if we go with this solution, then the following is needed:
> 1. Modify the test jobs so that yocto-qemu{arm64/arm} uses the arm64 tag (to
>    be taken by the arm64 runner) and use the x86_64 tag for yocto-qemux86-64.
> 2. Come up with a solution to build the yocto containers automatically for
>    the above platforms, plus the possibility to specify the platform for
>    local users. Right now, these containers are always built for the host
>    machine platform, so without doing tricks like adding --platform or a
>    prefix to the image name, one cannot build Yocto containers that would be
>    ready to be pushed to the registry. We need a clean solution that does not
>    require the user to do tricks.
>
> The only drawback of this solution is that the person building the
> yocto-qemu{arm64/arm} container and willing to push it to the registry
> needs to have access to an arm64 machine.

I think we need to find a solution working for both possibilities.
And we also need a solution so that one can have both kinds of images, so the
host machine should be encoded in the container name somehow.

>
>>
>>> Regardless of what we choose, we need to keep in mind that the biggest
>>> advantage of the Yocto build/run is that it allows/should allow local users
>>> to perform basic testing for all the Xen supported architectures. This is
>>> because everything happens in one place with one command.
>>
>> That's right, but it should be possible to allow the Yocto containers to
>> also build and run correctly locally on x86, right? The arm/x86 tag in
>> test.yaml doesn't matter when running the containers locally anyway.

All in all, test.yaml only matters for gitlab.
Maybe we could have it supporting both cases but only use one?

Cheers
Bertrand

>
> ~Michal
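(For illustration: a minimal sketch of the per-architecture tagging described in point 1 above, reusing the job names and tags from the diff quoted earlier; the exact form eventually adopted by the series may differ.)

```
yocto-qemuarm64:
  extends: .yocto-test
  variables:
    YOCTO_BOARD: qemuarm64
  tags:
    - arm64

yocto-qemux86-64:
  extends: .yocto-test
  variables:
    YOCTO_BOARD: qemux86-64
  tags:
    - x86_64
```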