RE: Xen and Qemu virtio question
Hi Vikram,

> Subject: Re: Xen and Qemu virtio question
>
> Hi Peng,
> Please see my comments below.
>
> On Wed, Jan 03, 2024 at 05:38:07AM +0000, Peng Fan wrote:
> > Hi Vikram, Oleksii
> >
> > I followed the "virtio for Xen on ARM" slides [1], but I have hit some
> > issues and have been stuck for about two days.
> >
> > I use the latest upstream QEMU master branch, not the qemu-xen.git repo.
> This is right.
>
> > My start command is below [2], but it seems I need to run `xl create
> > domu.cfg` first, otherwise it fails: the Xen hypervisor's
> > dm_op->rcu_lock_remote_domain_by_id returns failure if the domU has not
> > been created. My domU cfg is simple:
> >
> > kernel = "/run/media/boot-mmcblk1p1/Image"
> > disk = [ '/etc/xen/disk_1.img,,xvda,specification=virtio' ]
> > cmdline = "console=hvc0 root=/dev/xvda rw"
> > name = "DomU"
> > memory = 512
> > serial = "pty"
> >
> > I dropped qdisk from the disk entry, because the Xen tools report that
> > qdisk and virtio are not compatible.
> >
> > Also, should the xen_domid argument match the domU's domain id? Is there
> > any dynamic way to set xen_domid, i.e. start QEMU when the domU starts?
> Yes, it should be xen_domid. The virtio-disk patches below take care of
> invoking QEMU with the right domid.
>
> > Does the domU dts still need to include the virtio,mmio node?
> >
> > However, `xl create domu.cfg` relies on the virtio disk being ready, which
> > in turn needs QEMU to be started first. This confuses me. How do you
> > address this, or am I doing something wrong?
> >
> > Then, right after `xl create`, I quickly start QEMU, but I get
> > "failed to create ioreq server", and I see there is a mismatch.
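[Editor's note: one common way to resolve the `xl create` vs. QEMU ordering question above is to create the domU paused, look up its domid, start the backend, then unpause. This is a hedged sketch, not the downstream tooling's actual flow; the cfg path and guest name are taken from the mail, everything else is an assumption.]

```shell
#!/bin/sh
# Sketch: avoid hard-coding -xen-domid by creating the domU paused first.
# Assumes xl and a Xen-enabled qemu-system-aarch64 are in PATH.
set -e

DOMU_NAME=DomU          # must match 'name =' in domu.cfg

# 1. Create the domU paused (-p), so the guest does not probe virtio
#    devices before the backend exists.
xl create -p domu.cfg

# 2. Ask xl which domid it assigned, instead of guessing "1".
DOMID=$(xl domid "$DOMU_NAME")

# 3. Start QEMU as the virtio backend for exactly that domid.
qemu-system-aarch64 -xen-domid "$DOMID" \
    -chardev socket,id=libxl-cmd,path="qmp-libxl-$DOMID",server=on,wait=off \
    -mon chardev=libxl-cmd,mode=control \
    -xen-attach -name guest0 -vnc none -display none -nographic \
    -machine xenpvh -m 513 &

# 4. Only now let the guest run.
xl unpause "$DOMU_NAME"
```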
> > QEMU uses HVM_IOREQSRV_BUFIOREQ_ATOMIC to create the ioreq server, but
> > Xen's ioreq_server_create() returns failure:
> >
> >     if ( !IS_ENABLED(CONFIG_X86) && bufioreq_handling )
> >         return -EINVAL;
> >
> There is a downstream patch for this:
> https://github.com/Xilinx/xen/commit/be35b46e907c7c78fd23888d837475eb28334638
>
> > So I changed it to HVM_IOREQSRV_BUFIOREQ_OFF in QEMU, but then got:
> >
> > qemu-system-aarch64: failed to map ioreq server resources: error 2
> > handle=0xaaaad26c7f30
> > qemu-system-aarch64: xen hardware virtual machine initialisation
> > failed
> >
> > Do you have an out-of-the-box repo that works well, including QEMU and
> > Xen? I am trying virtio disk, but will move on to testing virtio-gpu soon.
> >
> > I am not sure whether there are downstream patches on your side to
> > address this and make it work well.
> >
> We have a few downstream patches for xen-tools.
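[Editor's note: the QEMU-side change Peng describes amounts to something like the following. This is a sketch against the libxendevicemodel API, not the actual downstream patch; the helper name is hypothetical.]

```c
/* Sketch of the change described above: request no buffered-ioreq page,
 * since Xen on Arm rejects any bufioreq_handling other than OFF.
 * create_ioreq_server() is a hypothetical wrapper, not QEMU's real code. */
#include <xendevicemodel.h>

static int create_ioreq_server(xendevicemodel_handle *dmod,
                               domid_t domid, ioservid_t *ioservid)
{
    /* Was HVM_IOREQSRV_BUFIOREQ_ATOMIC; Arm's ioreq_server_create()
     * returns -EINVAL for anything but HVM_IOREQSRV_BUFIOREQ_OFF. */
    return xendevicemodel_create_ioreq_server(dmod, domid,
                                              HVM_IOREQSRV_BUFIOREQ_OFF,
                                              ioservid);
}
```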
> For the virtio disk backend:
> https://github.com/Xilinx/xen/commit/947280803294bbb963f428423f679d074c60d632
>
> For virtio-net:
> https://github.com/Xilinx/xen/commit/32fcc702718591270e5c8928b7687d853249c882
>
> For changing the machine name to xenpvh (to align with QEMU changes):
> https://github.com/Xilinx/xen/commit/5f669949c9ffdb1947cb47038956b5fb8eeb072a
>
> With the above 4 patches it should work for you. Please re-compile Xen and
> xen-tools first, and finally compile QEMU against this Xen.

Thanks very much for the information. I will pick up your patches and give
them a try. BTW, any plan to upstream the patches?

Thanks,
Peng.

> > Thanks for your help.
> >
> > Thanks,
> > Peng.
> >
> > [1] https://www.youtube.com/watch?v=boRQ8UHc760
> >
> > [2] qemu-system-aarch64 -xen-domid 1 \
> >       -chardev socket,id=libxl-cmd,path=qmp-libxl-1,server=on,wait=off \
> >       -mon chardev=libxl-cmd,mode=control \
> >       -chardev socket,id=libxenstat-cmd,path=qmp-libxenstat-1,server=on,wait=off \
> >       -mon chardev=libxenstat-cmd,mode=control \
> >       -xen-attach -name guest0 -vnc none -display none -nographic \
> >       -machine xenpvh -m 513 &