Re: [RFC PATCH V1 01/12] hvm/ioreq: Make x86's IOREQ feature common
Hi Stefano,

On 21/08/2020 01:53, Stefano Stabellini wrote:
> On Thu, 20 Aug 2020, Oleksandr wrote:
>> On 11/08/2020 23:48, Stefano Stabellini wrote:
>>>> I have the impression that we disagree on what the Device Emulator is
>>>> meant to do. IMHO, the goal of the device emulator is to emulate a
>>>> device in an arch-agnostic way.
>>> That would be great in theory, but I am not sure it is achievable: if
>>> we use an existing emulator like QEMU, even a single device has to fit
>>> into QEMU's view of the world, which makes assumptions about host
>>> bridges and apertures. It is impossible today to build QEMU in an
>>> arch-agnostic way; it has to be tied to an architecture.

AFAICT, the only reason QEMU cannot be built in an arch-agnostic way is TCG. If TCG weren't built in, you could easily write a machine that doesn't depend on the instruction set. The proof is that, today, we are using QEMU x86 to serve Arm64 guests, although this is only for PV drivers.

>>> I realize we are not building this interface for QEMU specifically,
>>> but even if we try to make the interface arch-agnostic, in reality the
>>> emulators won't be arch-agnostic.

This depends on your goal. If your goal is to write a standalone emulator for a single device, then it is entirely possible to make it arch-agnostic. Per the above, this would even be possible if you were emulating a set of devices. What I want to avoid is requiring all the emulators to contain arch-specific code just because it is easier to get QEMU working on Xen on Arm.

>>> If we send a port-mapped I/O request to qemu-system-aarch64, who knows
>>> what is going to happen: it is a code path that is not explicitly
>>> tested.

Maybe, maybe not. To me these are mostly software issues that can easily be mitigated if we do proper testing...

>> Could we please find a common ground on whether the PIO handling needs
>> to be implemented on Arm or not? At least for the current patch series.

> Can you do a test on QEMU to verify which address space the PIO BARs are
> using on ARM?
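As background for the exchange above: the ioreq interface that Xen shares with device models tags each request with a type. The sketch below is a simplified mirror of the `ioreq_t` layout from Xen's public headers (`xen/include/public/hvm/ioreq.h`) — only the fields relevant to this discussion are kept, so treat it as an illustration rather than the real structure. It shows the port-I/O vs MMIO distinction the thread is debating:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Simplified mirror of Xen's ioreq_t; only a subset of fields is kept. */
#define IOREQ_TYPE_PIO   0   /* port-mapped I/O (in/out instructions on x86) */
#define IOREQ_TYPE_COPY  1   /* memory-mapped I/O */

#define IOREQ_WRITE 0
#define IOREQ_READ  1

struct ioreq {
    uint64_t addr;   /* guest physical address, or port number for PIO */
    uint32_t size;   /* access width in bytes */
    uint8_t  type;   /* IOREQ_TYPE_* */
    uint8_t  dir;    /* IOREQ_READ or IOREQ_WRITE */
};

/* A device model dispatches on the type. The open question in this thread
 * is whether an Arm device model should ever see IOREQ_TYPE_PIO at all. */
static const char *ioreq_space(const struct ioreq *req)
{
    switch (req->type) {
    case IOREQ_TYPE_PIO:  return "pio";
    case IOREQ_TYPE_COPY: return "mmio";
    default:              return "unknown";
    }
}
```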
> I don't know if there is an easy way to test it but it would be very
> useful for this conversation.

This is basically configured by the machine itself. See create_pcie() in hw/arm/virt.c. So the host controller is basically unaware that an MMIO region will be used instead of a PIO region.

>> Below are my thoughts. On one hand, I agree that the emulator shouldn't
>> contain any arch-specific code (yes, it is hypervisor-specific, but it
>> should be arch-agnostic if possible), so the PIO case should be handled.
>> On the other hand, I tend to think that it might be possible to skip
>> PIO handling for the current patch series (leave it x86-specific for
>> now, as we do with handle_realmode_completion()). I think nothing will
>> prevent us from adding PIO handling later on if there is a real need
>> (use case) for it. Please correct me if I am wrong. I would be
>> absolutely OK with either option. What do you think?

> I agree that PIO handling is not the most critical thing right now,
> given that we have quite a few other important TODOs in the series. I'd
> be fine reviewing another version of the series with this issue still
> pending.

For Arm64, the main user will be PCI, so this can be delayed until we add support for vPCI.

> Of course, PIO needs to be handled. The key to me is that QEMU (or
> another emulator) should *not* emulate in/out instructions on ARM.

I don't think anyone here suggested that we would emulate in/out instructions on Arm. There are actually no such instructions on Arm.

> PIO ioreq requests should not be satisfied by using address_space_io
> directly (the PIO address space that requires special instructions to
> access it). In QEMU, the PIO reads/writes should be done via
> address_space_memory (the normal memory-mapped address space).
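To make the point about create_pcie() concrete: on QEMU's Arm virt machine the legacy PCI I/O space is just another MMIO window, so "accessing a PIO address" reduces to adding the window's base address. The sketch below uses a hypothetical window base and size (the real values live in the virt machine's memmap, under VIRT_PCIE_PIO, in hw/arm/virt.c); it is an illustration, not QEMU code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical PCI I/O window. On the Arm virt machine the real base and
 * size are machine-specific constants in hw/arm/virt.c (VIRT_PCIE_PIO). */
#define PIO_WINDOW_BASE 0x3eff0000ULL
#define PIO_WINDOW_SIZE 0x10000ULL

/* Translate a legacy I/O-port offset into the CPU physical address of the
 * memory-mapped window backing it. Returns 0 for out-of-range ports
 * (0 never falls inside the window with the base chosen above). */
static uint64_t pio_to_mmio(uint64_t port)
{
    if (port >= PIO_WINDOW_SIZE)
        return 0;
    return PIO_WINDOW_BASE + port;
}
```

Whether this base is added by Xen before forwarding the request, or by the emulator on receipt, is essentially the difference between the two approaches discussed next in the thread.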
> So either of the following approaches should be OK:
>
> 1) Xen sends out PIO addresses as memory-mapped addresses, and QEMU
>    simply reads/writes to them.
> 2) Xen sends out PIO addresses as address_space_io; QEMU finds the
>    mapping to address_space_memory, then reads/writes on
>    address_space_memory.
>
> From an interface and implementation perspective, 1) means that
> IOREQ_TYPE_PIO is unused on ARM, while 2) means that IOREQ_TYPE_PIO is
> still used as part of the ioreq interface, even if QEMU doesn't directly
> operate on those addresses. My preference is 1) because it leads to a
> simpler solution.

"Simpler" is actually very subjective :). So maybe you can clarify some of my concerns with this approach. One part that has barely been discussed is the configuration part. The discussion below is based on having the virtual PCI hostbridges implemented in Xen.

In the case of an MMIO BAR, the emulator doesn't need to know where the aperture is in advance. This is because the BAR will contain an absolute MMIO address, so the emulator can configure the trap correctly.

In the case of a PIO BAR, from my understanding, the BAR will contain a relative offset from the base of the PIO aperture. So the emulator needs to know the base address of the PIO aperture. How do you plan to pass that information to the emulator? What about the case where there are multiple hostbridges?

Furthermore, most of the discussion has been focused on a device model that provides emulation for all of your devices (e.g. QEMU). However, I think this is going to be less common than a device model that emulates a single device (e.g. DEMU); the latter fits better in the disaggregation model.

An emulator for a single PCI device is basically the same as a real PCI device. Do you agree with that? The HW engineer designing the PCI device doesn't need to know about the architecture; they just need to understand the interface with the hostbridge. The hostbridge will then take care of the differences between architectures.
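The asymmetry between MMIO and PIO BARs described above can be seen directly in how a BAR is decoded. Per the PCI specification, bit 0 distinguishes an I/O BAR from a memory BAR: a memory BAR yields an absolute address the emulator can trap on, while an I/O BAR yields an address in I/O space, which on Arm is only meaningful relative to some hostbridge's PIO aperture. A minimal sketch (`aperture_base` is a hypothetical input, not an existing Xen/QEMU interface):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Bit 0 of a PCI BAR: 1 = I/O space, 0 = memory space. */
static bool bar_is_io(uint32_t bar)
{
    return bar & 1u;
}

/* I/O BAR: bits 31:2 hold the address; bit 1 is reserved. */
static uint32_t bar_io_offset(uint32_t bar)
{
    return bar & ~0x3u;
}

/* Memory BAR: bits 31:4 hold the address; bits 3:1 encode the type and
 * prefetchability. */
static uint32_t bar_mem_base(uint32_t bar)
{
    return bar & ~0xfu;
}

/* On Arm, turning an I/O BAR into a trappable CPU address requires the
 * hostbridge's PIO aperture base -- the piece of configuration whose
 * delivery to the emulator is being questioned above. */
static uint64_t io_bar_to_cpu_addr(uint32_t bar, uint64_t aperture_base)
{
    return aperture_base + bar_io_offset(bar);
}
```

A memory BAR needs no such extra input, which is exactly why the MMIO case was uncontroversial.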
A developer should really be able to do the same with the emulator, i.e. write it for x86 and then just recompile it for Arm. With your approach, he/she would have to understand how the architecture works.

I still don't quite understand why we are trying to differ here. Why would our hostbridge implementation not abstract the same way a real one does? Can you clarify? Maybe the problem is just a naming issue?

Cheers,

-- 
Julien Grall