
[Xen-devel] Why gpa instead of mfn is directly used in emulated nic



Hi, all:
I want to learn how emulated NICs work in Xen, so I booted a DomU with an emulated rtl8139 NIC, pinged the host from the DomU, captured the trace info with the xentrace tool, and then checked the qemu-dm log and the trace info analyzed by the xenalyze tool. I have enabled debug output in rtl8139.c and added debug info to qemu-dm where it receives an ioreq. The host runs Xen 4.4.1 and has EPT enabled.

If I understand right, the TX procedure from DomU to Dom0 is as follows:
1. The RTL8139 driver in the guest kernel writes the Guest Physical Address of the data to be sent into the corresponding RTL8139 registers.
2. The register address space was marked "special" in hvmloader, so the DomU exits with the EPT_VIOLATION exit reason.
3. The hypervisor handles the exit in ept_handle_violation() and then calls hvm_hap_nested_page_fault(gfn). In the latter function, the hypervisor looks up the mfn for the gfn, determines that the gfn is emulated MMIO, and passes the fault to the MMIO handler.
4. The MMIO handler generates an IOREQ with the related info and pushes it onto the "shared memory page" between DomU and qemu-dm in Dom0, which is set up when the DomU boots via the hvm_init() function, and then notifies qemu-dm via an event channel that is also set up in hvm_init() (see the ioreq sketch right after this list).
5. qemu-dm picks up the IOREQ and calls the corresponding callbacks registered by rtl8139.c in qemu.
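
For reference, this is roughly what travels over that shared page. The layout below is recalled from xen/include/public/hvm/ioreq.h (a 4.4-era tree), so treat it as a sketch and check the exact header in your sources:

#include <stdint.h>

/* Sketch of the request carried in the shared page; field names recalled
 * from xen/include/public/hvm/ioreq.h -- verify against your tree. */
struct ioreq {
    uint64_t addr;          /* guest-physical address of the access (the MMIO register) */
    uint64_t data;          /* value written/read, or a guest paddr if data_is_ptr */
    uint32_t count;         /* repeat count for string instructions */
    uint32_t size;          /* access width in bytes */
    uint32_t vp_eport;      /* event channel used to notify the device model */
    uint16_t _pad0;
    uint8_t  state:4;       /* STATE_IOREQ_READY, STATE_IORESP_READY, ... */
    uint8_t  data_is_ptr:1; /* 0: data holds the value itself; 1: data is a guest paddr */
    uint8_t  dir:1;         /* IOREQ_WRITE (0) or IOREQ_READ (1) */
    uint8_t  df:1;
    uint8_t  _pad1:1;
    uint8_t  type;          /* IOREQ_TYPE_PIO (port I/O) or IOREQ_TYPE_COPY (MMIO) */
};
typedef struct ioreq ioreq_t;

/* The "shared memory page" holds one ioreq slot per vcpu, while the separate
 * "buffered io page" (the subject of question 1 below) is, as I understand it,
 * a ring of smaller records used for writes that need no synchronous reply. */
struct shared_iopage {
    struct ioreq vcpu_ioreq[1];   /* indexed by vcpu id */
};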

However, from qemu-dm's log and the trace info, I have two questions:
1. In step 4 above, is the ioreq passed through the "shared memory page" or the "buffered io page"? Both of them are initialized in the hvm_init function.
2. From the trace info and qemu-dm's log, it seems that it is the GPA (Guest Physical Address) rather than the MFN that ends up in the data field of the IOREQ received by qemu-dm:
	a. EPT_VIOLATION related records in xentrace:
		]  1.744702725     -x-- d3v1 vmexit exit_reason EPT_VIOLATION eip ffffffffa01624ae
   		   1.744702725     -x-- d3v1 npf gpa f2051020 q 182 mfn ffffffffffffffff t 4
		]  1.744702725     -x-- d3v1 mmio_assist w gpa f2051020 data d4e6c400
   		   1.744706888     -x-- d3v1 runstate_change d0v0 blocked->runnable
	As shown above, the value of data is "d4e6c400". I think this is the data representing the txbuf address written by DomU, so it should be a GPA of DomU. From the code I find that this data is the "ram_gpa" argument of xen/arch/x86/hvm/emulate.c:hvmemul_do_io().

    b. Related ioreq received by qemu-dm and handled by RTL8139.c in Dom0:
		xen: I/O request: 1, ptr: 0, port: f2051020, data: d4e6c400, count: 1, size: 4
		RTL8139: TxAddr write offset=0x0 val=0xd4e6c400
	As shown above, the data field of the ioreq received by qemu-dm is also "0xd4e6c400", which is a GPA of DomU.

	c. RTL8139.c does the transmit work:
		RTL8139: C+ TxPoll write(b) val=0x40
		RTL8139: C+ TxPoll normal priority transmission
		RTL8139: +++ C+ mode reading TX descriptor 0 from host memory at 00000000 d4e6c400 = 0xd4e6c400
		RTL8139: +++ C+ mode TX descriptor 0 b000005a 00000000 ead20802 00000000
		RTL8139: +++ C+ Tx mode : transmitting from descriptor 0
		RTL8139: +++ C+ Tx mode : descriptor 0 is first segment descriptor
		RTL8139: +++ C+ mode transmission buffer allocated space 65536
		RTL8139: +++ C+ mode transmit reading 90 bytes from host memory at ead20802 to offset 0
		RTL8139: +++ C+ Tx mode : descriptor 0 is last segment descriptor
		RTL8139: +++ C+ mode transmitting 90 bytes packet
		RTL8139: +++ C+ mode reading TX descriptor 1 from host memory at 00000000 d4e6c400 = 0xd4e6c410
		RTL8139: +++ C+ mode TX descriptor 1 00000000 00000000 00000000 00000000
		RTL8139: C+ Tx mode : descriptor 1 is owned by host
		RTL8139: Set IRQ to 1 (0004 80ff)
	As shown above, RTL8139 reads the TX descriptor from HOST MEMORY at 0xd4e6c400, which is a GPA of DomU (see the sketch right after this item for how I picture the request reaching rtl8139.c).
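
To check that I am reading (a)-(c) correctly, here is a minimal, self-contained sketch of my mental model of that path: the hypervisor filling an ioreq for the 4-byte TxAddr write and the device model handing the value to the NIC's register callback. The helper names build_mmio_write_ioreq(), dispatch_ioreq() and nic_mmio_write() are mine for illustration only, not Xen's or qemu-dm's; the IOREQ_* constants are recalled from the public header.

#include <stdint.h>
#include <stdio.h>

/* Constants as recalled from xen/include/public/hvm/ioreq.h */
#define IOREQ_WRITE      0
#define IOREQ_READ       1
#define IOREQ_TYPE_PIO   0   /* port I/O */
#define IOREQ_TYPE_COPY  1   /* MMIO */

/* ioreq_t as sketched earlier in this mail, repeated so this compiles standalone */
typedef struct ioreq {
    uint64_t addr, data;
    uint32_t count, size, vp_eport;
    uint16_t _pad0;
    uint8_t  state:4, data_is_ptr:1, dir:1, df:1, _pad1:1;
    uint8_t  type;
} ioreq_t;

/* Hypervisor side (cf. xen/arch/x86/hvm/emulate.c:hvmemul_do_io): the value the
 * guest wrote to the register goes straight into req->data, and data_is_ptr
 * stays 0 for a plain register write -- which matches the data value
 * "d4e6c400" in the trace above. */
static void build_mmio_write_ioreq(ioreq_t *req, uint64_t reg_gpa,
                                   uint64_t val, uint32_t size)
{
    req->type        = IOREQ_TYPE_COPY;
    req->addr        = reg_gpa;   /* 0xf2051020: the emulated TxAddr register */
    req->data        = val;       /* 0xd4e6c400: still a DomU guest-physical address */
    req->data_is_ptr = 0;
    req->dir         = IOREQ_WRITE;
    req->size        = size;
    req->count       = 1;
}

/* Stand-in for the MMIO write callback rtl8139.c registers (name is mine). */
static void nic_mmio_write(uint64_t addr, uint64_t val, uint32_t size)
{
    printf("NIC register write: addr=0x%llx val=0x%llx size=%u\n",
           (unsigned long long)addr, (unsigned long long)val, size);
}

/* Device-model side: hand the request to the registered callback. */
static void dispatch_ioreq(ioreq_t *req)
{
    if (req->type == IOREQ_TYPE_COPY && req->dir == IOREQ_WRITE &&
        !req->data_is_ptr)
        nic_mmio_write(req->addr, req->data, req->size);
    /* reads, data_is_ptr != 0 and port I/O cases omitted */
}

int main(void)
{
    ioreq_t req = { 0 };
    build_mmio_write_ioreq(&req, 0xf2051020ULL, 0xd4e6c400ULL, 4);
    dispatch_ioreq(&req);  /* the callback sees the same value: a DomU GPA */
    return 0;
}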

I think qemu-dm/rtl8139 should read/write data at the "MFN" address in host memory instead of the GPA, yet I do not find any hypercall from Dom0 that "translates" the gpa to an mfn in the subsequent xentrace info. Is my understanding wrong?
I would really appreciate your help.

----------
Best Regards




 

