Re: [Xen-devel] Nouveau on dom0
On Sat, Mar 6, 2010 at 1:46 PM, Arvind R <arvino55@xxxxxxxxx> wrote:
> On Sat, Mar 6, 2010 at 1:53 AM, Konrad Rzeszutek Wilk
> <konrad.wilk@xxxxxxxxxx> wrote:
>> On Fri, Mar 05, 2010 at 01:16:13PM +0530, Arvind R wrote:
>>> On Thu, Mar 4, 2010 at 11:55 PM, Konrad Rzeszutek Wilk
>>> <konrad.wilk@xxxxxxxxxx> wrote:
>>> > On Thu, Mar 04, 2010 at 02:47:58PM +0530, Arvind R wrote:
>>> >> On Wed, Mar 3, 2010 at 11:43 PM, Konrad Rzeszutek Wilk
>>> >> <konrad.wilk@xxxxxxxxxx> wrote:
>>
>> Yeah... Can you also instrument the code to print the PFN? The code goes
>> through insert_pfn->pfn_pte, which calls xen_make_pte, which ends up
>> doing pte_pfn_to_mfn. That routine does a pfn_to_mfn, which does a
>> get_phys_to_machine(pfn). The last routine looks up the PFN->MFN lookup
>> table and finds an MFN that corresponds to this PFN. Since the memory
>> was allocated from ... well, this is the big question.

It is allocated by ttm_tt_page_alloc and is not backed by video memory.
The nouveau folks have just added a patch that disables the pushbuf in
video memory.

>> Is the memory allocated from normal kernel space, or is it really backed
>> by the video card? In your previous e-mails you mentioned that is_iomem
>> is set to zero, which implies that the memory for these functions is NOT
>> video-memory backed.

Right. See the continuation below.

>> The VM_IO is OK if the memory that is being referenced is the video
>> driver's memory. _BUT_ if the memory is being allocated through
>> alloc_page (ttm_tt_alloc_page) or kmalloc, then this will cause us
>> headaches. You might want to check in ttm_bo_vm_fault what the
>> vma->vm_flags are and whether VM_IO is set.
>>
>> (FYI, look at
>> http://git.kernel.org/?p=linux/kernel/git/konrad/xen.git;a=commit;h=e84db8b7136d1b4a393dbd982201d0c5a3794333)
>
> How do you remember these refs?! Sadly, VM_IO is set. Tried not setting
> it (in ttm_bo_map) - works on a bare boot, but crashes (very) hard on
> Xen. Tried setting it conditional on io_mem; same result. No logs even,
> so I don't know what happened.
>
>> Though I am not sure if ttm_bo_mmap is used by the nvidia driver.
>
> You mean nouveau? Only for accelerated graphics.
>
>> Attached is a re-write of the debug patch I sent earlier. I compile
>> tested it but haven't yet run it (just doing that now).
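(As an aside, a minimal sketch of the kind of check being discussed - not the
attached patch itself - might look like the following. The helper name is
made up; vma->vm_flags/VM_IO, pfn_to_mfn() and get_phys_to_machine() are the
pieces named above, and this assumes a pv-ops kernel where they are visible
to the caller.)

    #include <linux/kernel.h>
    #include <linux/mm.h>
    #include <asm/xen/page.h>

    /*
     * Hypothetical debug helper one could call from ttm_bo_vm_fault():
     * dump whether the VMA is marked VM_IO and what the Xen p2m table
     * says about the page TTM is about to insert.
     */
    static void ttm_debug_dump_pfn(struct vm_area_struct *vma,
                                   struct page *page)
    {
            unsigned long pfn = page_to_pfn(page);

            printk(KERN_DEBUG
                   "[TTM] vm_flags=%#lx (VM_IO %s) pfn=%#lx mfn=%#lx\n",
                   vma->vm_flags,
                   (vma->vm_flags & VM_IO) ? "set" : "clear",
                   pfn, get_phys_to_machine(pfn));
    }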
>
> Output: (snipped/cut/pasted for easier association)
>
> Trace of Pushbuf Memory Access, Bare-BOOT:
> X: OUT_RING: Enter: chan=0x8170a0, id=2, data=0x48000, chan->cur=0x7f0aa3594054
> kernel: [TTM] FAULTing-in address=0x7f0aa3594000, bo->buffer_start=0x0
> kernel: [BEFORE]PFN: 0x7513f PTE: 0x750001e3 (val:750001e3): [ RW PSE GLB x ] [2M]
> kernel: [ AFTER]PFN: 0x7513f PTE: 0x750001e3 (val:750001e3): [ RW PSE GLB x ] [2M]
> kernel: [BEFORE]PFN: 0x75144 PTE: 0x750001e3 (val:750001e3): [ RW PSE GLB x ] [2M]
> kernel: [ AFTER]PFN: 0x75144 PTE: 0x750001e3 (val:750001e3): [ RW PSE GLB x ] [2M]
> < --- and so on for 14 more pages --->
> X: OUT_RING: updated data
> X: OUT_RING: Exit
>
> Trace of Pushbuf Memory Access, Xen-BOOT:
> X: OUT_RING: Enter: chan=0x8170a0, id=2, data=0x44000, chan->cur=0x7f98838df000
> kernel: [TTM] FAULTing-in address=0x7f98838df000, bo->buffer_start=0x0
> kernel: [BEFORE]PFN: 0x16042 PTE: 0x10000068042067 (val:10000068042067): [mfn:426050->ffff880016042000 USR RW x ] [4K]
> kernel: [ AFTER]PFN: 0x16042 PTE: 0x10000068042067 (val:10000068042067): [mfn:426050->ffff880016042000 USR RW x ] [4K]
> kernel: [BEFORE]PFN: 0x16043 PTE: 0x10000068043067 (val:10000068043067): [mfn:426051->ffff880016043000 USR RW x ] [4K]
> kernel: [ AFTER]PFN: 0x16043 PTE: 0x10000068043067 (val:10000068043067): [mfn:426051->ffff880016043000 USR RW x ] [4K]
> < --- and so on for 14 more pages --->
> < --- and repeat fault --->
> kernel: [TTM] FAULTing-in address=0x7f98838df000, bo->buffer_start=0x0
>
> Note that the patch now effectively looks up page_address(address).
> Do you know what is happening? Is a solution feasible?
>
> Sequence of nouveau operation as I understand it:
> 1. prepare for a user pushbuf write by grabbing memory access rights
>    (excluding GPU access)
> 2. do the write
> 3. finish and release the grab
>
> The memory may or may not be on the video card. There is a vram_pushbuf
> module option which would probably complicate things more.

That option has now been made ineffective by the nouveau folks.

Continuation, after some code reading:

The pushbuf needs to be backed by some memory - any memory. The memory is
allocated after the mmap call (which sets up the VMA), by the fault handler.
The user-space program (X through libdrm-nouveau) issues ioctls that are
effectively sync_to_cpu/device and writes to the buffer - thereby invoking
the fault handler. After writing, ioctls are issued to sync, which ends up in
nouveau_pushbuf_flush. That function treats the pushbuf memory as (__force)
user-space memory and does a DRM_COPY_FROM_USER - which is in fact a
copy_from_user into a locally allocated (GFP_KERNEL) buffer - and then writes
it out to the video card (basically iomem writes). What gets written triggers
activity on the GPU (the parameters having been set and associated with some
other buffers), and the caller waits on an event queue for notification.

A strange way of doing things, but I guess this is the ground-work for the
future of GEM. All that is achieved is hiding buffer allocations from the
user - and this may be important, because if the fault handler installs only
one page and leaves the rest to be allocated by future faults, the video card
hangs with PFIFO errors! Hence the special value of SPECULATIVE_PRE_INSTALL -
16 pages.

I suppose that in a Xen boot the pages are installed at the wrong address.
The installed pages need NOT be contiguous - contiguous pages happen only at
the first X invocation on a bare boot.
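(To make the flush step described above concrete: as I read it, it boils down
to roughly the following. This is a simplified stand-in, not the actual
nouveau_pushbuf_flush(); the function and parameter names here are invented.)

    #include <linux/io.h>
    #include <linux/slab.h>
    #include <linux/types.h>
    #include <linux/uaccess.h>

    /*
     * Sketch of the described flush path: treat the pushbuf as user
     * memory, copy it into a GFP_KERNEL scratch buffer, then write the
     * commands out to the card with ordinary iomem writes.
     */
    static int pushbuf_flush_sketch(void __iomem *fifo,
                                    const void __user *pushbuf,
                                    size_t nr_dwords)
    {
            u32 *tmp;
            size_t i;

            tmp = kmalloc(nr_dwords * sizeof(u32), GFP_KERNEL);
            if (!tmp)
                    return -ENOMEM;

            /* DRM_COPY_FROM_USER is essentially copy_from_user(). */
            if (copy_from_user(tmp, pushbuf, nr_dwords * sizeof(u32))) {
                    kfree(tmp);
                    return -EFAULT;
            }

            /* Push the commands out through the iomem mapping. */
            for (i = 0; i < nr_dwords; i++)
                    iowrite32(tmp[i], fifo + i * sizeof(u32));

            kfree(tmp);
            return 0;
    }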
So choosing a different allocator in the fault handler is possible - the
problem is in freeing that allocation afterwards.
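(A rough illustration of where the freeing problem comes in, with purely
hypothetical names - none of this is nouveau or TTM code: if the fault
handler allocated its own pages, something would have to remember them so a
teardown path can give them back later.)

    #include <linux/gfp.h>
    #include <linux/mm.h>

    /* Hypothetical per-object bookkeeping for fault-time allocations. */
    struct pushbuf_backing {
            struct page **pages;    /* filled in lazily by the fault handler */
            unsigned long npages;
    };

    /* Called from the fault handler: allocate (once) the page for 'idx'. */
    static struct page *pushbuf_fault_page(struct pushbuf_backing *pb,
                                           unsigned long idx)
    {
            if (!pb->pages[idx])
                    pb->pages[idx] = alloc_page(GFP_HIGHUSER);
            return pb->pages[idx];
    }

    /* The part that is easy to get wrong: someone must free them later. */
    static void pushbuf_backing_free(struct pushbuf_backing *pb)
    {
            unsigned long i;

            for (i = 0; i < pb->npages; i++)
                    if (pb->pages[i])
                            __free_page(pb->pages[i]);
    }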
Sorry for the messed-up postings.

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel