
Re: [Xen-devel] Create an iSCSI DomU with disks in another DomU running on the same Dom0



On Wed, Jan 09, 2013 at 08:23:45PM +0100, Roger Pau Monné wrote:
> On 02/01/13 22:36, Konrad Rzeszutek Wilk wrote:
> >>> I think we are just swizzling the PFNs with a different MFN when you
> >>> do the domU -> domX transfer, using two ring protocols. Weird, though, as the
> >>> m2p code has WARN_ON(PagePrivate(..)) checks to catch this sort of
> >>> thing.
> >>>
> >>> What happens if the dom0/domU are all 3.8 with the persistent grant
> >>> patches?
> >>
> >> Sorry for the delay, the same error happens when Dom0/DomU is using a
> >> persistent grants enabled kernel, although I had to backport the
> >> persistent grants patch to 3.2, because I was unable to get iSCSI
> >> Enterprise Target dkms working with 3.8. I'm also seeing these messages
> >> in the DomU that's running the iSCSI target:
> >>
> >> [  511.338845] net_ratelimit: 36 callbacks suppressed
> >> [  511.338851] net eth0: rx->offset: 0, size: 4294967295
> > 
> > -1 ?! I saw this somewhere with 9000 MTUs.
> > 
> >> [  512.288282] net eth0: rx->offset: 0, size: 4294967295
> >> [  512.525639] net eth0: rx->offset: 0, size: 4294967295
> >> [  512.800729] net eth0: rx->offset: 0, size: 4294967295
> > 
> > But wow. It is just all over.
> > 
> > Could you instrument the M2P code to print out the PFN/MFN
> > values as they are being altered (along with __builtin_func(1) to
> > get an idea of who is doing it)? Perhaps that will shed light on whether
> > my theory (that we are overwriting the MFNs) is really what is happening.
> > It does not help that it ends up using multicalls - so the updates
> > might be done in batches, meaning there are multiple
> > MFN updates. Perhaps the multicalls have two or more changes to the
> > same MFN?
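[Editorial note: a minimal userspace simulation of the failure mode suspected above - one batch of M2P updates containing two entries that hit the same MFN, so the second silently clobbers the first. All names (`m2p`, `apply_batch`, `m2p_update`) are illustrative, not the actual Linux/Xen implementation; this only sketches what instrumenting the update path might reveal.]

```c
/* Hypothetical simulation of the suspected M2P clobbering: a multicall
 * batch carries two updates to the same MFN, and an instrumented apply
 * path warns when an existing entry is rewritten to a different PFN.
 * Names are illustrative, not the in-kernel m2p code. */
#include <stdio.h>

#define NR_FRAMES 16

static unsigned long m2p[NR_FRAMES];   /* toy machine-to-physical table */

struct m2p_update { unsigned long mfn, pfn; };

/* Apply a batch of updates, printing a warning whenever an entry is
 * overwritten with a different PFN; returns the number of clobbers. */
static int apply_batch(const struct m2p_update *b, int n)
{
    int clobbers = 0;
    for (int i = 0; i < n; i++) {
        if (m2p[b[i].mfn] != 0 && m2p[b[i].mfn] != b[i].pfn) {
            printf("WARN: mfn %lu: pfn %lu overwritten by pfn %lu\n",
                   b[i].mfn, m2p[b[i].mfn], b[i].pfn);
            clobbers++;
        }
        m2p[b[i].mfn] = b[i].pfn;
    }
    return clobbers;
}
```

With a batch such as `{5,100}, {7,101}, {5,102}` the second update to MFN 5 triggers the warning, which is the kind of signal the instrumentation above is meant to surface.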
> 
> A little more info: I've found that we are passing FOREIGN_FRAMES in
> the skb fragments on netback. When we try to perform the grant copy
> operation using a foreign mfn as the source, we hit the error. Here is
> the stack trace of the addition of the bogus skb:
> 
> [  107.094109] Pid: 64, comm: kworker/u:5 Not tainted 3.7.0-rc3+ #8
> [  107.094114] Call Trace:
> [  107.094126]  [<ffffffff813ff099>] xen_netbk_queue_tx_skb+0x16b/0x1aa
> [  107.094135]  [<ffffffff81400088>] xenvif_start_xmit+0x7b/0x9e
> [  107.094143]  [<ffffffff81460fe7>] dev_hard_start_xmit+0x25e/0x3db
> [  107.094151]  [<ffffffff81478917>] sch_direct_xmit+0x6e/0x150
> [  107.094159]  [<ffffffff814612ce>] dev_queue_xmit+0x16a/0x360
> [  107.094168]  [<ffffffff814c2426>] br_dev_queue_push_xmit+0x5c/0x62
> [  107.094175]  [<ffffffff814c2536>] br_deliver+0x35/0x3f
> [  107.094182]  [<ffffffff814c12b8>] br_dev_xmit+0xd7/0xef
> [  107.094189]  [<ffffffff81460fe7>] dev_hard_start_xmit+0x25e/0x3db
> [  107.094197]  [<ffffffff814549b6>] ? __alloc_skb+0x8d/0x187
> [  107.094204]  [<ffffffff81461409>] dev_queue_xmit+0x2a5/0x360
> [  107.094212]  [<ffffffff81485991>] ip_finish_output2+0x25c/0x2b7
> [  107.094219]  [<ffffffff81485a62>] ip_finish_output+0x76/0x7b
> [  107.094226]  [<ffffffff81485aa1>] ip_output+0x3a/0x3c
> [  107.094235]  [<ffffffff81483367>] dst_output+0xf/0x11
> [  107.094242]  [<ffffffff81483558>] ip_local_out+0x5c/0x5e
> [  107.094249]  [<ffffffff81485560>] ip_queue_xmit+0x2ce/0x2fc
> [  107.094256]  [<ffffffff814969b4>] tcp_transmit_skb+0x746/0x787
> [  107.094264]  [<ffffffff81498e73>] tcp_write_xmit+0x837/0x949
> [  107.094273]  [<ffffffff810f1fea>] ? virt_to_head_page+0x9/0x2c
> [  107.094281]  [<ffffffff810f21aa>] ? ksize+0x1a/0x24
> [  107.094288]  [<ffffffff814549ca>] ? __alloc_skb+0xa1/0x187
> [  107.094295]  [<ffffffff81499212>] __tcp_push_pending_frames+0x2c/0x59
> [  107.094302]  [<ffffffff8148af1a>] tcp_push+0x87/0x89
> [  107.094309]  [<ffffffff8148cd26>] tcp_sendpage+0x448/0x480
> [  107.094317]  [<ffffffff814aacc8>] inet_sendpage+0xa0/0xb5
> [  107.094327]  [<ffffffff81329aac>] iscsi_sw_tcp_pdu_xmit+0xa2/0x236
> [  107.094335]  [<ffffffff81328188>] iscsi_tcp_task_xmit+0x34/0x236
> [  107.094345]  [<ffffffff8100d8b5>] ? __spin_time_accum+0x17/0x2e
> [  107.094352]  [<ffffffff8100daf3>] ? __xen_spin_lock+0xb7/0xcd
> [  107.094360]  [<ffffffff8132434a>] iscsi_xmit_task+0x52/0x94
> [  107.094367]  [<ffffffff81324ea2>] iscsi_xmitworker+0x1c2/0x2b9
> [  107.094375]  [<ffffffff81324ce0>] ? iscsi_prep_scsi_cmd_pdu+0x604/0x604
> [  107.094384]  [<ffffffff81052bbe>] process_one_work+0x20b/0x2f9
> [  107.094391]  [<ffffffff81052e17>] worker_thread+0x16b/0x272
> [  107.094398]  [<ffffffff81052cac>] ? process_one_work+0x2f9/0x2f9
> [  107.094406]  [<ffffffff81056c60>] kthread+0xb0/0xb8
> [  107.094414]  [<ffffffff81056bb0>] ? kthread_freezable_should_stop+0x5b/0x5b
> [  107.094422]  [<ffffffff8151f4bc>] ret_from_fork+0x7c/0xb0
> [  107.094430]  [<ffffffff81056bb0>] ? kthread_freezable_should_stop+0x5b/0x5b
> 
> I will try to find out who is setting those skb frags. Do you have any
> idea, Konrad?
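[Editorial note: the check being discussed - walking an skb's fragments and spotting pages backed by foreign MFNs - can be sketched as below. The `PG_foreign` flag and all struct names are hypothetical stand-ins, not the real `struct page`/`skb_frag_t` definitions; a foreign page is one whose MFN belongs to another domain and therefore cannot be used as the source of a local grant copy.]

```c
/* Illustrative sketch (not the in-tree netback code): count how many
 * fragments of a packet are backed by foreign pages. A grant-copy
 * transmit path would have to treat such fragments specially (e.g.
 * bounce or grant-map them) rather than copying from the raw MFN. */
#include <stddef.h>

#define PG_foreign (1u << 0)           /* hypothetical page flag */

struct page { unsigned int flags; };
struct frag { struct page *page; size_t size; };

static int page_is_foreign(const struct page *p)
{
    return (p->flags & PG_foreign) != 0;
}

/* Returns the number of fragments whose backing page is foreign. */
static int count_foreign_frags(const struct frag *frags, int nr)
{
    int n = 0;
    for (int i = 0; i < nr; i++)
        if (page_is_foreign(frags[i].page))
            n++;
    return n;
}
```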

m2p_add_override.
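[Editorial note: a rough model of what an m2p override pair does, under assumed names - when a foreign grant is mapped, the local PFN is pointed at the foreign MFN, the old mapping is shadowed, and the M2P table is taught to resolve the foreign MFN back to the local PFN; removing the override restores the original state. This is a conceptual sketch only, not the actual `m2p_add_override`/`m2p_remove_override` implementation.]

```c
/* Conceptual simulation of an M2P override. Tables and helper names
 * are illustrative stand-ins for the real kernel structures. */
#define NR 16

static unsigned long p2m[NR];          /* pfn -> mfn */
static unsigned long m2p[NR];          /* mfn -> pfn */
static unsigned long saved_mfn[NR];    /* per-pfn shadow of the old MFN */

static void m2p_add_override_sim(unsigned long pfn, unsigned long foreign_mfn)
{
    saved_mfn[pfn] = p2m[pfn];         /* remember the original MFN */
    p2m[pfn] = foreign_mfn;            /* point the PFN at the foreign MFN */
    m2p[foreign_mfn] = pfn;            /* let M2P lookups find it again */
}

static void m2p_remove_override_sim(unsigned long pfn)
{
    unsigned long foreign_mfn = p2m[pfn];
    m2p[foreign_mfn] = 0;              /* drop the override entry */
    p2m[pfn] = saved_mfn[pfn];         /* restore the original mapping */
}
```

The point of the answer above is that this override path is where netback can end up seeing a foreign MFN behind a seemingly local PFN, which matches the grant-copy failure in the trace.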



_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

