Re: [Xen-devel] help -- A question about 'XENFEAT_auto_translated_physmap':
On August 09, 2016 9:02 PM, <JBeulich@xxxxxxxx> wrote:
> >>> On 09.08.16 at 14:36, <xuquan8@xxxxxxxxxx> wrote:
> > Hi Jan,
> >
> > A question about 'XENFEAT_auto_translated_physmap':
> >
> > In the Linux code, in arch/x86/xen/mmu.c:
>
> I assume you know that I'm not a maintainer of the Linux code.
>
> > __xen_pgd_walk()
> > {
> > ....
> >
> > if (xen_feature(XENFEAT_auto_translated_physmap))
> > return 0;
> > ....
> > }
> >
> >
> >
> > Why, when xen_feature(XENFEAT_auto_translated_physmap) is true, does it
> > return directly?
> > If it did not return directly, would there be any potential risk?
>
> Well, the function is specifically there for operations (pinning/unpinning)
> which are required only for the not-auto-translated case.
> Why would anyone want to traverse a page table tree just to do nothing on
> each of the entries?
Jan, thank you!!
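That makes sense. Just to spell out my understanding of the early return in code
form -- this is only an illustrative sketch, not the real __xen_pgd_walk() from
mmu.c, and the callback signature is simplified:

#include <linux/mm_types.h>   /* struct mm_struct, struct page */
#include <asm/pgtable.h>      /* pgd_t */
#include <xen/features.h>     /* xen_feature(), XENFEAT_auto_translated_physmap */

/* Illustrative sketch only -- not the verbatim mmu.c code; 'func' stands in
 * for the real per-page callback (the pin/unpin work done on each page). */
static int pgd_walk_sketch(struct mm_struct *mm, pgd_t *pgd,
                           int (*func)(struct mm_struct *mm, struct page *page),
                           unsigned long limit)
{
        /*
         * Auto-translated guests: Xen handles the physical address
         * translation itself, so the guest's page-table pages are never
         * handed over to (and pinned by) the hypervisor.  The walk would
         * only visit entries to perform an operation that is not needed
         * here, hence the immediate return.
         */
        if (xen_feature(XENFEAT_auto_translated_physmap))
                return 0;

        /*
         * Non-auto-translated (PV) guests: the pin/unpin really has to be
         * applied to every page-table page below 'limit', so the full
         * pgd/pud/pmd/pte traversal is required.
         */
        /* ... traversal calling func() on each page-table page ... */
        return 0;
}

In other words, the early return just avoids traversing the whole tree only to
do nothing on each entry, as you said.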
I am struggling with a dom0 crash; the kernel is an old 3.0.x.
Now there is a crash in _pin_lock():
  [<ffffffff80023465>] _pin_lock+0x165/0x2a0   <---- *crash*
  unable to handle kernel paging request at ffff8b1021826000
static void _pin_lock(struct mm_struct *mm, int lock)
{
        /* (the start of the function body, including the block and the
         *  #if that match the extra closing brace and the #endif at the
         *  end, is not part of this paste) */
        pgd_t *pgd = mm->pgd;
        unsigned g;

        for (g = 0; g <= ((TASK_SIZE_MAX-1) / PGDIR_SIZE); g++, pgd++) {
                pud_t *pud;
                unsigned u;

                if (pgd_none(*pgd))
                        continue;
                pud = pud_offset(pgd, 0);
                for (u = 0; u < PTRS_PER_PUD; u++, pud++) {
                        pmd_t *pmd;
                        unsigned m;

                        if (pud_none(*pud))
                                continue;
                        pmd = pmd_offset(pud, 0);
                        for (m = 0; m < PTRS_PER_PMD; m++, pmd++) {
                                spinlock_t *ptl;

                                if (pmd_none(*pmd))          <---- *crash*
                                        continue;
                                ptl = pte_lockptr(0, pmd);
                                if (lock)
                                        spin_lock(ptl);
                                else
                                        spin_unlock(ptl);
                        }
                }
        }
        }
#endif
}
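For context on where it blows up: the faulting access is the *pmd read.
pmd_offset(pud, 0) only turns the frame number stored in the pud entry into the
kernel direct-map virtual address of the pmd page and indexes into it, so if
that pud entry is stale or junk, the first dereference faults with exactly this
kind of "unable to handle kernel paging request". A simplified sketch of the
native x86-64 helper (not the exact Xen-patched 3.0.x definition, which differs
in how machine frames are translated, but the idea is the same):

#include <asm/pgtable.h>   /* pud_t, pmd_t, pud_page_vaddr(), pmd_index() */

/* Simplified sketch of what pmd_offset() boils down to on x86-64
 * (not the exact Xen-patched 3.0.x definition). */
static inline pmd_t *pmd_offset_sketch(pud_t *pud, unsigned long address)
{
        /* pud_page_vaddr(): convert the frame number stored in the pud
         * entry into the kernel direct-map virtual address of the pmd
         * page it points to. */
        pmd_t *pmd_page = (pmd_t *)pud_page_vaddr(*pud);

        /* Index into that page; with address == 0 this is the first
         * entry, matching the pmd_offset(pud, 0) call in _pin_lock(). */
        return pmd_page + pmd_index(address);
}

The faulting address ffff8b1021826000 lies in the direct-map range, which would
be consistent with a pmd-page address computed from a pud entry that no longer
points at a mapped page.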
Quan
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel