RE: [Xen-ia64-devel] Some things to be considered for ptc.ga emulation
>From: Tristan Gingold [mailto:Tristan.Gingold@xxxxxxxx]
>Sent: 6 April 2006 15:28
>>
>> As we talked previously, the effect of purge may be override by another
>> write if both writing without lock protection.
>Sure but if we have an itc and ptc in progress at the same time, we don't
>know which one will success.

Yes, even on native hardware the behavior is unpredictable... Maybe it is up to software to ensure the ordering.

>+ again:
>	fault = vcpu_translate(current,address,is_data,0,&pteval,&itir,&iha);
>-	if (fault == IA64_NO_FAULT) {
>+	if (fault == IA64_NO_FAULT || fault == IA64_USE_TLB) {
>		pteval = translate_domain_pte(pteval,address,itir);
>
>		vcpu_itc_no_srlz(current,is_data?2:1,address,pteval,-1UL,(itir>>2)&0x3f);
>+		if (fault == IA64_USE_TLB && !current->arch.dtlb.pte.p) {

Sorry for my question, but I really can't convince myself that such code is immune from races. :-) The other side may clear the present bit of the current dtlb right after the above check passes; in that case the TC entry is still inserted into the TLB. Since the logic can't prevent all the cases, that check seems excessive here.

>-static inline int vcpu_match_tr_entry(TR_ENTRY *trp, UINT64 ifa, UINT64 rid)
>-{
>-	return trp->p && trp->rid == rid
>+static inline int vcpu_match_tr_entry_no_p(TR_ENTRY *trp, UINT64 ifa, UINT64 rid)
>+{
>+	return trp->rid == rid
>	&& ifa >= trp->vadr
>	&& ifa <= (trp->vadr + (1L << trp->ps) - 1);
>}

Why do you need the no_p version? In your patch nobody calls the no_p version directly.

>-	if (/* is_data && */ vcpu_match_tr_entry(trp,address,rid)) {
>-		if (vcpu->domain==dom0 && !in_tpa) *pteval = trp->page_flags;
>+	pte = trp->pte;
>+	if (/* is_data && */ pte.p
>+		&& vcpu_match_tr_entry_no_p(trp,address,rid)) {

The above change seems to make no difference compared with the original code.

>+void vhpt_flush_address_remote(int cpu,
>+	unsigned long vadr, unsigned long addr_range)
>+{
>+	while ((long)addr_range > 0) {
>+		/* Get the VHPT entry. */
>+		unsigned int off = ia64_thash(vadr) - VHPT_ADDR;
>+		volatile struct vhpt_lf_entry *v;
>+		v =__va(per_cpu(vhpt_paddr, cpu) + off);
>+		v->ti_tag = INVALID_TI_TAG;
>		addr_range -= PAGE_SIZE;
>		vadr += PAGE_SIZE;
>	}

Concerns here:
- The per-LP VHPT table is shared by all domains, and the entry sitting at that hash position may not belong to the target domain. Purging it blindly may end up purging an unrelated entry of another domain, so a check on the tag value is required (see the sketch below).
- Once you start checking the tag value, a read_seqlock-style retry is required here, since the target LP may be in the middle of modifying that very hash entry.
- When the hash TLB gains collision-chain support, this remote purge becomes even more dangerous, since there will be linked-list operations with no lock protection.

Last point: so far your patch only covers the vTLB and VHPT. The machine TLB purge is not handled yet, which is however the most direct place that needs to be covered. :-)
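To make the first concern concrete, here is a rough, untested sketch (my own, not part of your patch) of how a tag check could be folded into the loop above. It assumes ia64_ttag() on the purging CPU yields the same tag the target LP computed when it inserted the entry, which is itself questionable given that the per-LP VHPT is shared by all domains:

void vhpt_flush_address_remote(int cpu,
                               unsigned long vadr, unsigned long addr_range)
{
    while ((long)addr_range > 0) {
        /* Hash into the target LP's VHPT, as in the patch above. */
        unsigned int off = ia64_thash(vadr) - VHPT_ADDR;
        volatile struct vhpt_lf_entry *v =
            __va(per_cpu(vhpt_paddr, cpu) + off);

        /* Sketch: only purge when the entry really maps vadr; otherwise
         * we may invalidate an unrelated domain's entry that happens to
         * sit at the same hash position. */
        if (v->ti_tag == ia64_ttag(vadr))
            v->ti_tag = INVALID_TI_TAG;

        addr_range -= PAGE_SIZE;
        vadr += PAGE_SIZE;
    }
}

Even with such a check the second concern still stands: without a read_seqlock-style retry against the inserting path on the target LP, the compare and the store can interleave with a concurrent insert.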
Thanks,
Kevin

_______________________________________________
Xen-ia64-devel mailing list
Xen-ia64-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-ia64-devel