Re: [PATCH v3 19/22] x86emul: support TILELOADD{,T1} and TILESTORED
On 26.04.2021 09:12, Paul Durrant wrote:
> On 22/04/2021 16:11, Jan Beulich wrote:
>> On 22.04.2021 17:06, Jan Beulich wrote:
>>> On 22.04.2021 16:55, Jan Beulich wrote:
>>>> +        do {
>>>> +            /* Limit rows to just as many to cover the next one to access. */
>>>> +            cfg->start_row = i;
>>>> +            cfg->rows[modrm_reg] = i + 1;
>>>> +            write_tilecfg(cfg);
>>>> +
>>>> +            if ( vex.pfx != vex_f3 )
>>>> +                rc = ops->read(ea.mem.seg,
>>>> +                               truncate_ea(ea.mem.off + i * ea.val),
>>>> +                               row, cfg->colsb[modrm_reg], ctxt);
>>>> +
>>>> +            invoke_stub("", "", "=m" (dummy) : "a" (row));
>>>> +
>>>> +            if ( vex.pfx == vex_f3 )
>>>> +                rc = ops->write(ea.mem.seg,
>>>> +                                truncate_ea(ea.mem.off + i * ea.val),
>>>> +                                row, cfg->colsb[modrm_reg], ctxt);
>>>> +        } while ( rc == X86EMUL_OKAY && ++i < n );
>>>
>>> in principle tiles could have rows larger than 64 bytes without any
>>> separate CPUID feature flag qualifying this. struct hvm_mmio_cache,
>>> otoh, has a fixed-size 64-byte buffer right now. Therefore I'm
>>> wondering whether we'd want to switch to dynamically allocating that
>>> to the maximum of 64 bytes and the size of a tile row, just as a
>>> precautionary measure.
>>
>> Actually, as it occurred to me only after sending, enlarging tile size
>> would under almost all circumstances require a new XSTATE component,
>> which we'd need to enable first. I consider it less likely that they'd
>> permit a wider range of layouts without increasing tile size. But we
>> might still want to play safe.
>>
>
> I guess on-demand reallocation to a larger size would be fine. Certainly
> we want to be sure we don't overflow.

Okay, I've added a patch doing not just this, but (perhaps even more
importantly) also increasing struct hvmemul_cache's capacity on such
hardware.

Jan
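
For illustration only, not the actual patch: below is a minimal, self-contained
sketch of the on-demand reallocation idea discussed above. The names
(mmio_bounce, bounce_reserve, BOUNCE_MIN) are made up stand-ins for Xen's
struct hvm_mmio_cache handling; the only point carried over from the thread is
sizing the bounce buffer to the larger of 64 bytes and the width of the tile
row being accessed, and growing it lazily should a wider row ever show up.

/*
 * Hypothetical illustration of on-demand growth of a fixed 64-byte
 * bounce buffer when a tile row turns out to be wider. Names are NOT
 * Xen's; Xen coding style is used merely for familiarity.
 */
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define BOUNCE_MIN 64 /* today's fixed buffer size */

struct mmio_bounce {
    unsigned int space;   /* allocated buffer bytes */
    unsigned int size;    /* bytes currently in use */
    uint8_t buf[];        /* flexible array member holding the data */
};

/* Make sure the buffer can hold at least @bytes; (re)allocate if not. */
static struct mmio_bounce *bounce_reserve(struct mmio_bounce *b,
                                          unsigned int bytes)
{
    unsigned int want = bytes > BOUNCE_MIN ? bytes : BOUNCE_MIN;

    if ( b && b->space >= want )
        return b;

    b = realloc(b, sizeof(*b) + want);
    if ( !b )
        return NULL;

    b->space = want;
    b->size = 0; /* caller refills the row after (re)allocation */

    return b;
}

int main(void)
{
    /* e.g. cfg->colsb[] value for the accessed tile register */
    unsigned int row_bytes = 128;
    struct mmio_bounce *b = bounce_reserve(NULL, row_bytes);

    if ( !b )
        return 1;

    memset(b->buf, 0, row_bytes); /* stand-in for one row's worth of data */
    b->size = row_bytes;

    free(b);
    return 0;
}

The patch mentioned at the end of the mail additionally raises struct
hvmemul_cache's capacity on such hardware, which a standalone sketch like
this doesn't cover.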