Re: [RFC PATCH 00/30] Code tagging framework and applications
On Thu, Sep 1, 2022 at 12:15 PM Michal Hocko <mhocko@xxxxxxxx> wrote:
>
> On Thu 01-09-22 08:33:19, Suren Baghdasaryan wrote:
> > On Thu, Sep 1, 2022 at 12:18 AM Michal Hocko <mhocko@xxxxxxxx> wrote:
> [...]
> > > So I find Peter's question completely appropriate while your response to
> > > that not so much! Maybe ftrace is not the right tool for the intended
> > > job. Maybe there are other ways and it would be really great to show
> > > that those have been evaluated and they are not suitable for a), b) and
> > > c) reasons.
> >
> > That's fair.
> > For memory tracking I looked into using kmemleak and page_owner, which
> > can't match the required functionality at an overhead acceptable for
> > production and pre-production testing environments.
>
> Being more specific would be really helpful. Especially when your cover
> letter suggests that you rely on page_owner/memcg metadata as well to
> match allocations and their freeing parts.

kmemleak is known to be slow, and that is even documented [1], so I
hope I can skip that part. For page_owner to provide comparable
information we would have to capture the call stacks for all page
allocations, unlike our proposal, which allows doing that selectively
for specific call sites. I'll post the overhead numbers for call stack
capturing once I'm finished profiling the latest code, hopefully
sometime tomorrow, in the worst case after the long weekend.

> > traces + BPF I
> > haven't evaluated myself but heard from other members of my team who
> > tried using that in a production environment with poor results. I'll
> > try to get more specific information on that.
>
> That would be helpful as well.

Ack.

> > > E.g. Oscar has been working on extending page_ext to track the
> > > number of allocations for a specific calltrace [1]. Is this a 1:1
> > > replacement? No! But it can help in environments where page_ext can
> > > be enabled, and it is completely non-intrusive to the MM code.
> >
> > Thanks for pointing out this work. I'll need to review and maybe
> > profile it before making any claims.
> >
> > > If the page_ext overhead is not desirable/acceptable then I am sure
> > > there are other options. E.g. the kprobes/LivePatching framework
> > > can hook into functions and alter their behavior. So why not use
> > > that for data collection? Has this been evaluated at all?
> >
> > I'm not sure how I could hook into, say, alloc_pages() to find out
> > where it was called from without capturing the call stack (which
> > would introduce overhead at every allocation). Would love to discuss
> > this or other alternatives if they can be done with low enough
> > overhead.
>
> Yes, tracking back the call trace would be really needed. The question
> is whether this is really prohibitively expensive. How much overhead
> are we talking about? There is no free lunch here, really. You either
> have the overhead during runtime when the feature is used or on the
> source code level for all future development (with a maze of macros
> and wrappers).

Will post the overhead numbers soon. What I hear loud and clear is that
we need a kernel command-line kill switch that mitigates the overhead
of having this feature. That seems to be the main concern.

Thanks,
Suren.

[1] https://docs.kernel.org/dev-tools/kmemleak.html#limitations-and-drawbacks

>
> Thanks!
> --
> Michal Hocko
> SUSE Labs
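
To make the cost under discussion concrete, here is a minimal,
self-contained sketch of the kprobe approach Michal suggests: a module
that hooks a page-allocation entry point and captures a call stack on
every hit. This is not from the patch series, and the probed symbol
name is an assumption (it varies across kernel versions). The point is
that stack_trace_save() walks the stack on every single allocation,
which is exactly the per-allocation overhead being debated.

#include <linux/module.h>
#include <linux/kprobes.h>
#include <linux/stacktrace.h>

#define DEMO_STACK_DEPTH 16

/* Assumed symbol name; older kernels use __alloc_pages_nodemask. */
static struct kprobe demo_kp = {
	.symbol_name = "__alloc_pages",
};

static int demo_pre_handler(struct kprobe *p, struct pt_regs *regs)
{
	unsigned long entries[DEMO_STACK_DEPTH];

	/*
	 * This stack walk runs on every probed allocation. Real
	 * accounting would also have to hash the trace and bump a
	 * counter, adding further cost on top of the walk itself.
	 */
	stack_trace_save(entries, DEMO_STACK_DEPTH, 0);
	return 0;
}

static int __init demo_init(void)
{
	demo_kp.pre_handler = demo_pre_handler;
	return register_kprobe(&demo_kp);
}

static void __exit demo_exit(void)
{
	unregister_kprobe(&demo_kp);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");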
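
For contrast, the tagging idea at the heart of the series replaces
that stack walk with a statically allocated per-call-site tag, so
accounting becomes a counter increment. The sketch below is a
simplified illustration under made-up names (alloc_tag_demo,
alloc_pages_tagged), not the actual API of the posted patches.

#include <linux/atomic.h>
#include <linux/gfp.h>
#include <linux/mm.h>

/* One statically allocated tag per call site (simplified). */
struct alloc_tag_demo {
	const char *file;
	int line;
	atomic_long_t bytes;
};

/*
 * Wrapping the allocator in a macro means accounting is a single
 * counter increment; no call stack is captured at runtime. (order
 * is evaluated twice here; fine for a demo, not for real code.)
 */
#define alloc_pages_tagged(gfp, order) ({				\
	static struct alloc_tag_demo _tag = {				\
		.file = __FILE__,					\
		.line = __LINE__,					\
	};								\
	struct page *_page = alloc_pages(gfp, order);			\
	if (_page)							\
		atomic_long_add(PAGE_SIZE << (order), &_tag.bytes);	\
	_page;								\
})

The freeing side is the harder half: a free has to be matched back to
the tag of its allocating call site, which is where the cover letter's
reliance on page_owner/memcg-style metadata, noted by Michal above,
comes in.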
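
Finally, regarding the requested kill switch: one plausible shape for
it, assumed here rather than taken from the series, is a static key
flipped by a boot parameter, so that when profiling is off the hot
path costs only a patched-out branch. The parameter name
mem_profiling below is hypothetical.

#include <linux/jump_label.h>
#include <linux/init.h>
#include <linux/atomic.h>

static DEFINE_STATIC_KEY_FALSE(mem_profiling_key);

/* Hypothetical boot parameter, e.g. "mem_profiling=1". */
static int __init mem_profiling_setup(char *str)
{
	if (str && str[0] == '1')
		static_branch_enable(&mem_profiling_key);
	return 1;
}
__setup("mem_profiling=", mem_profiling_setup);

/* Hot-path accounting compiles to a no-op branch when disabled. */
static inline void account_alloc(atomic_long_t *counter, long bytes)
{
	if (static_branch_unlikely(&mem_profiling_key))
		atomic_long_add(bytes, counter);
}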