MetaCG Development Details Mini Series

This is the first post of a mini-series that I thought might be interesting to do. It will go a little into the details of our development model for the MetaCG library and the reasons behind it. The things we do here are obviously by no means “the” right thing to do for every project, but maybe the perspective on the different topics is relevant for you and your decision making for your own project. The series will consist of three parts.

  • Branching Model: In the first part we provide an overview of our branching model and why we chose it.
  • Development and Testing: The second part will go over our development work. This is probably the largest part of the mini-series and may be split up into multiple parts itself.
  • Release Management: In the third part we provide an overview of our release management, i.e., which releases we make, how they are prepared, and how they are finalized.

First, however, let’s go over some MetaCG history, as it is the reason for some of the decisions that were, and still are, taken.

MetaCG — History

MetaCG started (probably in late 2013) as a nameless mock tool for smart instrumentation tools and heuristics within our work on the InstRO tool. It was used to construct the call graph of a program from an unfiltered Score-P profile, i.e., a runtime profile recorded with the Score-P tool that contains all call edges of the original source code executed at runtime. Given this call graph, the tool would then evaluate the heuristics that were published in the paper “Calltree-Controlled Instrumentation for Low-Overhead Survey Measurements”.
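To make that concrete, here is a minimal sketch of how a call graph could be assembled from recorded caller/callee edges. This is purely illustrative: the edge list stands in for what one might extract from an unfiltered profile, and none of the types or function names here are part of MetaCG’s actual API.

```cpp
#include <iostream>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Hypothetical representation: the call graph as an adjacency map
// from a function name to the set of functions it calls.
using CallGraph = std::map<std::string, std::set<std::string>>;

// Build the graph from (caller, callee) pairs, as they could be
// extracted from a recorded runtime profile.
CallGraph buildCallGraph(const std::vector<std::pair<std::string, std::string>> &edges) {
  CallGraph cg;
  for (const auto &[caller, callee] : edges) {
    cg[caller].insert(callee);
    cg.try_emplace(callee); // ensure leaf functions also appear as nodes
  }
  return cg;
}

int main() {
  const std::vector<std::pair<std::string, std::string>> edges = {
      {"main", "compute"}, {"compute", "kernel"}, {"main", "report"}};
  const CallGraph cg = buildCallGraph(edges);
  for (const auto &[fn, callees] : cg) {
    std::cout << fn << " -> " << callees.size() << " callee(s)\n";
  }
  return 0;
}
```

An adjacency map is just one simple choice of representation; the point is how little structure such an early mock tool needs before heuristics can be evaluated on top of it.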

Then, starting in 2017 during my PhD, I evolved the code base of the nameless mock tool into what was then referred to as MetaCG. The code base was, however, still almost exclusively focused on use within the PIRA tool and our work on iterative performance instrumentation refinement, which we published in our paper on PIRA.

After our paper on “Automatic Instrumentation Refinement for Empirical Performance Modeling”, we decided to evolve MetaCG into a tool of its own that can be used more easily outside of the PIRA context. The idea behind this was to ease the development of our research tools should we want or need to pass information from the source level to the LLVM level. We wrote a paper about MetaCG and used the now-available whole-program call-graph information for an extended analysis in our paper “Towards compiler-aided correctness checking of adjoint MPI applications”.

Since then, we have worked on the MetaCG code base to separate it more and more from its initial use case within PIRA. It is now evolving into a set of libraries, plus tools that build on top of these libraries. One such tool is the (also evolving) analyzer used within PIRA; a more recently created one is the analyzer built as part of the CaPI project.

This evolution of the MetaCG code base has led to several significant changes and re-designs. Many of these changes were well motivated and necessary as we evolved from a special-purpose mock-up tool, to a prototype for one particular context, to a set of libraries for tool construction. What I learned in the process may also make it into an article on this website eventually.

As an important side note: coming from academia, the possibility of (a) working on paper ideas and (b) students working on parts of MetaCG for a thesis was and is very important. This will show in the branching model and other areas of the tool and libraries throughout the mini-series.
