A Comparative Study of Graph Matching Algorithms in Computer Vision


We want to keep this benchmark up to date so that it remains a valuable resource in the future. To help us with this, please contact us and submit your algorithm or dataset in the predefined format described below.

Algorithm and code requirements

  1. The algorithm must optimize graph matching problems in the Lawler form. Algorithms targeting only the Koopmans-Beckmann form cannot be considered.

  2. A mathematical description of the algorithm must be publicly available online. In addition to specialized venues, we recommend sharing the description on platforms like arXiv or Optimization Online.

  3. The solver must be open source, with the code preferably hosted on a suitable platform such as GitHub or GitLab.

  4. Building the solver from source should be automated, e.g., with a provided installation script.

  5. The solver should run on x86_64 hardware. We recommend testing it on the current LTS version of Ubuntu prior to submission.

  6. If the solver is multi-threaded, the number of threads should be configurable, including the option to run single-threaded.

  7. The solver must provide the required output in the JSON format described below. Iterative solvers should be able to do this every \(k\) iterations, for sufficiently small \(k\). This is necessary to build the performance profile of the solver in a single run.

Additional hardware requirements, such as uncommon SIMD instructions or GPU access, may prevent us from including the respective solver in the benchmark if we lack access to the required hardware.

Output Format Requirements

We require “JSON lines” output for automatic processing. This means that the solver should output one JSON object per output line, e.g., every \(k\) iterations. The JSON object should include information about the iteration number, the run time, the current value and the current bound.

The required JSON object structure is as follows:

{"iteration": 1, "time": 0.8, "value": 65.234, "bound": 71.364, "assignment": [...]}
{"iteration": 2, "time": 2.3, "value": 68.999, "bound": 70.554, "assignment": [...]}
{"iteration": 3, "time": 3.6, "value": 69.000, "bound": 69.000, "assignment": [...]}

The assignment field is a JSON list of integers. It contains an entry for each node \(u \in \mathcal V\) specifying the index of the label \(s \in \mathcal L\) with which it is matched. If a node \(u \in \mathcal V\) is unassigned, i.e., not matched to any label in \(\mathcal L\), the respective element in the assignment list is specified as null.
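For illustration, reporting in this format from an iterative solver might look like the following minimal Python sketch. The function name `report` and the surrounding loop are our own illustrative choices, not part of any required interface:

```python
import json
import time

def report(iteration, start_time, value, bound, assignment):
    """Print one JSON object per line, matching the structure above.
    Returns the emitted line so it can also be logged elsewhere."""
    line = json.dumps({
        "iteration": iteration,
        "time": round(time.perf_counter() - start_time, 3),
        "value": value,
        "bound": bound,
        "assignment": assignment,  # None entries are serialized as null
    })
    print(line, flush=True)
    return line

# Hypothetical usage inside an iterative solver's main loop:
k = 10  # reporting interval
start = time.perf_counter()
for it in range(1, 31):
    # ... one solver iteration would run here ...
    if it % k == 0:
        report(it, start, value=70.0 - 0.1 * it, bound=69.0,
               assignment=[0, 2, None, 1])
```

Flushing after each line ensures the output is available to the benchmark harness even if the solver is killed by a time limit.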

Should the solver be purely primal and not provide any lower bound, the bound should be omitted or specified as -inf.

Should the solver not output in the above format, a wrapper script for output transformation must be provided.
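Such a wrapper could be a short script along the following lines. The whitespace-separated input layout (`iter time value bound a1 a2 ...`, with `-` for unassigned nodes) is purely an assumption for the sketch; the parsing must be adapted to the solver's actual log format:

```python
import json
import sys

def convert_line(text_line):
    """Convert one record of a hypothetical plain-text solver log
    ('iter time value bound a1 a2 ...') into the required JSON-lines
    format. '-' marks an unassigned node and becomes null."""
    fields = text_line.split()
    iteration = int(fields[0])
    t, value, bound = (float(x) for x in fields[1:4])
    assignment = [None if a == "-" else int(a) for a in fields[4:]]
    return json.dumps({"iteration": iteration, "time": t,
                       "value": value, "bound": bound,
                       "assignment": assignment})

if __name__ == "__main__":
    for raw in sys.stdin:
        if raw.strip():
            print(convert_line(raw))
```

Run as a filter, e.g. `./solver instance.dd | python wrapper.py`.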

Dataset Format Requirements

We use the same data format as the one used in [Torresani13], [Hutschenreiter21] and our paper. The model description should be stored in the file ${instancename}.dd and must match the following format:

c comment line.
p <V> <L> <A> <E>       // V: # of nodes on the left (set V),
                        // L: # of nodes on the right (set L),
                        // A: # assignments (node-to-node),
                        // E: # edges (pairs of assignments)
a <id> <i> <s> <cost>   // specify assignment and its cost c_{is,is}
e <id1> <id2> <cost>    // specify edge and its cost c_{is,jl}+c_{jl,is}

i0 <i> {x} {y}          // optional: coordinate of a point i\in V in the left image
i1 <s> {x} {y}          // optional: coordinate of a point s\in L in the right image
n0 <i> <j>              // optional: specify that points <i> and <j> in
                        //   the left image are neighbors
n1 <s> <l>              // optional: specify that points <s> and <l> in
                        //   the right image are neighbors
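A minimal reader for this format could be sketched as follows. The helper name `parse_dd` is our own; for brevity the sketch skips the optional i0/i1/n0/n1 records:

```python
def parse_dd(path):
    """Minimal reader for the .dd model format described above.
    Returns the problem size (V, L, A, E), the unary assignment costs,
    and the pairwise edge costs; optional records are ignored."""
    size = None
    assignments = {}   # assignment id -> (i, s, cost)
    edges = {}         # (id1, id2) -> cost
    with open(path) as f:
        for line in f:
            fields = line.split()
            if not fields or fields[0] == "c":
                continue  # skip empty and comment lines
            if fields[0] == "p":
                size = tuple(int(x) for x in fields[1:5])
            elif fields[0] == "a":
                aid, i, s = (int(x) for x in fields[1:4])
                assignments[aid] = (i, s, float(fields[4]))
            elif fields[0] == "e":
                edges[(int(fields[1]), int(fields[2]))] = float(fields[3])
    return size, assignments, edges
```

Note that edge costs are stored symmetrized, i.e., the value on an `e` line is \(c_{is,jl}+c_{jl,is}\), so no further summation over the two orders is needed.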

The meaning of the sets \(\mathcal V\) and \(\mathcal L\) coincides with the one in our paper, see also its supplement for a more detailed data format specification.

If the optimal objective value is known for a dataset instance, it should be stored as a single floating point value in the file ${instancename}.opt.

If ground truth is known, it should be stored in the file ${instancename}.gt. The file must contain a single JSON array where each element is either an array of correct label indices, or null if no ground truth annotation is known for that node. For example, if a model contains four nodes and we know that for the first node label 3 is correct and for the third node either label 1 or 4 is correct, the ground truth file should be provided as follows:

[[3], null, [1, 4], null]

In other words, the format allows storing ground truth information that is ambiguous or partial. In the more common case where the ground truth labeling is complete and unique, each element of the ground truth sequence is an array containing exactly one element, as in the following example:

[[1], [2], [3], [4]]
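Evaluating a solver's assignment against such a ground truth file could be done as in the following sketch. The helper `matching_accuracy` is a hypothetical name, and the exact accuracy measure used by the benchmark may differ:

```python
import json

def matching_accuracy(assignment, ground_truth):
    """Fraction of annotated nodes whose assigned label is among the
    correct ones. `assignment` is a list of label indices (None for
    unassigned nodes) as in the solver output; `ground_truth` is a list
    of lists of correct labels (None where no annotation exists)."""
    correct = known = 0
    for label, gt_labels in zip(assignment, ground_truth):
        if gt_labels is None:
            continue  # no ground truth annotation for this node
        known += 1
        if label is not None and label in gt_labels:
            correct += 1
    return correct / known if known else float("nan")

# Hypothetical usage with the partial ground truth example above:
gt = json.loads("[[3], null, [1, 4], null]")
print(matching_accuracy([3, 0, 4, None], gt))
```

Nodes without annotation are excluded from the denominator, so partial ground truth files are handled naturally.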



L. Torresani, V. Kolmogorov, C. Rother. “A Dual Decomposition Approach to Feature Correspondence”. PAMI, 2013.


L. Hutschenreiter, S. Haller, L. Feineis, C. Rother, D. Kainmüller, B. Savchynskyy. “Fusion Moves for Graph Matching”. ICCV, 2021.