Best Practices for Backtrack Search Algorithms

Almost every algorithm I implement these days is a combinatorial search, which is a fancy way of saying a backtrack search to find combinatorial objects. A recursive algorithm determines a set of choices to make, selects one, and deepens the search. After all options are exhausted, the algorithm steps “up” a level of the search tree and makes a different choice.

Today, I want to discuss a few patterns I have developed to make such a backtracking search resistant to errors and to streamline development. These concepts are particularly helpful when using TreeSearch.

Manager/Data

When using TreeSearch, the branching algorithm is implemented as an extension of the `SearchManager` class. The methods to select a child, deepen the tree, prune a node, and move back up the tree are all implemented here. However, it is good policy to keep the data managed by the algorithm in a separate object. Thus, there is one object that manages the search tree, and another that manages the data.

This way, the manager class can perform translations from the TreeSearch methods into specific accessors, mutators, augmentations, and algorithms that are implemented within the data class. Also, it helps to keep “protected” or “private” data members out-of-reach from the recursive algorithm.

In-Place Data vs. Deep Copies

One way to make sure that changes deep in the search tree do not affect nodes higher in the tree is to create a deep copy of your data at every search node. However, most choices will only change a few values in a large table of data. By performing a complete copy, you guarantee work proportional to the entire data structure at every search node, even when only a few values changed.

A way to avoid this is to have a single data structure throughout the search. When changes are made, you can “remember” the changes you made and “undo” them when it is time to move up in the tree. If the changes are truly small compared to the full data structure, the memory required to store the changes is much smaller than the memory required to store multiple copies of the full structure. More on how to store these changes appears in the Snapshot/Rollback section below.

The one area where it is very valuable to store a full copy of the data is when that data is computationally difficult to produce. In particular, any symmetry data computed from `nauty`, such as orbit information, should be stored so that it is visible to each search node. The same goes for any linear programming relaxation solutions.

Data Segmentation

When performing a combinatorial search, there is some very basic data that “defines” the current object. For instance, there may be an adjacency matrix that stores the values 0, 1, or 2 depending on whether a pair is a non-edge, an edge, or “undecided” (see the definition of a trigraph). However, a lot of extra information may come along with that data, such as the degree sequence, the number of triangles, or any number of derived quantities. Each of these data values should be split into its own “chunk”. This does not mean that each requires its own object, but that you should group all of the necessary actions for each into its own collection of methods.

For instance, the degree sequence of the graph (so far), stored as an integer array, should have its own methods for the following:

• initialize the degree sequence array.
• deallocate the degree sequence array.
• access a specific value of the degree sequence.
• modify a specific value of the degree sequence.

With these types of methods for each data type, the object can be initialized by a single method that calls all of the type-specific initializers, and destroyed by a single method that calls all of the type-specific deallocators. Further, for each augmentation made by the search algorithm, the mutators and accessors are called one-by-one, protecting data types, bounds, and default values.
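As a sketch of this grouping, here is what the degree-sequence chunk might look like. The class and method names here are hypothetical illustrations, not part of TreeSearch:

```cpp
// Hypothetical sketch of a degree-sequence "chunk": all actions on the
// degree sequence grouped into one collection of methods.
class SearchData
{
    int* degrees;
    int num_verts;

public:
    // initialize the degree sequence array (with default values)
    void initDegrees(int n)
    {
        this->num_verts = n;
        this->degrees = new int[n];
        for ( int i = 0; i < n; i++ )
        {
            this->degrees[i] = 0;
        }
    }

    // deallocate the degree sequence array
    void freeDegrees()
    {
        delete[] this->degrees;
        this->degrees = 0;
    }

    // access a specific value, protecting bounds
    int getDegree(int i)
    {
        if ( i < 0 || i >= this->num_verts )
        {
            return -1;
        }
        return this->degrees[i];
    }

    // modify a specific value, protecting bounds
    bool setDegree(int i, int d)
    {
        if ( i < 0 || i >= this->num_verts )
        {
            return false;
        }
        this->degrees[i] = d;
        return true;
    }
};
```

A single top-level initializer can then call `initDegrees` alongside the other type-specific initializers, and a single top-level destructor can call `freeDegrees` alongside the other deallocators.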

Snapshot/Rollback

This concept is the most important in my development process. Before the search algorithm makes any changes to the object, it first stores a “snapshot” of the current situation. Then, after deepening the search, when it is time to turn around, the algorithm can “roll back” to this previous point. Again, we could make a deep copy of our object and simply copy everything back, but that would be inefficient.

Instead, we use stacks to record every change that is made to the object. For instance, when the $i,j$ position of the adjacency matrix changes from a 2 to a 1, we can push $i$ and $j$ onto a stack. When a snapshot is made, we record how many entries are in that stack (and place that value in its own stack). Then, when a rollback is initiated, we find the size the stack should be and pop pairs $i,j$ off, changing those positions back to 2 until the stack reaches that size.

Thus, if we look at the stack from bottom to top, we see a list of changes that are made, in the order we made them. Then our snapshots store the boundaries between these changes in order to identify the depth of the search tree when those changes were made. We simply “undo” the latest changes until reaching the depth we want.

Let’s consider a complete example of a data type that is segmented away from the rest of the object and has full snapshot/rollback capabilities.

First, in the class definition, we include all methods corresponding to the adjacency matrix together.

```
char* adjmat;
std::stack<int> adjacency_stack;
std::stack<int> snapshot_sizes;

char getAdjacency(int i, int j);
bool setAdjacency(int i, int j, char type);
```

In this data type, we have a single-dimensional array storing all array positions for the adjacency matrix. Thus, by using the methods `setAdjacency` and `getAdjacency` the rest of the class is ignorant of the ranking mechanism for finding the array position of a pair $ij$. Let us see these methods.

```
char GraphData::getAdjacency(int i, int j)
{
    int ind = indexOf(i, j);

    return this->adjmat[ind];
}

bool GraphData::setAdjacency(int i, int j, char type)
{
    int ind = indexOf(i, j);

    if ( this->adjmat[ind] == 2 )
    {
        this->adjmat[ind] = type;
        this->adjacency_stack.push(ind);
        return true;
    }

    return false;
}
```

These methods use a method `indexOf` that returns the co-lex rank of a pair. Observe that the `setAdjacency` method pushes this rank to a stack. This is helpful when performing the snapshot/rollback methods below.
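The post does not show `indexOf`, but the co-lex rank of a pair is a one-line formula. Here is a sketch, written as a free function for simplicity (the name and placement are up to you):

```cpp
// Hypothetical sketch: the co-lex rank of the pair {i,j} with i != j.
// Pairs are ordered {0,1}, {0,2}, {1,2}, {0,3}, {1,3}, {2,3}, ...
// so the rank of {i,j} with i < j is j*(j-1)/2 + i.
int indexOf(int i, int j)
{
    if ( i > j )
    {
        // normalize so that i < j
        int t = i;
        i = j;
        j = t;
    }

    return (j * (j - 1)) / 2 + i;
}
```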

```
void GraphData::snapshotAdjacencies()
{
    this->snapshot_sizes.push(this->adjacency_stack.size());
}

void GraphData::rollbackAdjacencies()
{
    size_t to_size = this->snapshot_sizes.top();
    this->snapshot_sizes.pop();

    while ( this->adjacency_stack.size() > to_size )
    {
        int ind = this->adjacency_stack.top();
        this->adjacency_stack.pop();

        this->adjmat[ind] = 2;
    }
}
```

Observe that the snapshot method only stores the size of `adjacency_stack`. That way, when the rollback method is called it can identify how big that stack should be after the rollback, and any changes that were made in that time will be un-done. Also note that this only works to undo changes from 2 to another value! If you want to allow other changes, you should also push the previous values to their own stack! (For instance, the snapshot/rollback methods for the degree sequence will require both the value of $i$ and the previous degree of the $i$th vertex to be pushed to a stack, or two stacks.)

Finally, I want to point out that the method that frees the adjacency matrix data should also empty the snapshot stacks. This is important, as the object may need to be “cleaned” before it is used by the algorithm on a new instance of the search.

Propagation Queues

When searching for a combinatorial object that has many constraints, you can learn a lot about the open variables based on the variables that have been set by a branch. I call this process constraint propagation. The goal is to modify the variable domains and values in order to gain some form of local consistency. The idea is to reduce the domains of the open variables to values that have “any hope” of extending to a full solution. Constraint propagation is worth a few blog posts of its own!

One simple way to achieve local consistency is to iterate over all constraints, then over all variables in each constraint, and then to determine which values of the domain can extend to a complete assignment of the variables in the constraint. In an integer linear program, this means that we iterate over all rows of the constraint matrix, over each variable with a nonzero coefficient in that row, and over all values in that variable’s domain, checking whether that constraint can be satisfied by some in-domain assignment to the other variables.

However, this is very inefficient, especially in a very sparse constraint system. Instead, we can think about the variable-constraint bipartite graph: Let $G$ be the graph with bipartition $X \cup C$ where $X$ is the set of variables and $C$ is the set of constraints. An edge $xc$ exists if and only if the variable $x$ is in the support of the constraint $c$.

What we can do is this: When a variable value or domain is updated, add all adjacent constraints to a list! We then verify local consistency on each constraint. This may lead to more updates to variable values and domains, which will add more constraints to the list. If we treat this list of constraints as a queue, then we iterate in a “Breadth-First” manner through the variable-constraint graph. Be aware, however, that a constraint can be processed more than once if its variables are updated after that constraint is verified.
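The queue loop above can be sketched generically. In this hypothetical version, `constraints_of[x]` lists the constraints adjacent to variable $x$ in the bipartite graph, and `revise` stands in for whatever local-consistency check you use on one constraint, returning the variables whose values or domains it changed:

```cpp
#include <functional>
#include <queue>
#include <vector>

// Hypothetical sketch of a propagation queue. The in_queue flags prevent
// enqueueing a constraint twice before it is processed, but a constraint
// may still be processed more than once overall, as noted above.
void propagate(
    const std::vector<std::vector<int>>& constraints_of,
    int num_constraints,
    const std::function<std::vector<int>(int)>& revise,
    const std::vector<int>& initially_changed_vars)
{
    std::queue<int> work;
    std::vector<bool> in_queue(num_constraints, false);

    // add all constraints adjacent to a changed variable
    auto enqueue_neighbors = [&](int var)
    {
        for ( int c : constraints_of[var] )
        {
            if ( !in_queue[c] )
            {
                in_queue[c] = true;
                work.push(c);
            }
        }
    };

    for ( int v : initially_changed_vars )
    {
        enqueue_neighbors(v);
    }

    while ( !work.empty() )
    {
        int c = work.front();
        work.pop();
        in_queue[c] = false;

        // verifying consistency may change more variables,
        // which re-enqueues their adjacent constraints
        for ( int v : revise(c) )
        {
            enqueue_neighbors(v);
        }
    }
}
```

For example, with equality constraints chained along a path of variables, a single initial change will propagate down the chain one constraint at a time, breadth-first.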

This can greatly reduce the amount of time that is used during constraint propagation. Further, it helps you really determine exactly what kind of local consistency you are looking for. I usually have some basic form of arc consistency in mind, but more advanced kinds are available!

Others?

If you have other best-practices (Combinatorial Optimizers, I’m looking at YOU!), then please make a brief note in the comments!