# OscaR / lns

## Large Neighborhood Search (LNS)

Large Neighborhood Search is a technique for solving difficult optimization problems with Constraint Programming. The goal of LNS is to avoid being stuck in one region of the search tree for too long by restarting frequently to explore other regions. In a huge search tree, the early decisions may never be reconsidered, so the search spends all its time in a single region of the search space.

With LNS, when you realize you have been stuck for too long, you restart from the current best solution and explore another region of the search space to try to improve it.

The principle is the following: keep a current best solution to your problem and repeat the following two steps (relax + restart) until a stopping criterion is met.

- **relax**: relax the current best solution (typically by fixing only a fraction of the decision variables to their value in the current best solution).
- **restart**: try to improve the current best solution with CP, replacing it if a better solution is found. This restart has a limited number of failures.
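The relax + restart loop can be sketched in plain Scala. The helper names below (`findInitialSolution`, `relax`, `cpSearch`, `stopCriterionMet`) are hypothetical placeholders for problem-specific code, not the OscaR API:

```
// Generic relax-and-restart skeleton; every helper below is hypothetical.
var best = findInitialSolution()              // current best solution
while (!stopCriterionMet()) {
  val partial = relax(best, fraction = 0.5)   // fix ~50% of the variables to their value in best
  val improved = cpSearch(partial, failureLimit = 60) // bounded CP search from the partial solution
  if (improved.cost < best.cost) best = improved      // keep the better solution
}
```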

The next section illustrates how to do that on a quadratic assignment problem (QAP):

> There is a set of *n* facilities and a set of *n* locations. For each pair of locations, a distance is specified, and for each pair of facilities, a weight or flow is specified (e.g., the amount of supplies transported between the two facilities). The problem is to assign all facilities to different locations with the goal of minimizing the sum of the distances multiplied by the corresponding flows.
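To make the objective concrete, here is a tiny two-facility instance computed in plain Scala (toy data, independent of OscaR):

```
// Toy QAP instance: 2 facilities, 2 locations
val w = Array(Array(0, 3), Array(3, 0)) // flow between facilities
val d = Array(Array(0, 7), Array(7, 0)) // distance between locations
val x = Array(1, 0)                     // facility i is placed at location x(i)
val cost = (for (i <- 0 to 1; j <- 0 to 1) yield w(i)(j) * d(x(i))(x(j))).sum
// cost = 3*7 + 3*7 = 42
```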

```
val cp = CPSolver()
val n = 10                                   // problem size (instance data)
val N = 0 until n
val w: Array[Array[Int]] = ???               // weight (flow) matrix, read from the instance
val d: Array[Array[Int]] = ???               // distance matrix, read from the instance
val x = N map (v => CPVarInt(cp, 0 until n)) // x(i) = location of facility i
val D = Array.tabulate(n, n)((i, j) => element(d, x(i), x(j))) // matrix of variables representing the distances
cp.minimize(sum(N, N)((i, j) => D(i)(j) * w(i)(j))) subjectTo {
  cp.add(alldifferent(x), Strong)
} exploration {
  cp.binaryFirstFail(x)
} run()
```

The first step to transform this model into an LNS model is to keep track of the current best solution in a **bestSol** array:

```
val cp = CPSolver()
val n = 10                                   // problem size (instance data)
val N = 0 until n
val w: Array[Array[Int]] = ???               // weight (flow) matrix
val d: Array[Array[Int]] = ???               // distance matrix
val x = N map (v => CPVarInt(cp, 0 until n))
val D = Array.tabulate(n, n)((i, j) => element(d, x(i), x(j))) // matrix of variables representing the distances
val bestSol = Array.fill(n)(0)               // value of each variable in the best solution found so far
cp.minimize(sum(N, N)((i, j) => D(i)(j) * w(i)(j))) subjectTo {
  cp.add(alldifferent(x), Strong)
} exploration {
  cp.binaryFirstFail(x)
  (0 until n).foreach(i => bestSol(i) = x(i).getValue()) // record each improving solution
} run()
```

Our LNS relaxation procedure randomly fixes about 50% of the variables to their value in the current best solution (where `rand` is a `scala.util.Random`):

```
cp.post((0 until n).filter(i => rand.nextInt(100) < 50).map(i => x(i) == bestSol(i)))
```

We can now implement the 300 restarts, each with a limit of at most 60 failures. We do that in a simple for loop, using the method `runSubjectTo` to start a new run. Note that constraints added in the `runSubjectTo` block are reverted at each new run.

```
val cp = CPSolver()
val rand = new scala.util.Random(0)          // random number generator for the relaxation
val n = 10                                   // problem size (instance data)
val N = 0 until n
val w: Array[Array[Int]] = ???               // weight (flow) matrix
val d: Array[Array[Int]] = ???               // distance matrix
val x = N map (v => CPVarInt(cp, 0 until n))
val D = Array.tabulate(n, n)((i, j) => element(d, x(i), x(j))) // matrix of variables representing the distances
val bestSol = Array.fill(n)(0)
cp.minimize(sum(N, N)((i, j) => D(i)(j) * w(i)(j))) subjectTo {
  cp.add(alldifferent(x), Strong)
} exploration {
  cp.binaryFirstFail(x)
  (0 until n).foreach(i => bestSol(i) = x(i).getValue())
} run(1) // first make a run to find the initial solution
for (r <- 1 to 300) {
  // new run: no limit on the number of solutions, at most 60 failures
  cp.runSubjectTo(Int.MaxValue, 60) {
    // randomly fix about 50% of the variables to their value in the best solution
    cp.post(N.filter(i => rand.nextInt(100) < 50).map(i => x(i) == bestSol(i)))
  }
}
```

In the output produced by this program you will see either "!" or "R" before each restart:

- **!** means that the run stopped because the limit on the number of failures was reached.
- **R** means that the run stopped because the search tree was exhausted before the failure limit was reached.

If you see too many **R**, it probably means you should relax more variables. If you see too many **!**, you should probably relax fewer variables, because the search tree is too large with respect to the failure limit.
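This tuning can even be automated. The sketch below (hypothetical, not part of OscaR) adjusts the percentage of fixed variables after each restart depending on its outcome:

```
// Hypothetical adaptive tuning of the relaxation size
var fixPct = 50 // percentage of variables fixed to their value in the best solution
def updateRelaxation(treeExhausted: Boolean): Unit = {
  if (treeExhausted) fixPct = (fixPct - 5) max 5  // too many "R": fix fewer variables (relax more)
  else               fixPct = (fixPct + 5) min 95 // too many "!": fix more variables (relax less)
}
```

The updated `fixPct` would then replace the hard-coded `50` in the `rand.nextInt(100) < 50` test of the relaxation.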
