A Lagrangian Propagator for Artificial Neural Networks in Constraint Programming
NOTE: the content of this repository amounts to ~560MB, largely because the code is shipped in a Vagrant virtual machine.
This repository contains resources related to the paper "A Lagrangian Propagator for Artificial Neural Networks in Constraint Programming" [1], which presents a new propagator for a global constraint that encodes Artificial Neural Networks (ANNs) in Constraint Programming (CP).
The first method for encoding ANNs in CP was proposed in [2]. The method was later generalized into the Empirical Model Learning (EML) approach (see [3] and [4]), based on the key idea of embedding a Machine Learning model into a more complex and comprehensive combinatorial optimization model. This technique allows one to approximately represent the behavior of a system that is impervious to conventional modeling efforts.
The idea is related to the so-called surrogate models employed in Derivative-Free (or black-box) Optimization. However, such approaches are typically designed to tackle problems without a complex combinatorial structure. In EML, conversely, the emphasis is on (i) integrating the Machine Learning model with a more complex, traditional optimization model, and (ii) exploiting the structure of the Machine Learning model as much as possible to boost the search process.
For example, the original approach from [2] proposed a simple algorithm to enforce Bound Consistency on global constraints that encode single artificial neurons. In [5] we strengthened that approach by targeting two-layer ANNs rather than single neurons, and by using a Lagrangian relaxation to compute bounds on the network output.
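To make the Lagrangian bound idea concrete, here is a small, self-contained Python sketch. It is not the code from the paper or the VM: the network weights, the subgradient step rule, and all numeric values are illustrative assumptions. The sketch relaxes the constraints a_j = w_j·x + b_j of a tiny two-layer tanh network with multipliers λ, so the relaxed maximization decomposes into a linear part over the input box and independent one-dimensional problems over each activation; any λ then yields a valid upper bound on the network output, and a few subgradient steps tighten it.

```python
import math

# Tiny two-layer network: y = sum_j v[j] * tanh(w[j].x + b[j]).
# All numbers below are illustrative, not taken from the paper.
w = [[1.0, -2.0], [0.5, 1.5]]        # hidden-layer weights
b = [0.2, -0.1]                      # hidden-layer biases
v = [1.0, -1.5]                      # output-layer weights
xlo, xhi = [-1.0, -1.0], [1.0, 1.0]  # input box domain

def act_bounds(wj, bj):
    """Interval propagation: range of a_j = w_j.x + b_j over the input box."""
    lo = sum(min(c * l, c * u) for c, l, u in zip(wj, xlo, xhi)) + bj
    hi = sum(max(c * l, c * u) for c, l, u in zip(wj, xlo, xhi)) + bj
    return lo, hi

abounds = [act_bounds(wj, bj) for wj, bj in zip(w, b)]

def inner_max(vj, lam, lo, hi):
    """Exact max of vj*tanh(a) - lam*a over [lo, hi] (endpoints + stationary points)."""
    cands = [lo, hi]
    if vj != 0 and 0.0 < lam / vj <= 1.0:  # stationarity: vj*(1 - tanh(a)^2) = lam
        t = math.sqrt(1.0 - lam / vj)
        if t < 1.0:
            a = math.atanh(t)
            cands += [min(max(s, lo), hi) for s in (a, -a)]
    best = max(cands, key=lambda a: vj * math.tanh(a) - lam * a)
    return vj * math.tanh(best) - lam * best, best

def dual_bound(lam):
    """Lagrangian dual value at lam: a valid upper bound on the network output."""
    val, xstar, astar = 0.0, [], []
    for i in range(len(xlo)):  # linear part in x: maximized at a box corner
        c = sum(lam[j] * w[j][i] for j in range(len(w)))
        xi = xhi[i] if c >= 0 else xlo[i]
        val += c * xi
        xstar.append(xi)
    for j in range(len(w)):    # decoupled one-dimensional terms in a_j
        tv, ta = inner_max(v[j], lam[j], *abounds[j])
        val += lam[j] * b[j] + tv
        astar.append(ta)
    return val, xstar, astar

# A few subgradient steps on the multipliers tighten the bound.
lam, best = [0.0, 0.0], float("inf")
for k in range(200):
    q, xs, As = dual_bound(lam)
    best = min(best, q)
    g = [sum(w[j][i] * xs[i] for i in range(len(xs))) + b[j] - As[j]
         for j in range(len(w))]              # subgradient of the dual
    lam = [lj - 0.5 / (k + 1) * gj for lj, gj in zip(lam, g)]
```

The decomposition is what makes the propagator cheap: the x-part is solved by picking a box corner per coordinate, and each activation term is a one-dimensional maximization with closed-form stationary points for tanh.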
In [1], the paper associated with this repository, we extended the approach from [5] by introducing a technique to perform reduced-cost based filtering on the network input, and by experimenting with different overhead reduction techniques. The paper also contains a larger experimental evaluation.
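The generic mechanism behind reduced-cost filtering can be sketched in a few lines. In the Lagrangian relaxation, each input x_i appears linearly with some coefficient c_i, so fixing x_i to a value t degrades the relaxation bound q at a known linear rate from its maximizing endpoint; whenever the degraded bound drops below the required lower bound on the output, those values of x_i can be pruned. The function below is a self-contained illustration of this rule on one input domain, with hypothetical numbers, and is not the propagator's actual code.

```python
def filter_input_domain(lo, hi, c, q, ylb):
    """Reduced-cost pruning of one input domain [lo, hi].

    In the relaxation, the bound with x_i fixed to t is
    bound(t) = q - c * (xstar - t), where xstar is the endpoint of
    [lo, hi] that maximizes c * x_i. Any t with bound(t) < ylb is pruned.
    Returns the filtered domain, or None on failure.
    """
    slack = q - ylb          # how much the bound may degrade
    if slack < 0:            # bound already below the requirement: fail
        return None
    if c > 0:                # maximizer is hi: raise the lower bound
        lo = max(lo, hi - slack / c)
    elif c < 0:              # maximizer is lo: lower the upper bound
        hi = min(hi, lo + slack / (-c))
    return (lo, hi) if lo <= hi else None

# Hypothetical numbers: domain [-1, 1], reduced cost 2.0,
# relaxation bound 3.0, required output value at least 2.2.
new_dom = filter_input_domain(-1.0, 1.0, 2.0, q=3.0, ylb=2.2)
```

With these numbers the slack is 0.8 and the bound degrades at rate 2 per unit, so only values within 0.4 of the maximizing endpoint survive: the domain shrinks to roughly [0.6, 1.0].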
Content of the Repository
This repository contains:
- The code for the Lagrangian propagator, in the `code-vm` folder (see the related description)
- The Artificial Neural Networks considered in our experimentation, in the `anns` folder (see the related description)
- The instances considered in our experimentation, in the `instances` folder (see the related description)
- All our experimental results, in the `results` folder (see the related description)
References

[1] Michele Lombardi, Stefano Gualandi: "A Lagrangian Propagator for Artificial Neural Networks in Constraint Programming". Constraints (2015): 1-28
[2] Andrea Bartolini, Michele Lombardi, Michela Milano, Luca Benini: "Neuron Constraints to Model Complex Real-World Problems". CP 2011: 115-129
[3] Andrea Bartolini, Michele Lombardi, Michela Milano, Luca Benini: "Optimization and Controlled Systems: A Case Study on Thermal Aware Workload Dispatching". AAAI 2012
[4] Alessio Bonfietti, Michele Lombardi, Michela Milano: "Embedding Decision Trees and Random Forests in Constraint Programming". CPAIOR 2015: 74-90
[5] Michele Lombardi, Stefano Gualandi: "A New Propagator for Two-Layer Neural Networks in Empirical Model Learning". CP 2013: 448-463
[6] Stefano Gualandi, Federico Malucelli: "Resource Constrained Shortest Paths with a Super Additive Objective Function". CP 2012: 299-315