Week 1 focuses on deterministic optimisation techniques, while week 2 centres on stochastic modelling.
In week 1 we start with the basic technique of linear programming (LP). The emphasis will be on formulating practical problems in the LP framework and solving them with software such as Gurobi. We will also demonstrate the mathematical principles behind the solution method. Many interesting and practical LP problems involve integer variables, which leads to the topics of integer linear programming (ILP) and combinatorial optimisation. We will discuss why these are more difficult than LP and highlight some solution methods. Next, we will consider optimisation problems in network models, which are relevant for logistics, distribution, and supply chain systems. The most famous examples are the shortest-path problem, which is easy to solve with Dijkstra's algorithm, and the travelling salesman problem, which is hard to solve. These problems can be solved as special cases of ILP, but in most cases have dedicated algorithms of their own. Week 1 concludes by generalising to dynamic optimisation. In practice, once you have made a decision (or taken an action), the system evolves to a new state that poses a new optimisation problem requiring an action, and so on. We will show how to obtain an overall optimal solution by formulating and solving the appropriate single-decision problems.
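To give a flavour of what such a model looks like in software, the sketch below formulates a small two-product production LP in Gurobi's Python interface (gurobipy). The products, profit margins, and capacities are invented for illustration only and are not part of the course material.

```python
# Minimal sketch of an LP in Gurobi's Python interface (gurobipy).
# All data below (profits, capacities) are invented for illustration.
import gurobipy as gp
from gurobipy import GRB

m = gp.Model("production")

# Decision variables: units of two hypothetical products to produce.
x = m.addVar(lb=0, name="x")
y = m.addVar(lb=0, name="y")

# Objective: maximise profit, with assumed margins 3 and 5 per unit.
m.setObjective(3 * x + 5 * y, GRB.MAXIMIZE)

# Constraints: assumed machine-hour and labour-hour capacities.
m.addConstr(2 * x + 4 * y <= 40, name="machine_hours")
m.addConstr(3 * x + 1 * y <= 30, name="labour_hours")

m.optimize()

if m.Status == GRB.OPTIMAL:
    print(f"x = {x.X:.2f}, y = {y.X:.2f}, profit = {m.ObjVal:.2f}")
```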
During week 1, students will work on an optimisation project that may involve an (I)LP or a network model. The necessary input data are obtained from various sources, after which the model is constructed and validated. The output is generated by solving the model with Gurobi.
Week 2 starts off by discussing Markov chains. These are stochastic processes with properties that make them well suited to modelling operational systems; Google's PageRank, for instance, is based on Markov chains. Moreover, the long-run behaviour of a system is relatively easy to analyse once it is modelled as a Markov chain. Another approach to modelling and analysing operational systems is to run simulation experiments. Stochastic simulation (also called Monte Carlo simulation) is the second topic of this week. We will discuss the basic principles of generating random numbers and random variables, give ample examples, and elaborate on the statistical analysis of simulation output. Next, we will discuss queueing systems, a typical Operations Research topic. Queueing and waiting are phenomena that one encounters in, for example, service centres, ticket lines, hospitals, communication systems, and many other settings. In this lecture, we will discuss the mathematical theory that describes these phenomena in a conceptual model, which enables performance analysis and numerical computation. The purpose is to find system designs that reduce waiting times or increase utilisation. The last topic is reinforcement learning, a computational approach for determining optimal decisions in a dynamic system by learning from previous decisions (exploiting) and experimenting with new decisions (exploring). The exploiting stage uses techniques from dynamic programming, whereas the exploring stage uses Monte Carlo simulation. Typically, reinforcement learning is applied to reach a certain goal. This lecture will discuss the basic principles of the method and present successful examples and applications.
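As a small illustration of the first topic, the sketch below computes the stationary (long-run) distribution of a three-state Markov chain with NumPy; the transition matrix is invented for illustration only.

```python
# Minimal sketch: long-run (stationary) distribution of a Markov chain.
# The 3-state transition matrix below is an invented example.
import numpy as np

P = np.array([
    [0.7, 0.2, 0.1],
    [0.3, 0.4, 0.3],
    [0.2, 0.3, 0.5],
])

# Solve pi P = pi together with sum(pi) = 1 as a linear system.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("stationary distribution:", np.round(pi, 4))
```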
During week 2, students will work on a simulation project on queueing optimisation or reinforcement learning. The necessary input data are obtained from various sources, after which the model is constructed and validated. Finally, the model is implemented and solved in Python (or another programming language).
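For a flavour of such a project, the sketch below estimates the mean waiting time in a simple M/M/1 queue by Monte Carlo simulation; the arrival and service rates are assumed example values, and the estimate can be checked against the analytical result lam / (mu * (mu - lam)).

```python
# Minimal sketch of a Monte Carlo simulation of an M/M/1 queue (FIFO, one server).
# Arrival rate lam and service rate mu are assumed example values.
import random

def simulate_mm1(lam=0.8, mu=1.0, n_customers=100_000, seed=42):
    rng = random.Random(seed)
    t_arrival = 0.0       # arrival time of the current customer
    t_depart_prev = 0.0   # departure time of the previous customer
    total_wait = 0.0
    for _ in range(n_customers):
        t_arrival += rng.expovariate(lam)          # next arrival
        service = rng.expovariate(mu)              # service duration
        start = max(t_arrival, t_depart_prev)      # service starts when the server is free
        total_wait += start - t_arrival            # waiting time in the queue
        t_depart_prev = start + service
    return total_wait / n_customers

print("estimated mean waiting time:", round(simulate_mm1(), 3))
# Analytical value for lam = 0.8, mu = 1.0: 0.8 / (1.0 * 0.2) = 4.0
```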