Operations Analytics seminars

The seminar of the Department of Operations Analytics of the Vrije Universiteit Amsterdam currently takes place in a hybrid format: talks are given on campus and livestreamed via Zoom. The seminar is held once a month at 16.00 (CET). The campus room and Zoom link can be found in the announcements that we send around.

If you are interested in attending the seminar or giving a talk, please send an email to t.oosterwijk@vu.nl.

Upcoming Seminars

  • December 5 at 16.00

Damla Yuksel: Q-learning Guided Algorithms for Bi-Criteria Minimization of Total Flow Time and Makespan in No-Wait Permutation Flowshops

Combining deep reinforcement learning and meta-heuristic techniques represents a new research direction for enhancing the search capabilities of meta-heuristic methods in the context of production scheduling. Q-learning is a prominent reinforcement learning technique that is used here to guide the selection of actions, avoiding the need for purely random exploration in the iterative process of the metaheuristics. In this study, we provide Q-learning-guided algorithms for the Bi-Criteria No-Wait Flowshop Scheduling Problem (NWFSP). The problem is treated as a bi-criteria combinatorial optimization problem in which total flow time and makespan are optimized simultaneously. First, a deterministic mixed-integer linear programming model is provided. Then, two Q-learning-guided algorithms are developed: the Bi-Criteria Iterated Greedy Algorithm with Q-learning and the Bi-Criteria Block Insertion Heuristic Algorithm with Q-learning. The performance of the proposed algorithms is compared against a collection of heuristics from the literature. The computational experiments, performed on the 480 problem instances of the VRF benchmark set, indicate that the proposed Q-learning-guided algorithms yield more non-dominated bi-criteria solutions than the remaining algorithms, while being competitive with each other on the benchmark problems. The outcomes of this study encourage further investigation of (i) the effectiveness of integrating Q-learning into metaheuristics for flowshop scheduling problems, and (ii) other bi-criteria NWFSPs that reveal the trade-offs between conflicting objectives, such as makespan and the number of tardy jobs, arising in various industries.

DOI: https://doi.org/10.1016/j.swevo.2024.101617
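
As a rough illustration of the general idea of letting Q-learning steer a metaheuristic, the sketch below lets a single-state Q-learner decide which perturbation operator to apply next instead of choosing uniformly at random. The operators, reward and parameters are placeholders, not the algorithms evaluated in the paper.

    import random

    # Two hypothetical perturbation operators on a permutation (job sequence).
    def swap_move(perm):
        p = perm[:]
        i, j = random.sample(range(len(p)), 2)
        p[i], p[j] = p[j], p[i]
        return p

    def insertion_move(perm):
        p = perm[:]
        i, j = random.sample(range(len(p)), 2)
        p.insert(j, p.pop(i))
        return p

    OPERATORS = [swap_move, insertion_move]

    def q_guided_search(perm, objective, iters=1000, alpha=0.1, gamma=0.9, eps=0.1):
        """Single-state Q-learning that learns which operator to apply next."""
        q = [0.0] * len(OPERATORS)                  # one Q-value per operator (action)
        best, best_val = perm, objective(perm)
        for _ in range(iters):
            # epsilon-greedy action selection replaces purely random operator choice
            if random.random() < eps:
                a = random.randrange(len(OPERATORS))
            else:
                a = max(range(len(OPERATORS)), key=lambda k: q[k])
            cand = OPERATORS[a](best)
            cand_val = objective(cand)
            reward = best_val - cand_val            # positive if the move improved the schedule
            q[a] += alpha * (reward + gamma * max(q) - q[a])
            if cand_val < best_val:
                best, best_val = cand, cand_val
        return best, best_val

Here objective stands for a scalarization or for one of the two criteria; the bi-criteria algorithms in the paper instead maintain a set of non-dominated solutions.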

  • December 12 at 15.00 (mind the date and time!)

Maaike Vollebergh: Bankruptcy Problems Over Time

A bankruptcy problem occurs when several agents each claim a portion of an estate that is insufficient to satisfy all the claims, raising the question of how to divide the estate equitably among the agents.

In the traditional bankruptcy problem, there is a single decision made at a single point in time. Looking at the applications of bankruptcy, e.g. water sharing or vaccine allocation, one can see that these are actually problems where allocations have to be made over time. In this presentation, we introduce the concept of bankruptcy over time in the Multi-period Bankruptcy Problem. We also present three different perspectives on bankruptcy over time.

For each of these perspectives, we present mechanisms to generalize existing bankruptcy rules to the multi-period setting. Moreover, we formulate properties of these multi-period bankruptcy rules, and prove that these properties are inherited from the traditional bankruptcy rules. Finally, we present efficient algorithms to apply the bankruptcy rules using recursive methods.

By introducing the Multi-period Bankruptcy Problem and by presenting and computing multi-period bankruptcy rules, we aim to extend the usefulness of bankruptcy rules in practice and to open avenues for further research on multi-period bankruptcy problems.
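
For readers new to the topic, the sketch below shows two textbook single-period bankruptcy rules (the proportional rule and constrained equal awards). These are the kind of traditional rules that the multi-period rules in the talk generalize, not the multi-period mechanisms themselves.

    def proportional_rule(estate, claims):
        """Each agent receives a share of the estate proportional to their claim."""
        total = sum(claims)
        return [estate * c / total for c in claims]

    def constrained_equal_awards(estate, claims, tol=1e-9):
        """Every agent gets min(claim, lam), with lam chosen so the awards sum to the estate."""
        lo, hi = 0.0, max(claims)
        while hi - lo > tol:                        # bisection on the common award level lam
            lam = (lo + hi) / 2
            if sum(min(c, lam) for c in claims) < estate:
                lo = lam
            else:
                hi = lam
        return [min(c, lo) for c in claims]

    # Example: an estate of 100 with claims of 60, 50 and 30.
    print(proportional_rule(100, [60, 50, 30]))         # ~[42.9, 35.7, 21.4]
    print(constrained_equal_awards(100, [60, 50, 30]))  # ~[35, 35, 30]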

  • January 9

Jannis Kurtz: Counterfactual Explanations for Linear Optimization

In recent years, there has been a rising demand for transparent and explainable machine learning (ML) models. A large stream of work focuses on algorithmic methods to derive so-called counterfactual explanations (CEs). Although significant progress has been made in generating CEs for ML models, this topic has received minimal attention in the Operations Research (OR) community. However, algorithmic decisions in OR are made by complex algorithms which cannot be considered explainable or transparent. In this work we argue that there exist many OR applications where counterfactual explanations are needed and useful. We translate the concept of CEs into the world of linear optimization problems and define three different classes of CEs: strong, weak and relative counterfactual explanations. For all three types we derive problem formulations and analyze their structure. We show that the weak and strong CE formulations have some undesirable properties, while relative CEs can be derived by solving a tractable convex optimization problem. We test all concepts on a real-world diet problem and show that relative CEs can be calculated efficiently on NETLIB instances.
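
To give a flavor of a counterfactual question for a linear program, consider a toy version under simplifying assumptions of our own (it is not the strong, weak or relative definition from the talk): given a cost vector c with current optimal plan x*, how little must c change so that a preferred feasible plan x' becomes at least as cheap? With an L1 norm on the change, this is itself a small linear program.

    import numpy as np
    from scipy.optimize import linprog

    c = np.array([2.0, 3.0])        # current cost vector (hypothetical diet-style data)
    x_star = np.array([4.0, 1.0])   # current optimal plan
    x_pref = np.array([2.0, 3.0])   # plan the decision-maker would prefer

    # Minimize ||d||_1 subject to (c + d) @ (x_pref - x_star) <= 0,
    # i.e. the perturbed costs make x_pref no more expensive than x_star.
    n = len(c)
    diff = x_pref - x_star
    obj = np.concatenate([np.zeros(n), np.ones(n)])          # variables: [d, t], minimize sum(t)
    A_ub = np.vstack([
        np.hstack([np.eye(n), -np.eye(n)]),                   #  d - t <= 0
        np.hstack([-np.eye(n), -np.eye(n)]),                  # -d - t <= 0
        np.hstack([diff, np.zeros(n)]),                       #  d @ diff <= -c @ diff
    ])
    b_ub = np.concatenate([np.zeros(2 * n), [-c @ diff]])
    res = linprog(obj, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * (2 * n))
    print("cost perturbation:", res.x[:n])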

  • February 6

Ahmadreza Marandi: Robust Spare Parts Inventory Management

We focus on spare parts inventory control under demand uncertainty, particularly during the New Product Introduction (NPI) phase, when historical data is limited. Most conventional spare parts inventory control models assume that demand follows a Poisson process with a known rate. However, the rate may not be known when limited data is available. We propose an adaptive robust optimization (ARO) approach to multi-item spare parts inventory control. We show how the ARO problem can be reformulated as a tractable deterministic integer programming problem. We develop an efficient algorithm to obtain near-optimal solutions for thousands of items. We demonstrate the practical value of our model through a case study at ASML, a leading semiconductor equipment supplier. The case study reveals that our model consistently achieves higher service levels at lower costs than the conventional stochastic optimization approach employed at ASML.

Past Seminars

  • 2024

    • November 6

    Marieke van Keeken: The effects of product lifetime extension on short- and long-term supply chain circularity: A case study of the European aluminum automotive supply chain

    This paper models, quantifies, and analyzes the environmental impacts of Product Lifetime Extension (PLE) on circular materials in the short and long term. The European Aluminum Rolled Products Automotive Supply Chain (ARPASC) is used as a case study. We present a system dynamics model for the European ARPASC that addresses both short-term and long-term goals. The computational results show that PLE reduces the demand for products and primary and secondary resources in the short and long term. A 25% PLE starting in 2025 is found to reduce the global warming potential of the European ARPASC by 16.7% in 2050. The results of different scenarios show that the degree of PLE and the timing of its implementation should be chosen carefully to ensure that the long-term objectives of the Paris Agreement are achieved without compromising short-term goals. PLE proves to be a sustainable product development strategy for realizing the European Green Deal.

    DOI: https://doi.org/10.1016/j.resconrec.2024.107836

    • October 3

    Francesco Giliberto: Stochastic Programming for Dynamic Temperature Control of Refrigerated Road Transport

    Maintaining temperatures within specified ranges is essential for many products to prevent quality degradation and increase shelf life. In this paper, we propose an optimal cooling policy for refrigerated trucks that deliver temperature-sensitive products to customers on a predefined route. Our policy is the first that explicitly accounts for the most important uncertainties such as the duration of door opening times while loading and unloading as well as the initial temperatures of the loaded products. Furthermore, our model features a careful modeling of thermodynamics within the truck, including the heat transfer between the air, the products, the outside environment, and the cooling unit. The resulting problem is cast as a large multi-stage stochastic programming problem that we solve using stochastic dual dynamic programming. In cooperation with industry partners, we set up an extensive battery of test cases in order to examine the quality of our solution. To that end, we benchmark our stochastic policy against a rolling lookahead policy and a myopic practitioner's benchmark out of sample. The results show that our policy clearly outperforms the benchmarks on all routes and in all truck configurations by a large margin, employing a cooling regimen that hedges against warming of the products due to unexpectedly long door opening times. This leads to significantly fewer violations of temperature bounds. In a separate analysis, we show that our policy enables energy savings of up to 25% in cooling unit operation, helping to make energy-intensive cold chains more sustainable.

    • September 12

    Michael Kahr: Layout and location planning for automatic locker box systems under stochastic demand

    The last pandemic raised many new challenges for humanity. For instance, governments imposed regulations such as lockdowns, resulting in supply chain shocks at different tiers. Additionally, delivery services reached their capacity limits because the demand for mail orders soared temporarily during the lockdowns. We argue that one option to support supply chain viability at the last-mile delivery tier is to use (outdoor) parcel lockers through which customers can collect their orders. The location planning of such lockers is known to be of utmost importance for their success. Another important topic to address is that the design of the compartment structure of the parcel lockers should meet the (uncertain) customer demand for different commodities. Both of the latter planning issues are combined into one optimization problem. The objective is to maximize a linear function (e.g., expected profits) of the covered demand, given a budget an operator is willing to invest. An integer linear programming formulation is proposed, and a reformulation based on Benders decomposition is derived. The developed algorithms enable the solution of large-scale problem instances. The impact of different problem parameters on the obtained solutions is discussed, and a case study based on real-world data from Austria is presented. The results further indicate that small-sized and medium-sized compartments should be preferred over large and x-large ones in the parcel locker compartment design.

    The work is already published in:

    Michael Kahr, Determining locations and layouts for parcel lockers to support supply chain viability at the last mile, Omega, Volume 113, 102721, 2022.

    • June 6

    Tjark Vredeveld: Analyzing the (k-)swap neighborhood for makespan scheduling

    Analyzing the behavior of local search methods has received considerable attention over the last two decades. One interesting question is how the simplest form of local search, i.e., iterative improvement, behaves w.r.t. a certain neighborhood, both in terms of the quality of the solution and the number of iterations needed to obtain a locally optimal solution.

    In this talk, we consider the basic scheduling problem in which n jobs need to be scheduled on m identical machines so as to minimize the makespan, i.e., the completion time of the last job. Finn and Horowitz (1979) showed that the folklore jump neighborhood, which moves one job from its machine to another machine, yields a (2-2/(m+1))-approximation, and this was shown to be tight by Schuurman and Vredeveld (2001/2007). Brucker, Hurink and Werner (1996/1997) showed that the iterative improvement procedure needs O(n^2) iterations to obtain a jump-optimal solution, which was shown to be tight by Hurkens and Vredeveld (2003).

    Schuurman and Vredeveld (2007) left as an open question to bound the number of iterations to obtain a swap-optimal solution.

    In this talk, we consider the k-swap neighborhood, which is a generalization of the jump and the swap neighborhood. In the k-swap neighborhood, a neighboring solution is obtained by selecting at most k jobs scheduled on at most two machines and then interchanging the machine allocation of these jobs. For k=1 this is equivalent to the jump neighborhood and for k=2 it is equivalent to the swap neighborhood.

    We analyze the number of iterations needed to obtain a k-swap optimal solution by iterative improvement; thereby answering the question posed in Schuurman and Vredeveld.

    This is joint work with Lars Rohwedder and Ashkan Safari.
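
    A minimal sketch of iterative improvement in the jump neighborhood (k=1), just to make the terminology concrete; the starting assignment, the scan order and the instance below are placeholders, not the constructions used in the analysis.

        def jump_local_search(jobs, m):
            """Iterative improvement: move a single job to another machine whenever
            it would complete earlier there (load[target] + p < load[source])."""
            assign = [i % m for i in range(len(jobs))]      # arbitrary starting schedule
            load = [0.0] * m
            for i, p in enumerate(jobs):
                load[assign[i]] += p
            improved = True
            while improved:
                improved = False
                for i, p in enumerate(jobs):
                    src = assign[i]
                    tgt = min(range(m), key=lambda k: load[k])
                    if tgt != src and load[tgt] + p < load[src]:
                        load[src] -= p
                        load[tgt] += p
                        assign[i] = tgt
                        improved = True
            return assign, max(load)                        # makespan of the jump-optimal schedule

        # Example: six jobs on two identical machines.
        print(jump_local_search([7, 5, 4, 3, 3, 2], m=2))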

    • May 16

    Steven Miltenburg: Fixed Order Routing

    We consider a routing problem with multiple identical vehicles, where instead of being able to determine the order in which locations (requests) are visited, there is a global fixed order. The challenge is thus to decide which requests are visited by which vehicle, as the global order determines the order in which each vehicle has to visit its requests. We consider multiple objectives: minimizing the total traversed distance with a vehicle capacity c, minimizing the largest route length C_max with at most k vehicles, and minimizing the sum of completion times of the requests, Sum C_j, with at most k vehicles. Counter-intuitively, this fixed order can make the problem harder. We show that on a line, where the problem can be solved optimally without an order, it is strongly NP-hard with an order for the C_max and Sum C_j objectives. We also give a PTAS for the total distance objective on the line, although NP-hardness of this variant has not yet been proven.

    Furthermore, we give some results on general metric spaces, where TSP without an order or capacity is already APX-hard. For the fixed order total distance objective, we give an approximation algorithm and prove that the problem is both NP-hard and APX-hard, implying that our approximation bound is of the right order.

    • April 25

    Guanlian Xiao: Optimal Inspection Policy under Imperfect Predictions: Structural Analysis and Insights

    We consider a critical component that deteriorates according to a three-state discrete-time Markov chain with a self-announcing failed state and two unobservable operational states: good and defective. The component is periodically monitored by a defect-prediction model that generates binary signals, but the signals are imperfect. The problem is to decide how to use the signals to make the inspect-or-not decision with the objective of minimizing the expected total discounted cost. We build a partially observable Markov decision process to address the problem. We show that the optimal policy is of threshold type, and that the objective value at any given belief state is unimodal in the threshold value. By introducing a novel concept referred to as a chain-based threshold policy, we formalize specific properties of the belief space to explicitly link the optimal policy to a critical number of signals of different types coming from the prediction model. In this way, the optimal policy can be implemented in practice by simply counting the number of signals in a specific order.

    We further provide a sufficient condition that confirms the optimal policy belongs to the class of chain-based threshold policies, and propose an approximate algorithm for the case when the optimal policy is not chain-based.
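
    To make the notion of a threshold-type policy on the belief concrete, here is a toy sketch: a Bayesian update of the defect probability after a binary signal, followed by an inspect-if-above-threshold rule. The signal accuracies, the prior and the threshold are placeholders, and the deterioration transition between periods is ignored, so this is not the chain-based policy from the talk.

        def update_belief(b, signal, p_pos_if_defective=0.8, p_pos_if_good=0.3):
            """Posterior probability of the defective state after observing a binary signal."""
            like_def = p_pos_if_defective if signal else 1 - p_pos_if_defective
            like_good = p_pos_if_good if signal else 1 - p_pos_if_good
            return like_def * b / (like_def * b + like_good * (1 - b))

        def threshold_policy(b, threshold=0.6):
            """Inspect as soon as the belief of being defective exceeds the threshold."""
            return "inspect" if b > threshold else "continue"

        # Example: start from a prior of 0.2 and observe three positive signals in a row.
        b = 0.2
        for s in [1, 1, 1]:
            b = update_belief(b, s)
            print(round(b, 3), threshold_policy(b))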

    • April 11 at 17.10 (mind the date and the time!)

    Buğra Çınar: Pricing and bundling decisions considering driver behavior in crowdsourced delivery

    Crowdsourced delivery utilizes the services of independent actors. As opposed to traditional modes of delivery, availability and acceptance decisions of crowdshippers are uncertain and cannot be fully controlled by an operator. We consider a setting in which an operator groups tasks into bundles and in which the resulting bundles are offered to crowdshippers in exchange for some compensation. Uncertainty in the behavior of crowdshippers, who may accept or reject offers, is considered via (individual) acceptance probabilities. For the latter, we consider generic probability functions whose main parameters are the compensation offered, the number of tasks in a bundle, and the total detour to deliver a bundle. The objective of the resulting optimization problem is to minimize the expected total cost of delivery. We propose a mixed-integer non-linear programming (MINLP) formulation that simultaneously decides (i) how to group tasks into bundles, (ii) which bundles are offered to which crowdshipper, and (iii) the compensation offered for each bundle. We show that this MINLP can be reformulated as a mixed-integer linear program with an exponential number of variables. We present a column generation algorithm for solving instances of the latter, whose pricing subproblem corresponds to an elementary shortest path problem with resource constraints (ESPPRC) with a nonlinear objective function. We develop a tailored algorithm for this variant of the ESPPRC that exploits theoretical bounds we derive on the routes of the crowdshippers. Preliminary computational experiments show that the algorithm is capable of solving large instances in a reasonable amount of time.

    • March 21

    Svetlana Borovkova: Climate Stress Testing for credit and equity portfolios

    Climate risk is the talk of the day in financial institutions, especially banks. The regulatory burden in climate risk is rapidly increasing, and from 2024, banks must incorporate climate risk drivers into their stress testing frameworks. However, there are no established methodologies yet, and the lack and poor quality of data present another severe obstacle. In this talk, I will address the topic of climate stress testing for banks based on several case studies: a Dutch mortgage portfolio as well as global corporate credit and equity portfolios. I will zoom in on some interesting models, for example those we developed for forecasting housing market transitions and for energy label imputation. This will not be your usual Operations Analytics talk, but more of a practitioner's overview of challenges and possible solutions for climate stress testing.

    • February 1

    Tim Oosterwijk: The Secretary Problem with Independent Sampling

    The secretary problem is probably the most well-studied optimal stopping problem with many applications in economics and management. In the secretary problem, a decision-maker faces an unknown sequence of values, revealed one after the other, and has to make irrevocable take-it-or-leave-it decisions. Her goal is to select the maximum value in the sequence. While in the classic secretary problem, the values of upcoming elements are entirely unknown, in many realistic situations, the decision-maker still has access to some information, for example, in the form of past data. In this talk, I will take a sampling approach to the problem and assume that before starting the sequence, each element is independently sampled with probability p. This leads to what we call the random order and adversarial order secretary problems with p-sampling. In the former, the sequence is presented in random order, while in the latter, the order is adversarial. Our main result is to obtain the best possible algorithms for both problems and all values of p. As p grows to 1, the obtained guarantees converge to the optimal guarantees in the full information case, that is, when the values are i.i.d. random variables from a known distribution. Notably, we establish that the best possible algorithm in the adversarial order setting is a simple fixed threshold algorithm. In the random order setting, we characterize the best possible algorithm by a sequence of thresholds, dictating at which point in time we should start accepting a value. Surprisingly, this sequence is independent of p.

    I will then complement our theoretical results with practical insights obtained from numerical experiments on real life data obtained from Goldstein et al. (2020), who conducted a large-scale behavioral experiment in which people repeatedly played the secretary problem. The results help explain some behavioral issues they raised and indicate that people play in line with a strategy similar to our optimal algorithms from the first game onwards, albeit slightly suboptimally.

    This is joint work with José Correa, Andrés Cristi, Laurent Feuilloley, and Alexandros Tsigonias-Dimitriadis.
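
    As a rough illustration of the p-sampling model, here is a simplified simulation; the rule "accept the first online value exceeding the best sampled value" is the natural fixed-threshold strategy, not necessarily the exact optimal thresholds derived in the paper.

        import random

        def sample_and_select(values, p, rng=random):
            """Each value independently lands in the sample with probability p; the rest
            arrive online and the first one beating the best sampled value is accepted."""
            sample, online = [], []
            for v in values:
                (sample if rng.random() < p else online).append(v)
            threshold = max(sample) if sample else float("-inf")
            for v in online:          # online values arrive in the given (possibly adversarial) order
                if v > threshold:
                    return v          # irrevocable accept
            return None               # no online value beat the sample

        # Estimate how often the overall maximum is selected for p = 0.5.
        vals = [3.0, 17.0, 8.0, 25.0, 1.0, 12.0]
        hits = sum(sample_and_select(vals, 0.5) == max(vals) for _ in range(10000))
        print(hits / 10000)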

    • January 11 (mind the date!)

    Ad Ridder: Exponential Dispersion Models for Count Data

    We describe a methodology for constructing probability distributions on the nonnegative integers (also known as counting distributions). Counting distributions have historic roots, as they have been studied since the beginning of probability theory. The reason is their statistical importance and applicability in almost all societal and scientific areas. The methodology is based on considering natural exponential families of probability distributions. These families are uniquely determined by their variance functions. Counting distributions can then be constructed from variance functions that satisfy certain regularity conditions. The counting distributions in this talk are constructed from polynomial variance functions with nonnegative coefficients. The usability of these new counting distributions is exhibited by various applications in data fitting and insurance risk modeling.

  • 2023

    • December 7

    Roozbeh Qorbanian: Valuation of Power Purchase Agreements for Corporate Renewable Energy Procurement

    Corporate renewable power purchase agreements (PPAs) are long-term agreements that enable companies to source renewable energy without having to build and operate their own renewable energy projects. Typically, producers and consumers agree on a fixed price at which power is purchased. The value of the PPA to the buyer then depends on the difference between the fixed price and the price at which the produced volume is expected to sell in the future. To model this so-called capture price, practitioners often use either fundamental models or statistical approaches, each with their inherent weaknesses. We propose a new approach that blends the logic of fundamental electricity market models with statistical learning techniques. In particular, we use regularized inverse optimization of a quadratic programming formulation of a bid stack model to estimate the marginal costs of different technologies as a parametric function of exogenous factors. We compare the out-of-sample performance using market data from three European countries and demonstrate that our approach outperforms established statistical learning benchmarks. We then discuss the case of a photovoltaic plant in Spain to illustrate how to use the model to value a PPA from the buyer's perspective.

    • November 2

    Wout Dullaert, Leen Stougie and David Wozabal: Internal seminar

    This seminar is only for the colleagues of the department of Operations Analytics.

    • October 5

    Sena Eruguz: Customer-to-Customer Returns Logistics: Can It Mitigate the Negative Impact of Online Returns?

    Customer returns are a major problem for online retailers due to high costs and CO2 emissions. This paper investigates a new concept for handling online returns: customer-to-customer (C2C) returns logistics. The idea behind the C2C concept is to deliver returned items directly to the next customer, bypassing the retailer's warehouse. To incentivize customers to purchase C2C return items, retailers can promote return items on their webshop with a discount. We build the mathematical models behind the C2C concept to determine how much discount to offer to ensure enough customers are induced to purchase C2C return items and to maximize the retailer's expected total profit. Our first model, the base model (BM), is a customer-based formulation of the problem and provides an easy-to-implement constant-discount-level policy. Our second model formulates the real-world problem as a Markov decision process (MDP). Since our MDP suffers from the curse of dimensionality, we resort to simulation optimization (SO) and reinforcement learning (RL) methods to obtain reasonably good solutions. We apply our methods to data collected from a Dutch fashion retailer. We also provide extensive numerical experiments to assess the generality of our findings. Our results indicate that the constant-discount-level policy obtained with the BM performs well in terms of expected profit compared to SO and RL. With the C2C concept, significant benefits can be achieved in terms of both expected profit and return rate. Even in cases where the cost-effectiveness of the C2C returns program is not pronounced, the proportion of customer-to-warehouse returns to total demand becomes lower. Therefore, the system can be considered more environmentally friendly. The C2C concept can help retailers financially address the problem of online returns and meet the growing need for corporate social responsibility that has emerged over the past decade.

    • Summer break
    • June 1

    Angelos Georghiou: Risk-Averse Regret Minimization in Multistage Stochastic Programs

    Within the context of optimization under uncertainty, a well-known alternative to minimizing expected value or the worst-case scenario consists in minimizing regret. In a multistage stochastic programming setting with a discrete probability distribution, we explore the idea of risk-averse regret minimization, where the benchmark policy can only benefit from foreseeing ∆ steps into the future. The ∆-regret model naturally interpolates between the popular ex ante and ex post regret models. We provide theoretical and numerical insights about this family of models under popular coherent risk measures and shed new light on the conservatism of the ∆-regret minimizing solutions.

    • May 25

    David Wozabal: Markovian Stochastic Dual Dynamic Programming

    Stochastic dual dynamic programming (SDDP) is traditionally used for problems with stage-wise independent randomness. While the assumption of stage-wise independence makes the implementation as well as the theoretical analysis easier, it restricts the set of admissible problems. Consequently, versions of SDDP that work with Markovian randomness have recently gained popularity. In this talk, we discuss applications, algorithms, and discretization strategies for Markovian SDDP, as well as its convergence properties.

    • April 19

    Claudia Archetti: Reinforcement Learning Approaches for the Orienteering Problem with Stochastic and Dynamic Release Dates 

    We study the Orienteering Problem with Stochastic and Dynamic Release Dates (DOP-rd), where a single, uncapacitated vehicle serves customer requests with stochastic and dynamic release dates, which represent the times at which parcels become available for distribution. The objective is to maximize the number of requests served within the deadline. We model the problem as a Markov decision process and present two approximation approaches for its solution, where the distribution strategy is learned from Monte Carlo simulation.

    • April 5

    Martijn van Ee: Exact and approximation algorithms for routing a convoy through a graph 

    In convoy routing problems, we assume that each edge in the graph has a length and a speed at which it can be traversed, and that our convoy has a given length. At any moment, due to safety requirements, the speed of the convoy is dictated by the slowest edge on which currently a part of the convoy is located. In this talk, I will discuss the complexity and approximability of some classical routing problems in the convoy setting.

    • March 2

    Kim van den Houten: Measure-valued derivatives in reinforcement learning algorithms

    Policy gradient methods are successful for a wide range of reinforcement learning tasks. Traditionally, such methods utilize the score function as a stochastic gradient estimator. We investigate the effect of replacing the score function with a measure-valued derivative within an on-policy actor-critic algorithm. The hypothesis is that measure-valued derivatives reduce the need for the score-function variance reduction techniques that are common in policy gradient algorithms. The empirical results of this study suggest that measure-valued derivatives can serve as a low-variance alternative.

    • February 9: Department activity
    • January 26

    Caroline Jagtenberg: Optimal dispatching for Community First Responder systems

    For urgent patients, traditional ambulance response may be supplemented with trained volunteers who are dispatched via an app. How many volunteers should be alerted for one patient? And do we alert all volunteers at time 0, or does it make sense to wait until the first few have accepted/rejected?

    • January 11: New year's drinks
  • 2022

    • December 1

    Chris Franssen: The Feature-Based Network Fitting Framework: an approach for fitting structural features in networks

    In this paper, we introduce the Feature-Based Network Fitting Framework for constructing continuously (and non-negatively) weighted networks of small to medium size (i.e., up to a few hundred nodes), satisfying prespecified features. We show how this framework can produce randomized networks on the node-level, while maintaining global features of a confidential network. Additionally, we show how to transform a network such that it satisfies an additional feature - providing a valuable tool in policymaking for network management and regulators.

    • November 24

    Dirk Briskorn: Single-machine scheduling with an external resource

    We study a single-machine scheduling problem with an external resource, which is rented for a non-interrupted period. We look at four classes of problems with an external resource: a class of problems where the renting period is budgeted and the scheduling cost needs to be minimized, a class of problems where the scheduling cost is budgeted and the renting period needs to be minimized, a class of two-objective problems where both the renting period and the scheduling cost are to be minimized, and a class of problems where a linear combination of the scheduling cost and the renting period is minimized. We provide a thorough complexity analysis (NP-hardness proofs and (pseudo-)polynomial algorithms) for different members of these four classes.

    • November 3

    Buğra Çınar: Integrating Preferences and Satisfaction of Supply Chain Actors in Last-Mile Delivery

    One of the recent innovations in urban (last-mile) distribution is crowdsourced delivery, where deliveries are made by occasional drivers who want to utilize their excess resources (unused transportation capacity) by performing deliveries in exchange for some compensation. Utilizing occasional drivers presents new challenges since (in contrast to traditional couriers) neither their availability nor their behavior in accepting the delivery offers is certain. We consider a setting in which compensation-dependent acceptance probabilities are explicitly considered in the process of assigning delivery tasks to occasional drivers.

    • October 27

    Selin Hülagü: Integrating Life Cycle Assessment and Supply Chain Network Optimization

    Life Cycle Assessment (LCA) appears to be one of the most promising approaches to systematically assess the environmental impacts of supply chains. Decisions on the network structure to bring products and/or services to customers are the typical focal point of supply chain network optimization (SCO). This research aims to integrate LCA and SCO to optimize both the overall network design and the life cycle environmental impacts of the products supplied. We show how such an integration can be modelled and discuss its potential benefits.

    • October 6

    Tjalling Veenstra: Minimizing the expected shortest path weight for stochastic networks

    We consider a weighted network whose arcs independently become non-operational with a certain failure probability in a random experiment. The failure probability of an arc can be decreased at a given cost. Constrained by a budget, the objective is to decrease the failure probabilities of a subset of the arcs so as to minimize the expected shortest path weight. We propose approaches to determine optimal and approximate solutions for this problem.

    • Summer break
    • June 10

    Leen Stougie: The Online Traveling Salesman Problem with Predictions

    A recent vibrant line of research aims at incorporating error-prone predictions into online algorithms. In this talk I will show results within this framework for the online traveling salesman problem (OLTSP). I will show the typical ingredients that such analysis requires and the typical statements about performance that one may expect to see.

    • April 29

    Reinout Heijungs: Eco-efficiency by BASF: work-in-progress or hoax?

    Eco-efficiency is generally defined as the ratio of an economic and an environmental variable. This interpretation is also cited in connection with its most popular implementation, known as the "BASF eco-efficiency portfolio analysis". There is, however, something strange about this. A ratio is easily visualized as a slope, but BASF's method works with a distance, which can be formulated as a weighted sum, not as a ratio. Here we critically analyze this shift of paradigm.

    • March 11

    Caroline Jagtenberg: Modeling Emergency Medical Service Volunteer Response

    Out of hospital cardiac arrest requires immediate treatment and patient survival can be improved by combining traditional ambulance response with the dispatch of volunteers alerted via an app. How many volunteers are needed, and from where should they be recruited?

    • February 11

    Andreas Wiese: A PTAS for Unsplittable Flow on a Path

    We present a PTAS for the Unsplittable Flow on a Path problem which is in a sense a temporal extension of the classical knapsack problem, the difference being that each item needs the knapsack only during a certain time interval. The problem has applications and connections to several other settings like scheduling, resource allocation, and multi-commodity flow.

    • January 28

    René Sitters: Polynomial Time Approximation Schemes for the Traveling Repairman and Other Minimum Latency Problems 

    In the Traveling Repairman Problem, one needs to find a tour visiting all points in a metric space with the objective of minimizing the average arrival time at the points. We show how to obtain (1+epsilon)-approximation algorithms (with epsilon>0 arbitrarily small) for several variants of this problem. See https://doi.org/10.1137/19M126918X

    • January 14

    Denise Tönissen: Using 3D-printing in disaster response: The two-stage stochastic 3D-printing knapsack problem

    When to pack and use 3D-printers in disaster response operations? To answer this question, we introduce and solve a new type of problem, which we call the two-stage stochastic 3D-printing knapsack problem.

  • 2021

    • December 10

    Dirk Briskorn: Scheduling Interfering Gantry Cranes - A Bottom-Up Approach 

    When scheduling multiple gantry cranes operating on the same storage block, interference of these cranes needs to be taken into account. We present approaches developed in the course of a bottom-up approach, where we first focused on the subproblem of resolving interference and then employed the corresponding algorithms when tackling the holistic scheduling problem.

    • November 26

    Pascal Wissink: Using presence pattern predictors for delivery-efficient route optimization

    The notion that demographic and temporal factors retain predictive power for customer presence during parcel delivery suggests that a ‘good’ route depends on more than deterministic distances alone. In fact, the oft-neglected costs associated with redelivery attempts constitute more than 5% of the overall route length in some of Amsterdam’s districts. In this talk, I will present a simplified problem that illustrates how presence predictors can be exploited to incorporate the expected cost of redelivery attempts in earlier stages of route optimization.

    • November 12

    Nanne Dieleman: Solving ranking problems in DTMCs with stochastic optimization techniques

    In this talk, I will discuss our ongoing research on using stochastic optimization techniques (in our case, SPSA) to solve ranking problems based on the stationary distribution of DTMCs. I will describe the SPSA technique and show some recent results.

    • October 29

    Bernd Heidergott: From randomized algorithms for deterministic problems to (partially) deterministic algorithms for problems including randomness

    In this lecture, I want to identify an approach to integrating the probabilistic and deterministic paradigms which may lead to new and meaningful research. The key questions are: (i) Can we adapt deterministic algorithms to compensate for randomness? (ii) Can we perform a risk analysis of the output of deterministic algorithms? (iii) Can we trade off partial solutions per instance against a statistical sampling of instances?

    • October 15

    Christian Franssen: Feature-Based Network Sampling: An approach for sampling weighted graphs satisfying hard constraints

    In many disciplines, the analysis of networks is obstructed by the confidentiality of inter-entity relations. In my master’s thesis, I introduce Feature-Based Network sampling (FBNS) as a tool for such cases. In this talk, I will explain the method, some of its applications, and some challenges to solve.

    • October 1

    Tim Oosterwijk: Modelling Flow Dynamics with Flow over Time

    In this talk, I will introduce a basic and widely used model that allows us to track particles over time as they traverse a network from their origin to their destination. I will mention our result on its price of anarchy and share some interesting open questions and related models.

    • September 17

    Andreas Wiese: How theory helps to solve a problem in avionics industries

    We study a scheduling problem that appears when constructing the onboard-computer of a modern plane. We show how this problem can be solved computationally with a non-standard IP formulation for the problem, using properties of the instances of our industrial partner Boeing.
