Wasserstrom Lecture Series
The Role of Optimization in Operations Management Applications
Georgia Perakis, MIT
May 28, 2024
Abstract: Data-driven decision-making has garnered growing interest due to the increase in data availability in recent years. With that growth, many opportunities as well as challenges arise. Optimization can play an important role in addressing these challenges in both predictive and prescriptive tasks in Operations Management applications. In this talk, we discuss some of our recent work that highlights how to integrate optimization into data-driven decision-making when optimization is handled in an offline manner using trained Machine Learning models. We focus on how to optimize over already-trained objective functions that arise either from tree ensemble predictive models or from neural network models in order to recommend better decisions.
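To give a concrete sense of this prescriptive setup, the sketch below trains a small tree ensemble as a surrogate objective and then searches a grid of candidate decisions. It is only an illustrative stand-in for the optimization formulations discussed in the talk; the pricing data, model choice, and candidate grid are all hypothetical.

# Illustrative sketch: optimize a decision over an already-trained predictive model.
# Hypothetical setup: a tree ensemble predicts profit as a function of a price decision.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
prices = rng.uniform(1.0, 10.0, size=(500, 1))                               # historical price decisions
profit = 20 * prices[:, 0] - 2 * prices[:, 0] ** 2 + rng.normal(0, 1, 500)   # noisy observed outcomes

model = GradientBoostingRegressor().fit(prices, profit)   # predictive model trained offline

# Prescriptive step: optimize the trained objective over a grid of candidate decisions
# (a simple stand-in for the tree-ensemble and neural-network formulations in the talk).
candidates = np.linspace(1.0, 10.0, 200).reshape(-1, 1)
predicted = model.predict(candidates)
best = candidates[np.argmax(predicted), 0]
print(f"recommended price: {best:.2f}, predicted profit: {predicted.max():.2f}")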
Biography: Georgia Perakis is the John C Head III Dean (Interim) of the MIT Sloan School of Management and a Professor of Operations Management, Operations Research & Statistics at MIT Sloan. She is on leave from her roles as co-director of the Operations Research Center and Associate Dean for Social and Ethical Responsibility in Computing (SERC) in the Schwarzman College of Computing and MIT Sloan. Her widely published research has received many awards and focuses on analytics/AI, in particular on the intersection of optimization and machine learning, with applications in pricing, revenue management, supply chain, and healthcare, among others. She received the PECASE Award from the White House Office of Science and Technology Policy. In 2016, she was elected an INFORMS Fellow, and in 2021 a Distinguished MSOM Fellow. Perakis has a passion for supervising PhD, master's, and undergraduate students, having graduated 30 PhD and 59 master's students. She has received numerous teaching awards, including the Graduate Student Council Teaching Award (2002), the Samuel M. Seegal Award (2012), the Jamieson Prize for Excellence in Teaching (2014), and the Teacher of the Year Award (2017) at MIT Sloan. Perakis is currently the Editor-in-Chief of the M&SOM journal and has served as an editor at a number of other publications. She holds a BS in mathematics from the University of Athens, as well as an MS and a PhD in applied mathematics from Brown University.
On Some Challenges In The Practice Of Analytics
Nimrod Megiddo, IBM
May 16, 2023
Abstract: There are some fundamental difficulties arising in the practical application of theoretical methodologies for making decisions based on data. The talk will present some examples, including formulation of optimization models, causal inference, and game theory.
Biography: Nimrod Megiddo received his Ph.D. in mathematics from the Hebrew University of Jerusalem. He is an IBM Distinguished Research Scientist, was formerly Professor of Statistics & Operations Research at Tel Aviv University, and has taught at Stanford University, Carnegie Mellon University, and Northwestern University. Megiddo has contributed to the areas of machine learning, optimization, game theory, algorithms and complexity theory, computational geometry, and operations research in general. His contributions have been recognized by the John von Neumann Theory Prize and the Lanchester Prize of INFORMS, the USENIX FAST Test-of-Time Award, and the INFORMS Computing Society Prize. He also holds more than 80 patents in various areas; some of his inventions were sold to Google, Twitter, Intel, SAP, Facebook, and eBay. Megiddo served as the Editor-in-Chief of Mathematics of Operations Research from 2004 to 2009 and has been a member of the editorial boards of several other journals in operations research and computer science. He is the Editor-in-Chief of Discrete Optimization. Megiddo is a Fellow of INFORMS, the Game Theory Society, and the Society for the Advancement of Economic Theory.
Interpreting Deep Neural Networks Towards Trustworthiness
Bin Yu, Statistics and EECS, UC Berkeley
May 3, 2022
Abstract: Recent deep learning models have achieved impressive predictive performance by learning complex functions of many variables, often at the cost of interpretability. This lecture first defines interpretable machine learning in general and introduces the agglomerative contextual decomposition (ACD) method for interpreting neural networks. Extending ACD to the scientifically meaningful frequency domain, an adaptive wavelet distillation (AWD) interpretation method is developed. AWD is shown both to outperform deep neural networks and to be interpretable in two prediction problems from cosmology and cell biology. Finally, a quality-controlled data science life cycle is advocated for building any model for trustworthy interpretation, and a Predictability, Computability, and Stability (PCS) framework is introduced for such a data science life cycle.
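ACD and AWD are specialized methods developed by the speaker and her collaborators; as a much simpler, generic illustration of the underlying task of attributing a trained network's predictions to its inputs, the sketch below uses permutation importance on synthetic data. It is an assumed stand-in, not the methods from the talk.

# Generic illustration of model interpretation via permutation importance.
# This is a stand-in for the talk's ACD/AWD methods, which are more specialized;
# the data and model here are hypothetical.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 5))
y = 3 * X[:, 0] - 2 * X[:, 2] + rng.normal(0, 0.1, 400)   # only features 0 and 2 matter

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=1)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: importance {score:.3f}")        # large drops flag the features the network relies on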
Biography: Bin Yu is Chancellor's Distinguished Professor and Class of 1936 Second Chair in the departments of statistics and EECS at UC Berkeley. She leads the Yu Group, which consists of students and postdocs from Statistics and EECS. She was formally trained as a statistician, but her research extends beyond the realm of statistics. Together with her group, her work has leveraged new computational developments to solve important scientific problems by combining novel statistical machine learning approaches with the domain expertise of her many collaborators in neuroscience, genomics, and precision medicine. She and her team develop relevant theory to understand random forests and deep learning, for insight into and guidance for practice. She is a member of the U.S. National Academy of Sciences and of the American Academy of Arts and Sciences. She is a Past President of the Institute of Mathematical Statistics (IMS), a Guggenheim Fellow, Tukey Memorial Lecturer of the Bernoulli Society, Rietz Lecturer of IMS, and a COPSS E. L. Scott Award winner. She holds an Honorary Doctorate from the University of Lausanne (UNIL), Faculty of Business and Economics, in Switzerland. She has recently served on the inaugural scientific advisory committee of the UK's Alan Turing Institute for data science and AI, and serves on the editorial board of the Proceedings of the National Academy of Sciences (PNAS).
Stochastic Networks: Bottlenecks, Entrainment and Reflection
Ruth J. Williams, University of California, San Diego
May 11, 2021
Abstract: Stochastic models of complex networks with limited resources arise in a wide variety of applications in science and engineering, e.g., in manufacturing, transportation, telecommunications, computer systems, customer service facilities, and systems biology. Bottlenecks in such networks cause congestion, leading to queuing and delay. Sharing of resources can lead to entrainment effects. Understanding the dynamic behavior of such modern stochastic networks presents challenging mathematical problems. This talk will describe some recent developments and open problems in this area. A key feature will be dimension reduction, resulting from entrainment due to resource sharing. An example of bandwidth sharing in a data network will be featured. For background reading, see the survey article: R. J. Williams, Stochastic Processing Networks, Annu. Rev. Stat. Appl. 2016. 3:323–45.
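As a toy illustration of how a bottleneck generates congestion and delay (not one of the network models from the talk), the following simulates a single-server queue with exponential interarrival and service times and shows the mean wait growing sharply as utilization approaches one; all parameters are hypothetical.

# Toy illustration of bottleneck congestion: the mean waiting time in an M/M/1 queue
# grows sharply as the arrival rate approaches the service rate.
# This is only a sketch, not a model from the talk.
import random

def mm1_mean_wait(arrival_rate, service_rate, num_customers=200_000, seed=0):
    rng = random.Random(seed)
    wait_total, prev_departure, clock = 0.0, 0.0, 0.0
    for _ in range(num_customers):
        clock += rng.expovariate(arrival_rate)            # next arrival time
        start = max(clock, prev_departure)                # wait if the server is still busy
        wait_total += start - clock
        prev_departure = start + rng.expovariate(service_rate)
    return wait_total / num_customers

for lam in (0.5, 0.8, 0.9, 0.95):
    print(f"utilization {lam:.2f}: mean wait ~ {mm1_mean_wait(lam, 1.0):.2f}")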
Biography: Ruth Williams holds the Charles Lee Powell Chair in Mathematics at the University of California, San Diego (UCSD). She is a mathematician who works in probability theory, especially on stochastic processes and their applications. She is known for her foundational work on reflecting diffusion processes in domains with corners, for co-development with Maury Bramson of a systematic approach to proving heavy traffic limit theorems for multiclass queuing networks, and for the development of fluid and diffusion approximations for the analysis and control of more general stochastic networks, including those described by measure-valued processes. Her current research includes the study of stochastic models of complex networks, for example, those arising in Internet congestion control and systems biology.
Williams studied mathematics at the University of Melbourne where she earned her Bachelor of Science (Honors) and Master of Science degrees. She then studied at Stanford University where she earned her Ph.D. degree in Mathematics. She had a postdoc at the Courant Institute of Mathematical Sciences in New York before taking up a position as an Assistant Professor at the University of California, San Diego (UCSD). She has remained at UCSD during her career, where she is now a Distinguished Professor of Mathematics.
She is an elected member of the US National Academy of Sciences, an elected fellow of the American Academy of Arts and Sciences, an elected Corresponding Member of the Australian Academy of Science, an inaugural fellow of the American Mathematical Society, and a fellow of the Institute for Operations Research and the Management Sciences, the American Association for the Advancement of Science, the Institute of Mathematical Statistics, and the Society for Industrial and Applied Mathematics. Williams has been a Guggenheim Fellow, an Alfred P. Sloan Fellow, and a National Science Foundation Presidential Young Investigator. In 2016, Williams was awarded the John von Neumann Theory Prize by the Institute for Operations Research and the Management Sciences, jointly with Martin I. Reiman. At the annual INFORMS meeting in 2017, she was awarded the 2017 Award for the Advancement of Women in Operations Research and the Management Sciences.
The Traveling Salesman Problem: Postcards From The Edge Of Impossibility
William Cook, University of Waterloo and Johns Hopkins University
April 2, 2019
Abstract: Given a collection of points, the TSP asks for the shortest route to visit them all. Simple enough. But even a whisper of the problem strikes fear in the heart of the computing world. Last year, a Washington Post article reported it would take "1,000 years to compute the most efficient route between 22 cities."
In an ultimate battle of math+engineering versus the impossible, the impossible wins: it is likely no TSP solution method can have good performance on every data set as the number of points goes off to infinity. That said, the 1,000-year claim ignores over 70 years of intense study. A 22-city TSP can be handled in a snap with modern algorithms, even on an iPhone. Going larger, we describe techniques that have been used to solve to precise optimality examples with nearly 50,000 points and Google Map walking distances. And if we have a couple of million points to visit, say, the nearest stars to our sun, then my money is on the math. Indeed, for this particular example, with 2,079,471 stars, we have a route that is guaranteed to be no more than 1.00002 times longer than a shortest possible solution.
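To make the 22-city point concrete, here is a small heuristic sketch, not the exact-optimization machinery behind the results described above: a nearest-neighbor tour followed by 2-opt improvement on 22 random points, which runs in a fraction of a second. The coordinates are hypothetical, and the result is a good tour rather than a certified optimum.

# Illustrative sketch (not the exact solvers discussed in the talk):
# nearest-neighbor construction plus 2-opt improvement for a 22-city TSP on random points.
import math, random

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(22)]

def dist(a, b):
    return math.dist(pts[a], pts[b])

def tour_length(tour):
    return sum(dist(tour[i], tour[(i + 1) % len(tour)]) for i in range(len(tour)))

# Nearest-neighbor construction.
tour, unvisited = [0], set(range(1, 22))
while unvisited:
    nxt = min(unvisited, key=lambda j: dist(tour[-1], j))
    tour.append(nxt)
    unvisited.remove(nxt)

# 2-opt: keep reversing segments while doing so shortens the tour.
improved = True
while improved:
    improved = False
    for i in range(1, len(tour) - 1):
        for j in range(i + 1, len(tour)):
            new_tour = tour[:i] + tour[i:j][::-1] + tour[j:]
            if tour_length(new_tour) < tour_length(tour) - 1e-12:
                tour, improved = new_tour, True

print(f"22-city heuristic tour length: {tour_length(tour):.3f}")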
Complexity theory suggests there are limits to the power of general-purpose computational techniques, in engineering, science and elsewhere. But what are these limits and how widely do they constrain our quest for knowledge? The TSP can play a crucial role in this context, demonstrating whether or not focused efforts on a single, possibly unsolvable, problem will produce results beyond our expectations.
Biography: William Cook is a Professor in Applied Mathematics and Statistics at Johns Hopkins University and a University Professor in Combinatorics and Optimization at the University of Waterloo, where he received his Ph.D. in 1983. Bill was elected a SIAM Fellow in 2009, an INFORMS Fellow in 2010, a member of the National Academy of Engineering in 2011, and a Fellow of the American Mathematical Society in 2012. He is the author of the popular book In Pursuit of the Traveling Salesman: Mathematics and the Limits of Computation. Bill is a former Editor-in-Chief of the journals Mathematical Programming (Series A and B) and Mathematical Programming Computation. He is a past chair of the Mathematical Optimization Society and a past chair of the INFORMS Computing Society.
Air Transportation Optimization
Cynthia Barnhart, Chancellor, Massachusetts Institute of Technology
January 23, 2018
Abstract: Air transportation has a long history of using operations research models and algorithms for decision making. In this talk, I will briefly review that history by providing examples illustrating the evolution of approaches developed to solve classical airline problems such as crew scheduling, fleet assignment, aircraft routing, and flight network design. I will describe how over the years the models have become more sophisticated, accounting for multiple sources of uncertainty, competitive effects, passenger choice, and dynamic decision making, to name a few. Using data from airline operations, I will describe and quantify the impacts of these advanced techniques on passengers, airlines, and the aviation system.
Bio: Cynthia Barnhart is MIT’s Chancellor and the Ford Foundation Professor of Engineering. At MIT, she previously served as Associate and Acting Dean for the School of Engineering and co-directed the Operations Research Center and the Center for Transportation and Logistics. Her research focuses on building mathematical programming models and large-scale optimization approaches for transportation and logistics systems. Barnhart is a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and has served as the President of the Institute for Operations Research and the Management Sciences and in editorial roles for the flagship journals in her discipline.
Statistical Learning with Sparsity
Trevor Hastie, Stanford University
November 1, 2016
Abstract: In a statistical world faced with an explosion of data, regularization has become an important ingredient. In many problems, we have many more variables than observations, and the lasso penalty and its hybrids have become increasingly useful. This talk presents a general framework for fitting large-scale regularization paths for a variety of problems. We describe the approach, and demonstrate it via examples using our R package GLMNET. We then outline a series of related problems using extensions of these ideas.
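The talk demonstrates the framework with the speaker's R package GLMNET; as a rough Python analogue on synthetic data (an assumed stand-in, not the talk's code), the sketch below computes a lasso regularization path with scikit-learn in a setting with many more variables than observations.

# Rough Python analogue of fitting a lasso regularization path
# (the talk uses the R package GLMNET); the data here are synthetic.
import numpy as np
from sklearn.linear_model import lasso_path

rng = np.random.default_rng(0)
n, p = 100, 500                                   # many more variables than observations
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = (2.0, -1.5, 1.0, 0.5, -0.5)            # sparse ground truth
y = X @ beta + rng.normal(0, 0.5, n)

alphas, coefs, _ = lasso_path(X, y, n_alphas=50)  # coefficient paths over a grid of penalties
for a, c in zip(alphas[::10], coefs.T[::10]):
    print(f"lambda={a:.3f}: {np.count_nonzero(c)} nonzero coefficients")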
Bio: Trevor Hastie received his university education from Rhodes University, South Africa (BS), the University of Cape Town (MS), and Stanford University (Ph.D. in Statistics, 1984). His first employment was with the South African Medical Research Council in 1977, during which time he earned his MS from UCT. In 1979 he spent a year interning at the London School of Hygiene and Tropical Medicine, the Johnson Space Center in Houston, Texas, and the Biomath department at Oxford University. He joined the Ph.D. program at Stanford University in 1980. After graduating from Stanford in 1984, he returned to South Africa for a year with his earlier employer, the SA Medical Research Council. He returned to the USA in March 1986 and joined the statistics and data analysis research group at what was then AT&T Bell Laboratories in Murray Hill, New Jersey. After eight years at Bell Labs, he returned to Stanford University in 1994 as Professor in Statistics and Biostatistics. In 2013, he was named the John A. Overdeck Professor of Mathematical Sciences. His main research contributions have been in applied statistics; he has published over 180 articles and has co-written four books in this area: "Generalized Additive Models", "The Elements of Statistical Learning", "An Introduction to Statistical Learning, with Applications in R", and "Statistical Learning with Sparsity". He has also made contributions in statistical computing, co-editing (with J. Chambers) a large software library of modeling tools in the S language ("Statistical Models in S", Wadsworth, 1992), which forms the foundation for much of the statistical modeling in R. His current research focuses on applied statistical modeling and prediction problems in biology and genomics, medicine, and industry.
Incremental Proximal and Augmented Lagrangian Methods for Convex Optimization: A Survey
Dimitri Bertsekas, Massachusetts Institute of Technology
April 12, 2016
Abstract: Incremental methods deal effectively with an optimization problem of great importance in machine learning, signal processing, and large-scale and distributed optimization: the minimization of the sum of a large number of convex functions. We survey these methods and we propose incremental aggregated and nonaggregated versions of the proximal algorithm. Under cost function differentiability and strong convexity assumptions, we show linear convergence for a sufficiently small constant stepsize. This result also applies to distributed asynchronous variants of the method, involving bounded interprocessor communication delays.
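As a minimal illustration of this class of methods, and not of the talk's exact algorithms or convergence analysis, the sketch below applies an incremental proximal iteration with a small constant stepsize to a least-squares problem, where the proximal step for each quadratic component has a closed form; the data are synthetic.

# Minimal sketch of an incremental proximal method for minimizing
# sum_i f_i(x) with f_i(x) = 0.5 * (a_i . x - b_i)^2 (a least-squares example).
# It illustrates the class of methods surveyed, not the talk's exact algorithms.
import numpy as np

rng = np.random.default_rng(0)
m, d = 200, 10
A = rng.normal(size=(m, d))
x_true = rng.normal(size=d)
b = A @ x_true + rng.normal(0, 0.01, m)

x = np.zeros(d)
alpha = 0.05                                  # small constant stepsize
for epoch in range(100):
    for i in rng.permutation(m):              # process one component f_i at a time
        a_i, b_i = A[i], b[i]
        residual = a_i @ x - b_i
        # Closed-form proximal step for a single quadratic component:
        # x <- argmin_z { alpha * f_i(z) + 0.5 * ||z - x||^2 }
        x = x - alpha * residual / (1.0 + alpha * (a_i @ a_i)) * a_i

print("distance to least-squares solution:",
      np.linalg.norm(x - np.linalg.lstsq(A, b, rcond=None)[0]))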
We then consider dual versions of incremental proximal algorithms, which are incremental augmented Lagrangian methods for separable equality-constrained optimization problems. Contrary to the standard augmented Lagrangian method, these methods admit decomposition in the minimization of the augmented Lagrangian, and update the multipliers far more frequently. Our incremental aggregated augmented Lagrangian methods bear similarity to several known decomposition algorithms, including the alternating direction method of multipliers (ADMM) and more recent variations. We compare these methods in terms of their properties, and highlight their potential advantages and limitations.
Bio: Dimitri P. Bertsekas' undergraduate studies were in engineering at the National Technical University of Athens, Greece. He obtained his MS in electrical engineering from the George Washington University, Washington, DC, in 1969, and his Ph.D. in system science in 1971 from the Massachusetts Institute of Technology.
Dr. Bertsekas has held faculty positions with the Engineering-Economic Systems Dept., Stanford University (1971-1974) and the Electrical Engineering Dept. of the University of Illinois, Urbana (1974-1979). Since 1979 he has been teaching at the Electrical Engineering and Computer Science Department of the Massachusetts Institute of Technology (M.I.T.), where he is currently McAfee Professor of Engineering. His research spans several fields, including optimization, control, large-scale computation, and data communication networks, and is closely tied to his teaching and book authoring activities. He has written numerous research papers, and sixteen books and research monographs, several of which are used as textbooks in MIT classes.
Professor Bertsekas was awarded the INFORMS 1997 Prize for Research Excellence in the Interface Between Operations Research and Computer Science for his book "Neuro-Dynamic Programming" (co-authored with John Tsitsiklis), the 2000 Greek National Award for Operations Research, the 2001 ACC John R. Ragazzini Education Award, the 2009 INFORMS Expository Writing Award, the 2014 ACC Richard E. Bellman Control Heritage Award for "contributions to the foundations of deterministic and stochastic optimization-based methods in systems and control," the 2014 Khachiyan Prize for Life-Time Accomplishments in Optimization, and the SIAM/MOS 2015 George B. Dantzig Prize. In 2001, he was elected to the United States National Academy of Engineering for "pioneering contributions to fundamental research, practice and education of optimization/control theory, and especially its application to data communication networks."
Dr. Bertsekas' recent books are "Introduction to Probability: 2nd Edition" (2008), "Convex Optimization Theory" (2009), "Dynamic Programming and Optimal Control, Vol. II: Approximate Dynamic Programming" (2012), "Abstract Dynamic Programming" (2013), and "Convex Optimization Algorithms" (2015), all published by Athena Scientific.
Financial Engineering
Paul Glasserman, Columbia Business School
April 7, 2015
Abstract: Financial engineering has traditionally addressed problems of portfolio selection, derivatives valuation, and risk measurement. This talk will provide an overview of more recent financial engineering problems that arise in the design and monitoring of the financial system. Several problems in this domain can be viewed as instances of stabilizing or destabilizing feedback. Some problems result from a combination of the two: actions that are stabilizing for individual agents can become destabilizing when agents interact. Other problems draw on traditional tools of the field. I will discuss specific modeling problems in the design of capital requirements, the measurement of counterparty risk, margin requirements for derivatives, and the effects of interconnections between financial institutions, drawing on joint work with several other researchers.
Bio: Paul Glasserman is the Jack R. Anderson Professor of Business at Columbia Business School, where he serves as research director of the Program for Financial Studies. In 2011-2012, he was on leave from Columbia, working full-time at the Office of Financial Research in the U.S. Treasury Department, where he currently serves as a part-time consultant. His work with the OFR has included research on stress testing, financial networks, contingent capital, and counterparty risk. Paul’s research recognitions include the INFORMS Lanchester Prize, the Erlang Prize in Applied Probability, and the I-Sim Outstanding Simulation Publication Award; he is also a past recipient of Risk magazine’s Quant of the Year award. Paul served as senior vice dean of Columbia Business School in 2004-2008 and was interim director of its Sanford C. Bernstein Center for Leadership and Ethics in 2005-2007.
Routing Optimization Under Uncertainty
Patrick Jaillet, Ph.D., Massachusetts Institute of Technology
April 29, 2014
Abstract: We consider various network routing problems under travel time uncertainty where deadlines are imposed at a subset of nodes. Corresponding nominal deterministic problems include variations of classical shortest path problems and capacitated multi-vehicle routing problems. After providing motivating examples, we will introduce several new mathematical frameworks for addressing a priori and adaptive versions of these problems, under varying degrees of uncertainty. We will show how some of these problems can be solved in a computationally tractable way. We will then compare their solutions to those of other stochastic and robust optimization approaches.
Joint work with Yossiri Adulyasak, Arthur Flajolet, Jin Qi, and Melvyn Sim.
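As a toy illustration of the a priori setting, and not of the frameworks introduced in the talk, the sketch below compares two fixed routes by the simulated probability of meeting a deadline under random travel times; the routes, travel-time distributions, and deadline are all hypothetical.

# Toy sketch of a priori route evaluation under travel-time uncertainty
# (not the optimization frameworks from the talk): compare two fixed routes
# by the estimated probability of reaching the destination before a deadline.
import random

random.seed(0)
# Each route is a list of (mean, std) pairs for its edge travel times (hypothetical).
route_short_risky = [(10.0, 6.0), (10.0, 6.0)]     # shorter expected time, high variance
route_long_safe   = [(12.0, 1.0), (12.0, 1.0)]     # longer expected time, low variance
deadline = 27.0

def on_time_probability(route, samples=100_000):
    hits = 0
    for _ in range(samples):
        total = sum(max(0.0, random.gauss(mu, sigma)) for mu, sigma in route)
        hits += total <= deadline
    return hits / samples

for name, route in [("short/risky", route_short_risky), ("long/safe", route_long_safe)]:
    print(f"{name}: P(on time) ~ {on_time_probability(route):.3f}")

Under these made-up numbers, the route with the longer expected travel time is the better a priori choice because it meets the deadline far more reliably, which is the kind of trade-off the deadline-constrained models address.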
Bio: Patrick Jaillet is the Dugald C. Jackson Professor in the Department of Electrical Engineering and Computer Science and a member of the Laboratory for Information and Decision Systems at MIT. He is also one of the two Directors of the MIT Operations Research Center. Before MIT, he held faculty positions at other universities. He received a Diplôme d'Ingénieur in France, followed by an SM in Transportation and a PhD in Operations Research from MIT. His current research interests include online and data-driven optimization. Dr. Jaillet was a Fulbright Scholar in 1990 and has received several awards, most recently the Glover-Klingman Prize. He is a Fellow of INFORMS and a member of SIAM.
A Flexible Point Process Model for Describing Arrivals to a Service Facility
Peter W. Glynn, Ph.D., Stanford University
April 16, 2013
Abstract: In many applied settings, one needs a description of incoming traffic to the system. In this talk, we argue that the Palm-Khintchine superposition theorem dictates that the process should typically look "locally Poisson". However, there are usually obvious time-of-day effects that should be reflected in the model. Furthermore, in many data sets, it appears that medium-scale burstiness is also present. In this talk, we consider a Poisson process that is driven by a mean-reverting process as a flexible vehicle for modeling such traffic. We argue that this model is tractable computationally, is parsimonious, has physically interpretable parameters, and can flexibly model different behaviors at different scales. We discuss estimation methodology and hypothesis tests that are relevant to this model, and illustrate the ideas with call center data. This work is joint with Jeff Hong and Xiaowei Zhang.
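In the spirit of that model, though not the speaker's exact formulation or estimation methodology, the sketch below simulates arrivals whose intensity is a deterministic time-of-day baseline modulated by a discretized mean-reverting factor; all parameters are hypothetical.

# Illustrative simulation, in the spirit of the model described: arrivals whose intensity
# is a time-of-day baseline modulated by a discretized mean-reverting
# (Ornstein-Uhlenbeck-type) factor. All parameters are hypothetical.
import math, random

random.seed(0)
T, dt = 24.0, 0.01                     # one "day", small time step (hours)
kappa, sigma = 2.0, 0.8                # mean-reversion speed and volatility of the factor

def baseline(t):                       # deterministic time-of-day effect (arrivals per hour)
    return 5.0 + 4.0 * math.sin(math.pi * t / 12.0) ** 2

arrivals, z, t = [], 0.0, 0.0
while t < T:
    # Euler step of the mean-reverting factor z (reverting to 0).
    z += -kappa * z * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
    lam = baseline(t) * math.exp(z)    # current stochastic intensity
    if random.random() < lam * dt:     # Bernoulli approximation of the Poisson count in [t, t+dt)
        arrivals.append(t)
    t += dt

print(f"simulated {len(arrivals)} arrivals over {T:.0f} hours")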
Bio: Peter W. Glynn is the current Chair of the Department of Management Science and Engineering at Stanford University. He received his Ph.D. in Operations Research from Stanford University in 1982. He then joined the faculty of the University of Wisconsin at Madison, where he held a joint appointment between the Industrial Engineering Department and Mathematics Research Center, and courtesy appointments in Computer Science and Mathematics. In 1987, he returned to Stanford, where he joined the Department of Operations Research. He is now the Thomas Ford Professor of Engineering in the Department of Management Science and Engineering, and also holds a courtesy appointment in the Department of Electrical Engineering. From 1999 to 2005, he served as Deputy Chair of the Department of Management Science and Engineering, and was Director of Stanford's Institute for Computational and Mathematical Engineering from 2006 until 2010. He is a Fellow of INFORMS and a Fellow of the Institute of Mathematical Statistics, has been co-winner of Best Publication Awards from the INFORMS Simulation Society in 1993 and 2008, was a co-winner of the Best (Biannual) Publication Award from the INFORMS Applied Probability Society in 2009, and was the co-winner of the John von Neumann Theory Prize from INFORMS in 2010. In 2012, he was elected to the National Academy of Engineering. His research interests lie in simulation, computational probability, queuing theory, statistical inference for stochastic processes, and stochastic modeling.
Operations Research and Public Health: A Little Help Can Go a Long Way
Margaret Brandeau, Ph.D., Stanford University
May 1, 2012
Abstract: How should the Centers for Disease Control and Prevention revise national immunization recommendations so that gaps in vaccination coverage will be filled in a cost-effective manner? What is the most cost-effective way to use limited HIV prevention and treatment resources? To what extent should local communities stockpile antibiotics for response to a potential bioterror attack? This talk will describe examples from past and ongoing model-based analyses of public health policy questions. It will also provide perspectives on key elements of a successful policy analysis and discuss ways in which such analysis can influence policy.
Bio: Margaret Brandeau's research focuses on the development of applied mathematical and economic models to support health policy decisions. Her recent work has focused on HIV prevention and treatment programs, programs to control the spread of hepatitis B virus, and preparedness plans for bioterror response. She is a Fellow of the Institute for Operations Research and the Management Sciences (INFORMS), and has received the President's Award from INFORMS (recognizing important contributions to the welfare of society), the Pierskalla Prize from INFORMS (for research excellence in health care management science), a Presidential Young Investigator Award from the National Science Foundation, and the Eugene L. Grant Teaching Award from Stanford, among other awards. Professor Brandeau earned a BS in Mathematics and an MS in Operations Research from MIT, and a PhD in Engineering-Economic Systems from Stanford University.