
2024 Purdue Operations Conference Speakers

August 23-25 | West Lafayette, Indiana


Keynote Speakers

Title: Overview & Insights on Integrated Supply Chain Finance (iSCF) Research: A Supply Chain-Centric View of Working Capital, Hedging, and Risk Management

Abstract: Integrated supply chain finance (iSCF) is a portfolio of effective operating, financial, and risk-mitigating practices and techniques within supply chains. These practices reflect the strategic concerns of participating firm agents (decision makers) within the chain and not only optimize the management of working capital for liquidity, but also make effective use of assets for firm profitability and risk control. The main theory and research areas within iSCF are financing working capital in supply chains; financial hedging in support of supply chain operations; integrated risk management in supply chains; and supply chain contracts and risk management. I will provide a research overview of each of these topics, with brief highlights of the foundational models and insights behind key results, and outline fertile topics for future research in this important interdisciplinary (operations and finance) area.

Bio: Dr. Panos Kouvelis is the Emerson Distinguished Professor of Supply Chain, Operations, and Technology at the Olin Business School, Washington University in St. Louis. He is also the Director of The Boeing Center for Supply Chain Innovation, a supply chain management research center. Prior to joining Olin, Panos Kouvelis served as an associate professor at the Fuqua School of Business at Duke University and as an assistant professor at the University of Texas at Austin. He has published three books and over 80 papers in top-quality academic journals. Kouvelis has held visiting appointments with the Graduate School of Business, University of Chicago, where he taught in the executive programs in Barcelona, Chicago and Singapore; WHU-Koblenz School of Management, Germany; and Singapore Management University, Singapore. He has consulted with and/or taught executive programs for Emerson, IBM, Dell Computers, Boeing, Hanes, Duke Hospital, Solutia, Express Scripts, Spartech, MEMC, Ingram Micro, Smurfit Stone, Reckitt & Colman, and Bunge on supply chain, operations strategy, inventory management, lean manufacturing, operations scheduling and manufacturing system design issues.

Title: Hierarchical and Mixed Leadership Games for Dynamic Supply Chains: Applications to Cost Learning and Co-op Advertising

Abstract: We consider two applications in dynamic stochastic supply chains. The first application is a decentralized two-period supply chain in which a manufacturer produces a product with benefits of cost learning and sells it through a retailer facing price-dependent demand. The manufacturer’s second-period production cost declines linearly in the first-period production, but with a random learning rate. The manufacturer may or may not have the inventory carryover option. We formulate the problems as two-period Stackelberg games and obtain their feedback equilibrium solutions explicitly. We then examine the impact of the mean learning rate and learning rate variability on the pricing strategies of the channel members, on the manufacturer’s production decisions, and on the retailer’s procurement decisions. We show that as the mean learning rate or the learning rate variability increases, the traditional double marginalization problem becomes more severe, leading to greater efficiency loss in the channel. We obtain revenue-sharing contracts that can coordinate the dynamic supply chain. The second application studies a novel manufacturer-retailer cooperative advertising game where, in addition to the traditional setup in which the manufacturer subsidizes the retailer's advertising effort, we also allow reverse support from the retailer to the manufacturer. This is modeled as a mixed leadership game in which one player is a leader on some decisions and a follower on other decisions. We find an equilibrium that can be expressed by the solution of a set of algebraic equations. We then conduct an extensive numerical study to assess the impact of model parameters on the equilibrium.
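
A minimal sketch of the cost-learning dynamic in the first application, with illustrative notation that is not taken from the talk: if the manufacturer produces $q_1$ units in period 1 at unit cost $c_1$, the period-2 unit cost is

\[
c_2 \;=\; c_1 - \gamma\, q_1, \qquad \gamma \sim F, \quad \mathbb{E}[\gamma]=\mu, \quad \operatorname{Var}(\gamma)=\sigma^2,
\]

where $\gamma$ is the random learning rate. The feedback Stackelberg equilibrium is then obtained by backward induction: in period 2 each player optimizes given the realized $c_2$, and in period 1 the manufacturer (leader) anticipates the retailer's (follower's) best response.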

Bio: Suresh Sethi is the Eugene McDermott Chair Professor of Operations Management and Director of the Center for Intelligent Supply Networks at The University of Texas at Dallas. He has written 11 books and published over 400 research papers in the fields of manufacturing and operations management, finance and economics, marketing, and optimization theory. He initiated and developed the doctoral programs in operations management at both The University of Texas at Dallas and the University of Toronto. He serves on the editorial boards of several journals, including Production and Operations Management and SIAM Journal on Control and Optimization. He was named a Fellow of The Royal Society of Canada in 1994. Two conferences were organized and two books edited in his honor in 2005-06. Other honors include: IEEE Fellow (2001), INFORMS Fellow (2003), AAAS Fellow (2003), POMS Fellow (2005), IITB Distinguished Alum (2008), SIAM Fellow (2009), POMS President (2012), INFORMS Fellows Selection Committee (2014-16), and the Alumni Achievement Award, Tepper School of Business, Carnegie Mellon University (2015). In addition, the Production and Operations Management Society (POMS) has instituted the Suresh Sethi Best Interdisciplinary Paper Award, given every two years beginning in 2022. He has also supervised many doctoral and postdoctoral students.

Title: Incorporating Discrete Choice Models into Operations Management Models

Abstract: Over the last couple of decades, there has been enormous progress in using discrete choice models to understand how customers choose and substitute among products and in incorporating this understanding into operational models to decide which assortment of products to offer to customers or what prices to charge. We owe some of this progress to the increase in computational power, which allows us to build and solve more detailed operational models, but perhaps most of it is due to the fact that online sales channels started providing fine-grained data on how customers browse products. In this talk, we will go over fundamental discrete choice models that have been used in building operational assortment optimization and pricing models, overview the main algorithmic approaches that have been developed to solve these operational models, and identify research prospects. The focus will be on both static models that make one-shot assortment optimization or pricing decisions and dynamic models that explicitly capture the evolution of demand and inventories over time.
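
As a concrete reference point (standard material in this literature, not specific to the talk), the multinomial logit (MNL) model assigns a customer facing assortment $S$ the purchase probabilities

\[
\mathbb{P}(i \mid S) \;=\; \frac{v_i}{1 + \sum_{j \in S} v_j},
\]

where $v_i$ is the attraction value of product $i$ and the no-purchase option has attraction 1. The static assortment problem then reads $\max_{S \subseteq N} \sum_{i \in S} r_i\, \mathbb{P}(i \mid S)$ with $r_i$ the revenue of product $i$; under MNL it is solved by a revenue-ordered assortment, while richer choice models generally make the problem harder and motivate the algorithmic approaches surveyed in the talk.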

Bio: Huseyin Topaloglu is the Howard and Eleanor Morgan Professor in the School of Operations Research and Information Engineering at Cornell Tech. He holds a Ph.D. in Operations Research and Financial Engineering from Princeton. His recent research focuses on constructing tractable solution methods for large-scale network revenue management problems and building approximation strategies for retail assortment planning. Huseyin Topaloglu currently serves as an area editor for the Analytics in Operations area at Manufacturing and Service Operations Management.

Title: Responsible Operations Management

Abstract: In this talk I will first briefly describe research topics related to responsible operations management. Following this I will highlight future research opportunities in this area.

Bio: Jayashankar (Jay) Swaminathan is the GlaxoSmithKline Distinguished Professor of Operations at the Kenan-Flagler Business School, University of North Carolina at Chapel Hill. He is an internationally recognized thought leader in productivity and innovation in operations related to retail, healthcare, customization, sustainability, agriculture, e-commerce and emerging markets. He teaches courses in global operations, global execution models and global supply chain strategy and management. Dr. Swaminathan has published more than 100 articles on these topics and is the author of “Indian Economic Superpower: Fiction or Future?” He has received numerous awards, including the National Science Foundation CAREER Award, George Nicholson Prize, Schwabacher Fellowship and Weatherspoon Distinguished Research and Excellence in Teaching awards. He has been a principal investigator on grants from the National Science Foundation, Obama-Singh Knowledge Initiative and U.S. Department of Education. He is an inducted fellow of three prominent professional organizations: INFORMS (The Institute for Operations Research and the Management Sciences), POMS (The Production and Operations Management Society) and MSOM (Manufacturing and Service Operations Management Society), a recognition of his lifetime intellectual contributions. Dr. Swaminathan has consulted with numerous firms over the last two decades, including AGCO, Agilent, CEMEX, Cisco, IBM, Kaiser, McKinsey, Nokia, Public Health Institute, Railinc, Samsung, Sara Lee, Schaeffler Group, TVS Motors, UNICEF and the U.S. Navy.

Featured Faculty Speakers

Title: Effectiveness of supply-side financial incentives in ride-hailing networks with spatial demand imbalance and strategic drivers

Abstract: When matching riders with self-interested drivers in a spatial network, ride-hailing platforms face two important challenges: (i) there are spatial demand imbalances that require some repositioning of drivers to serve the total rider demand; (ii) the control of supply is partially decentralized in that drivers strategically decide whether to join the network, and if so, whether, and where, to reposition when not serving riders. For such networks, we address the question: Under decentralized repositioning, how effective are supply-side financial incentives in achieving the optimal centralized performance benchmark? We consider a stationary fluid model of a ride-hailing network with general topology and demand imbalance patterns in a game-theoretic framework with riders, drivers, and the platform. We show how the effectiveness of supply-side financial incentives under decentralized repositioning depends on (i) the network’s spatial (travel time) configuration, (ii) spatial driver wage flexibility, and (iii) the congestion-sensitivity of travel times. This is joint work with Andre Cire and Uta Mohring.

Bio: Philipp Afèche is on the Rotman faculty in the Operations Management and Statistics Area. His research focuses on modeling, analyzing and optimizing demand and capacity management decisions (including pricing, service design, scheduling and matching) for congestion-prone service systems such as on-demand transportation or healthcare delivery systems. Philipp serves as Associate Editor for Management Science and Operations Research and served as expert reviewer for the national funding agencies in Canada, Hong Kong, Israel and the United States. He is a past chair of the Service Management Special Interest Group of the Manufacturing and Service Operations Management Society.

Title: Selling Professional Products Under Expertise Migration Uncertainty

Abstract: For professional products such as musical instruments and sports gear, a consumer's quality preference is positively associated with the consumer's expertise level. A novice who initially chooses a low-quality product over a high-quality one may have an incentive to upgrade after advancing in expertise through professional training. Nonetheless, the outcome of professional training is highly uncertain, and consumers typically advance their expertise only with a small probability. This paper examines a firm's strategies for selling professional products when consumer expertise migrates with uncertainty. We find that consumer uncertainty about expertise migration allows the firm to implement expertise-based intertemporal segmentation as a strategy in selling a line of professional products. Through this intertemporal segmentation, the firm separates purchases of the low- and high-quality products across consumers at different stages of expertise migration. As a result, the firm may sell both products to the same consumer, before and after the consumer goes through professional training. When consumers are ex ante heterogeneous in quality preferences, the firm can implement ex-ante market segmentation. Interestingly, we show that intertemporal segmentation and market segmentation are in conflict, so the firm may forego the opportunity of market segmentation and implement intertemporal segmentation only. Moreover, we show that offering a trade-in credit facilitates intertemporal segmentation and encourages the firm to offer a product line. Finally, we find that the firm is more likely to offer a product line when selling to myopic consumers, and consumer myopia may hurt firm profit.

Bio: Rachel R. Chen is a Professor at the Graduate School of Management, University of California, Davis. She received her PhD in Management from the Johnson Graduate School of Management, Cornell University, in 2003. Her research focuses on pricing in markets with new technology, e-procurement, and distribution in supply chains and service operations. She has been a member of INFORMS since 1999.

Title: Cross Learning and Co-Learning with Operational Data Analytics (ODA)

Abstract: When data from a focal system is limited, typical machine learning approaches would call for migrating the experience of a related system with ample data through transfer learning, or leveraging the similarity of multiple systems with limited data through data pooling. We, instead, progressively develop learning solutions by exploring the inherent structural properties of the decision-making problem and the data-generation model. Building on the understanding of the parametric ODA solution, which is known to be uniformly optimal in the parametric setting, we develop non-parametric cross-learning solutions, which replicate the decision-making environment of the focal system with the ample data from a related system, and co-learning solutions, which achieve efficiency not only for the aggregated systems but also for the individual systems.

Bio: Professor Feng’s current research mostly focuses on the development of stochastic functions and data-integrated decisions. A significant portion of her work analyzes firms’ procurement, inventory and pricing strategies, and negotiations of sourcing contracts. She also works in the areas of subsidy design, resource planning, product development and proliferation management, economic growth models, and information system management. She served as a department editor for Production and Operations Management, and is currently an associate editor for Management Science and Manufacturing & Service Operations Management. She was named a POMS Fellow in 2020.

Title: How Not to Overpackage? -- AI for Sustainability in HelloFresh's Service Supply Chain

Abstract: Meal-kit services have been hot and trending, especially among the younger generation. However, overpackaging is a major challenge common to these services. Packaging materials, including ice packs and liners, ensure the quality of the meal kits delivered; yet too much packaging leaves a large carbon footprint and imposes psychological burdens on many customers. This paper investigates artificial intelligence solutions to adaptively make the packaging decision for each box and mitigate potential overpackaging for HelloFresh, the world’s largest meal-kit company and integrated food solutions group. We design contextual bandit algorithms that take advantage of the special structures we find in the packaging problem and of various contextual information such as transit conditions and box contents. Theoretically, our algorithm, Contextual One-Sided Arm Elimination, achieves an optimality guarantee with an O(\sqrt{T}) regret bound. Practically, we experiment with HelloFresh's real delivery datasets that contain hundreds of millions of records, identify and correct for issues such as confounding, and test our algorithm's performance. Given the enormous scale of HelloFresh's operations, our contextual bandit algorithm could potentially save millions of units of packaging materials per year, as well as the associated cost, energy and labor.
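
To make the setup concrete, here is a generic two-arm contextual bandit sketch in the spirit of the packaging decision. It is a standard LinUCB-style illustration under assumed contexts and rewards, not HelloFresh's data or the paper's Contextual One-Sided Arm Elimination algorithm.

# Illustrative two-arm contextual bandit for a packaging decision
# (generic LinUCB-style sketch; NOT the paper's Contextual One-Sided
# Arm Elimination algorithm, and all parameters below are made up).
import numpy as np

class TwoArmLinUCB:
    def __init__(self, dim, alpha=1.0, lam=1.0):
        self.alpha = alpha
        # One ridge-regression model per arm: A = lam*I + sum x x^T, b = sum r x
        self.A = [lam * np.eye(dim) for _ in range(2)]
        self.b = [np.zeros(dim) for _ in range(2)]

    def choose(self, x):
        # Play the arm with the larger upper confidence bound given context x.
        ucb = []
        for a in range(2):
            A_inv = np.linalg.inv(self.A[a])
            theta = A_inv @ self.b[a]
            bonus = self.alpha * np.sqrt(x @ A_inv @ x)
            ucb.append(theta @ x + bonus)
        return int(np.argmax(ucb))

    def update(self, x, arm, reward):
        self.A[arm] += np.outer(x, x)
        self.b[arm] += reward * x

# Toy usage: context = (transit time, box weight, ambient temperature);
# arm 0 = light packaging, arm 1 = heavy packaging; reward trades off
# spoilage risk against material use (synthetic ground truth below).
rng = np.random.default_rng(0)
bandit = TwoArmLinUCB(dim=3)
true_theta = [np.array([-0.5, -0.2, -0.8]), np.array([-0.1, -0.1, -0.2])]
cost = [0.0, 0.5]  # heavier packaging consumes more material
for t in range(1000):
    x = rng.uniform(0, 1, size=3)
    arm = bandit.choose(x)
    reward = true_theta[arm] @ x - cost[arm] + rng.normal(scale=0.1)
    bandit.update(x, arm, reward)

The paper's algorithm additionally exploits special structure of the packaging problem to eliminate arms; that refinement is omitted in the sketch above.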

Bio: Evelyn Xiao-Yue Gong is an Assistant Professor of Operations Management at the Tepper School of Business at Carnegie Mellon University. Gong’s main research develops artificial intelligence solutions for business operations, including supply chain management and environmental sustainability. She is also interested in reinforcement learning, assortment optimization for reusable resources, pure exploration, algorithms and data-driven decision-making. Gong completed her Ph.D. in Operations Research at MIT, and received First Place in the Best Dissertation Competition at the 2023 Annual Conference on Supply Chain Management in the Post-Pandemic and AI Age. Professor Gong's commentary has been recently featured in major media outlets including Wall Street Journal and Forbes Magazine.

Title: Foresee the Next Line: Customer Strategies and Information Disclosure in Tandem Queues

Abstract: Many services consist of multiple stages, where each stage requires some waiting before completion. For example, customers who visit the Apple Store join the check-in queue first and then wait in another queue to be served by the Genius Bar technician. Although customers may observe the queue in front of them, they usually have no information about the waiting situation in the next queue. Our paper aims to examine the impact of queue-length information on customers' strategic behavior in such systems. We assume a two-stage tandem queueing system, with an admission queue followed by a treatment queue. Customers observe the queue length upon arrival at each queue; they may balk or join and might later renege. We first study the fully observable model, in which queue-length information for both queues is available to customers at the time they arrive at the system. We calculate the equilibrium strategy and show that it is not necessarily a function of the total number of customers in the system. Next, we study the partially observable model, in which customers observe each queue length only upon arrival at it, i.e., they do not observe the second queue length when they arrive at the system. Although this is common practice, it is analytically more challenging. We find that in most cases the partially observable model yields higher throughput but lower social welfare compared to the fully observable model.
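
For intuition, a minimal simulation of the two-station tandem system under a fixed threshold joining rule is sketched below; the rates and threshold are arbitrary placeholders, and the equilibrium strategies and reneging behavior analyzed in the paper are not modeled here.

# Minimal CTMC simulation of a two-station tandem queue with threshold
# balking at station 1 (illustrative; all parameters are assumptions,
# not values from the talk).
import random

def simulate(lam=1.0, mu1=1.2, mu2=1.1, threshold=5, horizon=100_000):
    random.seed(0)
    t, n1, n2 = 0.0, 0, 0          # time, headcount at stations 1 and 2
    arrivals = balks = completions = 0
    while t < horizon:
        rates = [lam, mu1 if n1 > 0 else 0.0, mu2 if n2 > 0 else 0.0]
        total = sum(rates)
        t += random.expovariate(total)
        u = random.uniform(0, total)
        if u < rates[0]:                    # arrival: joins iff queue 1 is short
            arrivals += 1
            if n1 < threshold:
                n1 += 1
            else:
                balks += 1
        elif u < rates[0] + rates[1]:       # service completion at station 1
            n1 -= 1
            n2 += 1
        else:                               # service completion at station 2
            n2 -= 1
            completions += 1
    return completions / t, balks / max(arrivals, 1)

throughput, balk_fraction = simulate()
print(f"throughput ~ {throughput:.3f}, balking fraction ~ {balk_fraction:.3f}")

Comparing such simulations under different information regimes and joining rules is one simple way to see the throughput versus social-welfare trade-off described in the abstract.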

Bio: Ricky Roet-Green is an Associate Professor of Operations Management at Simon Business School, University of Rochester. Her research interests lie in modeling and analyzing the behavior of strategic customers in congestion-prone environments, particularly in service systems. Her work integrates queueing theory, game theory, and mechanism design, aiming to understand customers’ decisions and optimize system performance in terms of revenue and social welfare maximization.

Title: A Bilevel View for Fluid Stockout-Based Substitution

Abstract: We revisit the classic joint assortment and inventory planning problem under stockout-based substitution. As customers arrive sequentially and make substitution decisions based on the available assortment, the retailer needs to strategically determine the initial product set and inventory levels at the season's start to maximize expected total profits throughout the selling period. Despite extensive study, the structural properties of optimal solutions remain relatively unknown. We consider a fluid relaxation that offers a performance guarantee via an upper bound on the optimal value. We prove that this fluid problem is NP-hard; however, we can reformulate it as a bilevel optimization problem, which can be further simplified into a bilevel linear program under common choice models. This novel approach provides a fresh perspective on the interaction between sales and inventory under dynamic substitution. Furthermore, we establish that the objective function of the upper-level problem in the bilevel LP is Lipschitz continuous in the initial inventory under common choice models. These structural insights accommodate existing methods and heuristics for solving the fluid relaxation. In our numerical experiments, we showcase the application of these findings with a proposed heuristic.

Bio: I have been an assistant professor in Decision Sciences at the George Washington University School of Business since fall 2023. I earned my PhD from Duke University in 2023 with a concentration in Operations Management. My research interests center on customer choice modeling and revenue management problems emerging from new retail settings.

Title: Dynamic Capacity Management for Deferred Surgeries

Abstract: Healthcare needs are becoming increasingly uncertain, challenging the efficient allocation of provider resources and resulting in extended waiting times for diagnosis and treatment. These delays jeopardize health security not only by negatively affecting patients’ health and increasing the related treatment costs, but also by decreasing the provider’s revenue due to departures. For sudden changes in demand, current capacity management policies are rather ad hoc and either defer excess surgeries or expand capacities by a pre-determined factor, as experienced during the global COVID-19 pandemic with a prohibitively large number of deferrals. We studied four years of medical insurance claims of more than 15,000 hernia patients in the United States and observed the presence of uncertainty in surgery demand and patient departure. However, the endogeneity of the uncertainty to hospital operations renders existing capacity management approaches inapplicable. To this end, we develop an optimization framework in which uncertain parameters and their endogenous nature are modeled via multilinear functions. This nonlinear structure is addressed by two approaches based on robust and distributionally robust optimization. Both methods offer sizable improvements over alternative methods for hernia patients. Multiple operational insights into the solution properties are obtained from extensive sensitivity analysis. This framework is also applicable to more complex and significant situations, such as delayed screening or treatment of cancer or cardiac diseases.

Bio: Eojin Han is an assistant professor of Information Technology, Analytics and Operations in the Mendoza College of Business at the University of Notre Dame. Before joining Notre Dame, he was an assistant professor at Southern Methodist University for four years. His research broadly uses optimization and analytics to address operational challenges caused by limited information on uncertainty and arising in healthcare, supply chains and service systems. His scholarly work has been published in academic journals such as Operations Research and Management Science. He obtained his Ph.D. degree in Industrial Engineering and Management Sciences from Northwestern University and his B.S. degree in Mathematics and Electrical Engineering from Seoul National University.

Title: Learning to Price Supply Chain Contracts against a Learning Retailer

Abstract: The rise of big data analytics has automated the decision-making of companies and increased supply chain agility. In this paper, we study the supply chain contract design problem faced by a data-driven supplier (she) who needs to respond to the inventory decisions of the downstream retailer (he). Both the supplier and the retailer are uncertain about the market demand and need to learn about it sequentially over a fixed time horizon. In addition, the supplier does not know the retailer's inventory learning policy, which may change dynamically. The goal for the supplier is to develop data-driven pricing policies with sublinear regret bounds under a wide range of possible retailer inventory learning policies. To capture the dynamics induced by the retailer's inventory learning policy, we establish a connection with nonstationary online learning by following the notion of a variation budget. We start by making the observation that existing approaches for non-stationary online learning cannot precisely delineate the dynamics incurred by the retailer's inventory learning policy, and may lead to linear growth in the supplier's regret under some well-known retailer inventory learning policies. To overcome this challenge, we introduce a new notion of variation budget, which better quantifies the impact of the retailer's learning on the supplier's decision-making environment. We also demonstrate the advantages of our new model for the variation budget in our setting over those in the existing literature. We then proceed to propose dynamic pricing policies for the supplier for both discrete and continuous demand distributions. Our pricing policies lead to sublinear regret bounds for the supplier under a wide range of retailer inventory learning policies.  Our pricing policies empirically outperform those from the existing non-stationary online learning literature. At the managerial level, we answer affirmatively that there is a pricing policy with a sublinear regret bound for the supplier under a wide range of retailer inventory learning policies, even though she faces a learning retailer and an unknown demand distribution. Our work also provides a novel perspective in data-driven operations management where the principal has to learn to react to the learning policies employed by other agents in the system.
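
For readers unfamiliar with the term, a standard variation budget from the nonstationary online learning literature bounds the cumulative drift of the unknown reward functions, e.g.,

\[
\sum_{t=1}^{T-1} \sup_{p} \bigl| r_{t+1}(p) - r_t(p) \bigr| \;\le\; V_T,
\]

where $r_t(\cdot)$ is the decision maker's (here, the supplier's) reward function in period $t$ and $V_T$ is the budget. The abstract's point is that this generic notion does not capture the specific dynamics induced by the retailer's inventory learning, which is what motivates the refined variation budget proposed in the paper.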

Bio: Professor Haskell's research focuses on dynamic Operations Management problems, risk-aware decision-making, and optimization algorithms.  In the area of dynamic OM, he has investigated data-driven dynamic programming and its statistical properties, and fairness in resource allocation.  In the area of risk-aware decision-making, he has proposed several new preference robust optimization models.  In the area of optimization algorithms, he has developed new methods for semi-infinite programming and for performance analysis in online optimization.  He currently teaches supply chain analytics to undergraduate and MBA students in the Daniels School of Business.

Title: The role of supply chain in healthcare crises

Abstract: The confluence of healthcare delivery and public health crises necessitates innovative solutions and comprehensive oversight. This talk will explore two critical issues: optimizing nurse staffing amidst a national shortage and addressing the complexities of opioid supply chains contributing to the opioid crisis. Firstly, we partnered with Indiana University Health (IUH) to develop the Delta Coverage Analytics Suite, an advanced data and decision analytics system supporting a novel nurse staffing model across 16 hospitals statewide. This program employs a flexible pool of resource nurses who dynamically respond to short-term patient census fluctuations, significantly reducing understaffing and generating substantial cost savings. The analytics suite integrates a deep generative model for accurate patient census forecasting and a stochastic optimization model for optimal staffing decisions, demonstrating a groundbreaking approach to healthcare workforce management. Secondly, we investigate the opioid crisis, which claimed 69,000 lives in 2020, with prescription opioids accounting for the majority of abuse. Our research, utilizing the DEA’s ARCOS database, reveals that complex supply chains have exacerbated opioid dispensing, particularly in non-White communities. This complexity masked excessive opioid distribution, escaping DEA detection and highlighting racial disparities in regulatory policies. A fixed effects model indicates that increased supply chain complexity correlates with a significant rise in opioid dispensing, disproportionately impacting non-White populations.

Title: Fighting Plastic Pollution: Product Ban Regulation and Voluntary Compliance

Abstract: Plastic products are convenient but also create severe environmental concerns when end-of-life units are mismanaged and leak into the environment. While recycling rate regulation has been proposed as a remedy, the alternative regulatory approach of product ban regulation, which tackles the plastic waste problem by curbing product sales, has gained significant traction in recent years. However, the effects of the product ban are not well understood. For example, by restricting the sales volume, the product ban may be more effective than recycling rate regulation based on the “Reduce, Reuse, Recycle” waste management hierarchy. Meanwhile, its direct market intervention may compromise consumer freedom and firm profit. Another important observation is that, in the face of future product ban regulation, firms may engage in voluntary recycling as a proactive measure to induce a lower product ban stringency. In this paper, we study the economic and environmental implications of product ban regulation, while explicitly accounting for firms’ voluntary recycling incentives. We also compare the product ban with recycling rate regulation. Our analyses offer useful insights for various stakeholders. We show that when the production cost is high or existing recycling is low, the firm commits to higher voluntary recycling. However, in these cases, the regulator may set a more stringent product ban despite the higher firm efforts. Moreover, it can be more effective for the product ban to manage highly polluting products through the counter-intuitive strategy of encouraging higher voluntary recycling but relaxing the product ban stringency. We also show that the total recycling rate under the product ban, while chosen voluntarily by the firm, can be even higher than the regulator’s choice under recycling rate regulation. Moreover, the product ban is not necessarily more detrimental to firm profit despite its direct sales restriction.

Bio: Natalie (Ximin) Huang focuses her research on sustainable operations and supply chain management. Her current projects study the implications of various environmental regulations, such as waste management and carbon emissions regulations, for the triple bottom line. She is also interested in exploring how firms’ sustainable operations strategies, such as sharing economy business models and durable product resales, can be designed to improve their economic and environmental efficacy. Her research interests also span socially responsible operations topics, such as how firms can enhance their social value for underserved populations through product design improvements. Natalie is currently an Assistant Professor of Supply Chain and Operations at the Carlson School of Management at the University of Minnesota. She holds a Ph.D. in Operations Management from Georgia Institute of Technology and an MPhil and a BSc in Applied Mathematics from the University of Hong Kong.

Title: Properties of Two-Stage Stochastic Multi-Objective Linear Programs

Abstract: We consider a two-stage stochastic multi-objective linear program (TSSMOLP), which is a natural multi-objective generalization of the well-studied two-stage stochastic linear program. The second-stage recourse decision is governed by an uncertain multi-objective linear program whose solution maps to an uncertain second-stage nondominated set. The TSSMOLP then comprises the objective function, which is the Minkowski sum of a linear term and the expected value of the second-stage nondominated set, and the constraints, which are linear. Since the second-stage nondominated set is a random set, its expected value is defined through the selection expectation. The global Pareto set is defined as the collection of nondominated points in the image space of the TSSMOLP. We discuss properties of TSSMOLPs and the multifunctions that arise therein, as well as the implications of these properties for the future development of TSSMOLP solution methods. This is joint work with Akshita Gupta, School of Industrial Engineering, Purdue University.
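
A schematic formulation, with notation assumed for illustration rather than taken from the talk, parallels the scalar two-stage stochastic LP:

\[
\min_{x \,:\, Ax \le b} \; \Bigl( C x \;\oplus\; \mathbb{E}\bigl[\mathcal{N}(x,\xi)\bigr] \Bigr), \qquad
\mathcal{N}(x,\xi) \;=\; \operatorname{nd}\bigl\{ Q(\xi)\, y \;:\; W(\xi)\, y \ge h(\xi) - T(\xi)\, x,\; y \ge 0 \bigr\},
\]

where $\operatorname{nd}\{\cdot\}$ denotes the nondominated points of a set, $\oplus$ is the Minkowski sum, and the expectation of the random set $\mathcal{N}(x,\xi)$ is the selection expectation. “Minimizing” here means collecting, over the feasible $x$, the nondominated points of these image sets, which yields the global Pareto set described in the abstract.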

Bio: Susan R. Hunter is an associate professor in the School of Industrial Engineering at Purdue University. Her research interests include theoretical and algorithmic aspects of stochastic optimization in the presence of multiple performance measures with emphasis on asymptotics, computation, and application. In 2016, she received an NSF CAREER Award to work on multi-objective simulation optimization; that is, multi-objective optimization in which the objective functions can only be observed with stochastic error as the output of a black-box Monte Carlo simulation oracle. Her published works have been recognized by the INFORMS Computing Society in 2011, by IISE Transactions in 2017, and by The Operational Research Society in 2021. She currently serves as Program Chair for the 2024 Winter Simulation Conference, Vice President / President Elect of the INFORMS Simulation Society, and as an associate editor for Operations Research, Journal of Optimization Theory and Applications, and Flexible Services and Manufacturing Journal.

Title: Balancing Optimality and Diversity: Enhancing Human Decision-Making through Generative Curation

Abstract: The rapid increase in data availability has overwhelmed decision-makers with an abundance of choices and information. In response, there has been considerable work on creating optimal decision rules for a quantifiable objective. However, in many practical settings, human decision-makers must consider both explicit quantitative and implicit qualitative factors to make the final call. We introduce a general framework, termed “generative curation,” which generates optimal recommendations that account for both quantitative and qualitative objectives. We show that consideration of implicit qualitative factors naturally leads to a metric that measures the diversity of the generated solutions, transforming the problem into one of balancing quantitative optimality against qualitative diversity. Our proposed algorithm efficiently solves this optimization problem by generating a finite number of diverse and near-optimal solutions. We validate our approach with real-world datasets, showcasing its potential to enhance decision-making in complex settings.
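
As a toy illustration of the optimality-diversity trade-off, the sketch below greedily picks a diverse shortlist from near-optimal candidates; it is a generic heuristic for intuition only, not the generative curation framework itself.

# Greedy max-min diversity among near-optimal candidates (illustrative only;
# candidates, objective, and the eps tolerance are all made-up placeholders).
import numpy as np

def curate(candidates, objective, k, eps):
    """Pick k candidates within eps of the best objective value,
    greedily maximizing the minimum pairwise distance among picks."""
    scores = np.array([objective(c) for c in candidates])
    pool = [c for c, s in zip(candidates, scores) if s >= scores.max() - eps]
    picked = [max(pool, key=objective)]          # start from the best solution
    while len(picked) < k and len(picked) < len(pool):
        def min_dist(c):
            return min(np.linalg.norm(np.asarray(c) - np.asarray(p)) for p in picked)
        remaining = (c for c in pool
                     if not any(np.array_equal(c, p) for p in picked))
        picked.append(max(remaining, key=min_dist))  # assumes distinct candidates
    return picked

# Toy usage: candidates are 2-D decision vectors, objective favors large sums.
rng = np.random.default_rng(0)
cands = [rng.uniform(0, 1, 2) for _ in range(50)]
shortlist = curate(cands, objective=lambda c: c.sum(), k=3, eps=0.3)

Here eps caps how much quantitative optimality may be sacrificed for diversity; the framework in the talk formalizes and solves this trade-off far more generally.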

Bio: Michael Lingzhi Li is an Assistant Professor at the Harvard Business School, Technology and Operations Management Unit. His research focuses on end-to-end development of decision algorithms based on machine learning, causal inference and operations research, along with their implementation in hospitals, pharmaceutical companies, and public health organizations to fundamentally transform healthcare operations. He is the recipient of awards including INFORMS Edelman Finalist, INFORMS Pierskalla Award and the Innovative Applications in Analytics Award. In his free time, he enjoys hiking, swimming, and finding good new restaurants in Boston. 

Title: Optimal control of single-server queues: A wait-time approach

Abstract: This paper studies an optimal control problem for a single-server queue in which customers arrive according to a Poisson process. The service provider determines a policy offer for each arriving customer according to the system wait time. The customer, in turn, decides whether to accept the policy offer and join the queue. The objective is to maximize the average reward over an infinite horizon. The proposed model encompasses many practical service systems, with customers ranging from homogeneous with a known type to heterogeneous with unknown types. We illustrate the applicability of this general model through two practical service systems: discretionary services and make-to-order systems. The key contribution of this work is to show the existence and optimality of a stationary, wait-time-dependent policy. In addition, we characterize the structure of the optimal policy through the corresponding Hamilton-Jacobi-Bellman equation and reveal insights. The application to make-to-order systems leads to a mechanism design problem, which appears to be new in the literature. In particular, we find that when the wait time becomes longer, the provider should admit more patient customers while offering an option with a shorter job lead time. Interestingly, the offered payment may not be monotonic in the wait time.
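
One stylized way to write an average-reward Hamilton-Jacobi-Bellman equation for such a wait-time-controlled system, under modeling assumptions that may differ from the paper's, is

\[
g \;=\; -V'(w)\,\mathbf{1}\{w>0\} \;+\; \lambda\, \max_{u \in \mathcal{U}} \; q(w,u)\,\Bigl( r(w,u) + \mathbb{E}\bigl[V\bigl(w + B(u)\bigr)\bigr] - V(w) \Bigr),
\]

where $w$ is the current system wait time (decreasing at unit rate while positive), $\lambda$ the Poisson arrival rate, $u$ the offer made to an arriving customer, $q(w,u)$ the acceptance probability, $B(u)$ the added work, $r(w,u)$ the reward, $g$ the optimal long-run average reward, and $V$ the relative value function. The structural results in the abstract concern how the maximizing offer $u$ varies with $w$.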

Bio: Professor Lin’s research primarily focuses on developing optimal dynamic control policies in wait-time-based queueing systems, with applications to service systems and make-to-order systems. His work aims to capture complex interactions within queues and provide structural managerial insights. His research has been published in Management Science, a leading journal in the field, and he has presented his findings at major conferences, such as the INFORMS Annual Meeting, MSOM Conference, and POMS Annual Conference. Notably, he was honored with the Student Paper Award in 2021 from the POMS College of Sustainable Operations.

Title: Learning from Click Transition Data: The Effectiveness of Greedy Pricing Policy under Dynamic Product Availability

Abstract: We study how to utilize customers' random clicking behaviors to benefit online retailers' pricing strategies. We introduce a new dynamic attraction click model based on a Markov chain, which describes both purchase and click behaviors under product availability. Based on our click model, we propose an efficient data-driven framework to determine product prices that maximize expected revenue. To learn customers' preferences efficiently from high-dimensional click transition data, we exploit the similarities in click transition patterns across products, which are captured by the low-rank structure of the attraction matrix in our click model. Driven by the dynamic availability of products in practice, we also provide an algorithm to estimate the attraction matrix under dynamic product availability. This approach yields a small estimation error bound by leveraging the low-rank structure. When considering estimation and pricing decisions simultaneously, we demonstrate the effectiveness of a greedy online algorithm and derive a sublinear regret bound under dynamic product availability. Empirical investigations conducted on real-world data show that using click data along with purchase data can significantly reduce the prediction error associated with purchase behaviors, leading to a substantial increase in the anticipated revenue from pricing decisions.
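
A bare-bones illustration of exploiting low-rank structure in click-transition data is sketched below: a truncated-SVD estimate on toy sessions. It is generic; the paper's estimator and its correction for dynamic product availability are more involved.

# Illustrative low-rank estimation of a click-transition matrix from logged
# click sequences (generic truncated-SVD sketch with made-up toy data).
import numpy as np

def estimate_low_rank_transitions(click_sessions, n_products, rank):
    # Empirical transition counts: counts[i, j] = # times a click on product i
    # was immediately followed by a click on product j.
    counts = np.zeros((n_products, n_products))
    for session in click_sessions:
        for i, j in zip(session[:-1], session[1:]):
            counts[i, j] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    P_hat = np.divide(counts, row_sums,
                      out=np.zeros_like(counts), where=row_sums > 0)
    # Project onto a rank-`rank` matrix via truncated SVD to exploit the
    # similarity of click patterns across products.
    U, s, Vt = np.linalg.svd(P_hat, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank, :]

# Toy usage with three products and a handful of logged sessions.
sessions = [[0, 1, 2, 1], [2, 1, 0], [1, 2, 2, 0]]
P_low_rank = estimate_low_rank_transitions(sessions, n_products=3, rank=2)
print(np.round(P_low_rank, 2))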

Bio: Mo Liu is an assistant professor in the Department of Statistics and Operations Research at UNC Chapel Hill. His research interests focus on decision-focused learning, a methodology that designs and trains prediction models to account for decision-making in downstream optimization problems. These downstream problems include real-world applications in revenue management, such as product recommendation, assortment optimization, and inventory management. He received his Ph.D. from the Department of Industrial Engineering and Operations Research at the University of California, Berkeley, in 2024 and received his B.S. degree in Industrial Engineering from Tsinghua University in 2019. 

Title: Contractor Selection in Project Outsourcing via Request-for-Quote

Abstract: Selecting and incentivizing contractors is imperative to the success of project outsourcing. We study the design of request-for-quote (RFQ) procedures for selecting project contractors with unknown cost efficiencies, capturing the fundamental time-cost trade-off in project management. We consider two contract forms (cost-sharing and time-incentive) that are the most widely used in practice, combined with a simple RFQ procedure. We show that while the cost-sharing contract leads to a higher client profit, the time-incentive contract can increase system efficiency if the project urgency is relatively low. Our results uncover the value of incorporating time incentives, cost-sharing, and differentiated cost-auditing in RFQ design and provide justification for simple RFQ procedures.

Bio: Professor Lu’s current research interests include infrastructure network design for supply chains and service systems, supply chain risk management, management of innovative operations, and project management. He applies robust optimization, stochastic optimization, and game theory to study various supply chain and service operations management problems, taking into consideration risks, ambiguity, incentives, and behavioral issues. His research has been funded by research foundations and funding agencies, including the National Science Foundation. His work has appeared in journals such as Management Science, Manufacturing & Service Operations Management, and Production and Operations Management.

Title: Nudging Patients towards Cost-Effective Providers: Analysis of an Insurer's Effort-Based and Cash Reward-Based Mechanisms

Abstract: Misalignments between patients' choices of providers and those of the health insurance company (HIC) can result in significant costs. Misalignments may occur either because patients are unaware of their options or because they do not have an incentive to choose the cost-effective provider. Motivated by emerging mechanisms in the industry, we examine how an insurer can exert effort and/or offer cash rewards to nudge patients towards cost-effective providers. We build an analytical model that captures the salient aspects of an HIC’s decision problem while incorporating how patients choose providers. With this versatile framework, we analyze the HIC's optimal effort and reward, individually and jointly, under different cost-share structures (i.e., copayment and coinsurance). Comparing the HIC's savings with the effort and cash reward-based approaches, we find that when coinsurance is high, the HIC prefers the effort-based approach. Conversely, the cash reward-based approach is better when coinsurance is low and the price difference between the two providers is high. With copayment, the HIC prefers to use a cash reward when the price difference is high; otherwise, it prefers to exert effort. Thus, neither a reward-only nor an effort-only approach uniformly outperforms the other. The two approaches can serve as tactical complements as indicated by the superiority of the joint approach in some cases. This work provides a framework for the HIC to tailor the nudge (effort or reward or both) for different procedures and geographies based on the cost-share structure and the relative magnitude of related costs.

Bio: Mili Mehrotra is an Associate Professor of Business Administration in the Gies College of Business at the University of Illinois Urbana-Champaign. She received her PhD in Operations Management in 2010 from the University of Texas at Dallas. Her research lies in the domain of socially-responsible supply chains and operations management. In particular, she is interested in developing and analyzing incentive schemes, and studying coordination and optimization issues that arise in practice due to the actions taken by a wide variety of stakeholders for achieving broader social objectives. She is also interested in using discrete models for analyzing problems in service operations, production planning, and logistics. Her papers have been accepted for publication in Management Science, Manufacturing and Service Operations Management, Operations Research, and Production & Operations Management. She currently serves as an Associate Editor for Manufacturing and Service Operations Management, Production & Operations Management, and Naval Research Logistics.

Title: Public Reporting and Payment Incentives in Hospital Markets

Abstract: One of the most prominent characteristics of healthcare markets is quality uncertainty. To govern the quality of care, the payer of a healthcare system can either improve quality information through public reporting, an information instrument, to better inform patients' choice of hospitals, or increase payment rates, a financial instrument, to incentivize hospitals to compete on quality. Our paper studies how these instruments interplay and how to coordinate them in hospital markets with quality competition and congestion. We model hospitals as stochastic queueing systems in which patients are sensitive to quality and wait times. Patient choice is modeled in a random utility paradigm, with utility variations arising from imperfect quality information. With congestion, equilibrium choice probabilities are an implicit fixed-point solution. We first characterize the structure of choice probabilities and quality decisions in equilibrium in monopoly, duopoly and oligopoly settings, respectively. We then analyze the effects of public reporting on equilibrium quality and patient welfare. We find that public reporting may not assure better quality and may lower perceived patient welfare. We next characterize the optimal joint public reporting and payment policy. We find that public reporting and payment incentives tend to be substitutes, with the latter dominating the former. The payer should exert public reporting effort to improve quality information, accompanied by a moderate payment rate, only when the service cost is relatively low, initial quality-information accuracy is moderate, and competition is less intense. Otherwise, it is more efficient to leverage payment incentives to govern hospital quality competition.

Bio: Zhan Pang is Lewis B. Cullman Rising Star and Professor of Management at Purdue Business School and Purdue Innovation and Entrepreneurship Fellow. His research interests include statistical learning and decision theory, healthcare systems, supply chain risk management, and pricing and revenue management. He is a senior editor for Production and Operations Management and a founding editor of Journal of Blockchain Research.

Title: Managerial Incentives, Passive Ownership, and Channel Decentralization

Abstract: This study proposes a simple mechanism in which passive ownership in the downstream firm of a vertical relationship fosters economic incentives for downward decentralization. This mechanism works by motivating managerial efforts aimed at enhancing productivity. The enhanced productivity results in higher profits for the firm when it sells its product indirectly compared to selling it directly. Interestingly, this scenario happens when the marginal cost of managerial effort falls within an intermediate range—not too low, yet not too high. This mechanism can also result in improved channel performance and consumer welfare compared to the centralized channel. Finally, this mechanism remains effective in incentivizing downward decentralization even in the presence of competition, albeit by strategically demotivating managerial efforts as a means to soften competition.

Bio: Jiong Sun is currently an Associate Professor at Purdue University’s School of Hospitality and Tourism Management. Jiong conducts research in supply chain management, distribution channels, the operations/marketing interface, and game theory. His research has appeared in Production & Operations Management, Management Science, Journal of Retailing, and Service Science, among others.

Title: Policy Interventions for Combating Lead Pollution in Bangladesh: Model and Analysis of a Circular Supply Chain

Abstract: Millions of electric three-wheelers form the backbone of local transportation in Bangladesh. These vehicles are powered by lead-acid batteries made from imported and recycled lead. While environmentally responsible lead recycling technology is mature and available, most used lead-acid batteries (ULABs) are recycled by thousands of informal smelters, giving rise to large-scale lead emissions that inflict disastrous health and economic damages. Our model-based study identifies policy interventions to address the problem by undercutting the business viability of informal recycling.

We develop a model of a circular supply chain by integrating the material balances of the system with consumer choices, which are characterized by a mixed multinomial logit model. Buyers choose among different types of batteries: long-lasting ones made by the formal sector and short-lived ones made by the informal sector. The utility of each option is specified using information collected from field studies. The circular nature of the system is reflected in the dependence of the ULAB price on the demand and supply of lead scrap, and thus on the sales rates of the different types of new batteries. The ULAB price, in turn, affects the costs of new batteries made by the different sectors, determining the incentives for and competitiveness of environmentally friendly recycling. The status quo in Bangladesh corresponds to a steady state in our model. We also consider future scenarios in which lithium-ion batteries gradually replace lead batteries. For both cases, we discuss policy impacts by examining how model outputs change with input parameters.
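
For reference, a mixed multinomial logit model of the kind used here assigns buyer choice probabilities of the form (notation illustrative):

\[
\mathbb{P}(\text{battery type } k) \;=\; \int \frac{e^{\,u_k(\beta)}}{\sum_{k'} e^{\,u_{k'}(\beta)}}\, \mathrm{d}G(\beta),
\]

where $u_k(\beta)$ is the utility of option $k$ (e.g., a formal-sector long-lasting battery or an informal-sector short-lived one) for a buyer with taste parameters $\beta$, and $G$ is the mixing distribution over buyer heterogeneity informed by the field studies. These choice probabilities feed the material-balance equations that close the circular supply chain model.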

Bio: Dr. Qiong Wang is an associate professor in the Department of Industrial and Enterprise Systems Engineering at the University of Illinois Urbana-Champaign. His research focuses on managerial issues in supply chains and service systems. Topics range from inventory control and revenue management to cooperative games and behavioral queueing. His results are published in major journals of the field. His work not only involves developing novel approaches to solve challenging technical problems, but also touches upon subjects with important societal implications, such as environmental protection and supply chain resilience. Before joining the University of Illinois, Dr. Wang was a Member of Technical Staff at Bell Labs. Besides working on operations management problems, he conducted research on the economics of communication networks to support decision-making on service pricing, capacity planning, and traffic engineering. Dr. Wang received his PhD in Engineering and Public Policy from Carnegie Mellon University and his undergraduate degrees from Tsinghua University.

Title: Recommender Systems under Privacy Protection

Abstract: Consumers make inferences about a product’s relevance, or even gain access to it, through recommendations. That is, recommendations often play both informative and allocative roles. However, the pervasive use of personal data by modern algorithmic recommender systems has sparked public outcry for tighter privacy regulations. Personal preferences over different product offerings are a basic constituent of consumer privacy. We study how a profit-driven online platform designs its recommender policies in response to different privacy protection regimes that grant users varying degrees of control over their personal data. We demonstrate the effective equivalence between opt-out protection and unprotected privacy. As a key finding, consumers’ autonomy over their privacy (to the extent that they could misrepresent their personal data) may compel platforms to distort their recommender policies and lead to unintended consequences. When the recommendation plays only an informative role, this level of privacy protection deters the platform from any personalized recommendation; if the recommendation can, in addition, act allocatively to control consumers’ access to products, algorithmic discrimination may arise, whereby disadvantaged minorities in society are restricted or deprived of access to potentially valuable opportunities. Ultimately, these distortions could inadvertently hurt both platforms and consumers, relative to less stringent privacy protection regimes. Counter-intuitively, enacting the recommendation’s allocative role (by restricting users’ access to certain products) in addition to its informative role can in fact benefit both the platform and its users, especially when users are given autonomy over their privacy.

Bio: Shouqiang Wang is currently an Associate Professor of Operations Management at the Naveen Jindal School of Management, University of Texas at Dallas. His research focuses on strategic operations problems that arise from both business settings as well as public domains, with a particular interest in incentive issues in the presence of asymmetric information and dynamic interactions among decentralized stakeholders in these contexts.

Title: Renewable, Flexible, and Storage Capacities: Friends or Foes?

Abstract: More than 99% of the new power generation capacity to be installed in the United States from 2023 to 2050 will be powered by wind, solar, and natural gas. Additionally, large-scale battery systems are planned to support power systems. It is paramount for policymakers and electric utilities to deepen the understanding of the operational and investment relations among renewable, flexible (natural gas-powered), and storage capacities. In this paper, we optimize both the joint operations and investment mix of these three types of resources, examining whether they act as investment substitutes or complements. Using stochastic control theory, we identify and prove the structure of the optimal storage control policy, from which we determine various pairs of charging and discharging operations. We find that whether storage complements or substitutes other resources hinges on the operational pairs involved and whether executing these pairs is constrained by charging or discharging. Through extensive numerical analysis using data from a Florida utility, government agencies, and industry reports, we demonstrate how storage operations drive the investment relations among renewable, flexible, and storage capacities. Storage and renewables substitute each other in meeting peak demand; storage complements renewables by storing surplus renewable output; renewables complement storage by compressing peak periods, facilitating peak shaving and displacement of flexible capacity. These substitution and complementary effects often coexist, and the dominant effect can alternate as costs change. A thorough understanding of these relations at both operational and investment levels empowers decision makers to optimize energy infrastructure investments and operations, thereby unlocking their full potential.

Bio: Owen Wu is an Associate Professor of Operations and Decision Technologies and Director of Research and Outreach for the Institute for Environmental and Social Sustainability at the Kelley School of Business, Indiana University.  His research focuses on energy sustainability and socially responsible operations, including integrating renewable energy resources, investing and operating energy storage, improving energy efficiency and infrastructure, as well as managing assistance programs in humanitarian operations. His research received the INFORMS ENRE Society’s Best Publication Award in Environment and Sustainability, Paul Kleindorfer Award in Sustainability by POMS, Honorable Mention for the MSOM Responsible Research Award, and a finalist for the M&SOM Journal’s Best Paper.  Prof. Wu is an associate editor and guest department editor for Manufacturing & Service Operations Management, a senior editor for Production and Operations Management, and a guest associate editor for Management Science. He served as Vice President of the MSOM Society from 2021 to 2022. Before joining Indiana University, he was an Assistant Professor at the University of Michigan.  He earned his Ph.D. degree from the University of British Columbia, M.Eng. degree from Hong Kong University of Science and Technology, and B.Eng. degree from Shanghai Jiao Tong University.

Title: Conditional Approval and Value-Based Pricing for New Health Technologies

Abstract: Healthcare payers face the risk of approving a treatment that may not be cost-effective, or of rejecting a treatment that may be cost-effective because the clinical trial data may not be fully informative about economic measures regarding effectiveness, safety, and costs. To mitigate this risk, payers have been implementing conditional approval (CA) schemes which postpone the reimbursement decision until after the collection of post-market-approval data. We provide a quantitative analysis of CA schemes that considers the incentives of both the payer, who makes the reimbursement decision, and the company, who has developed the treatment. We use a cooperative bargaining framework and show that the case of a price-setting firm is a special instance of our model. We explicitly model different prices that are relevant for immediate and conditional approval decisions and consider two types of CA schemes that vary in patients’ level of access to the new treatment during the collection of additional data. We show that interim prices that arise during the schemes' data-collection processes reflect the sharing of data-collection costs and are often higher than immediate-approval prices. For broad-access schemes with high “reversal” costs (associated with the decision after data collection not to reimburse), interim prices may be driven below those at initial submission. We illustrate the potentially negative impact of policies that constrain interim prices and identify a new risk-sharing mechanism to mitigate the adverse consequences of those constraints. We present results about the probability of the treatment and the CA scheme being cost-effective.

Bio: Ozge Yapar is an Assistant Professor in the Department of Operations & Decision Technologies at the Kelley School of Business of Indiana University, and she is a research fellow with the Indiana University Center for the Business of Life Sciences. She completed her Ph.D. in the Operations, Information and Decisions Department at The Wharton School of the University of Pennsylvania. Her research investigates how the integration of strategic incentives into the process of information gathering influences outcomes for the participants involved. Her main research focus is on healthcare operations management, specifically on the process of developing and marketing new medical treatments. She uses tools from applied probability, stochastic processes, game theory, and health economics to study questions that are of interest to healthcare payers, public health regulators, and the companies that develop new medical treatments. She earned her Bachelor of Science degree in Industrial Engineering from Bilkent University in Turkey and was a visiting student in the Industrial Engineering and Operations Research Department at the University of California, Berkeley during the third year of her undergraduate studies.

Title: A Simple Approach to Causal Clustering

Abstract: We consider a network interference problem in which an experimenter conducts experiments over a single connected network. We focus on the setting where the network is known and the exposure mapping, which describes how one unit's treatment assignment impacts another's potential outcomes, is well-specified. In this setting, causal clustering refers to conducting clustered experiments to achieve accurate estimation and inference. This paper proposes a simple robust optimization approach to causal clustering, which also synthesizes several existing results in the literature.

Bio: Jinglong Zhao is an Assistant Professor of Operations and Technology Management at Questrom School of Business at Boston University. He works at the interface between optimization and econometrics. His research leverages discrete optimization techniques to design field experiments with applications in online platforms. Jinglong completed his PhD in Social and Engineering Systems and Statistics at Massachusetts Institute of Technology.

Title: A Model of Shoppertainment Live Streaming

Abstract: “Shoppertainment” live streaming, a blend of entertainment and product selling, is becoming increasingly popular. In shoppertainment sessions, streamers face bandwidth constraints and must therefore balance providing product information with entertainment (such as singing, dancing, and storytelling). We conceptualize a shoppertainment live streamer as a novel type of online retail platform that maximizes total commissions from selling a range of manufacturer products. Our study investigates how streamers can leverage entertainment bandwidth as a long-term strategy to influence manufacturers' pricing decisions and consumers' attendance as well as purchasing behaviors. Our model reveals several intriguing findings. First, we find that higher entertainment bandwidth encourages more manufacturers to adopt a demand-oriented, low-pricing strategy to attract buyers, rather than a margin-oriented, high-pricing strategy that appeals only to buyers who perceive a high likelihood of finding a suitable product. Second, we observe that an increase in entertainment bandwidth initially boosts consumer traffic to shoppertainment sessions, but this trend may reverse once the bandwidth becomes excessively high, indicating an inverted-U-shaped relationship. Conversely, the streamer's expected profit per attending consumer first decreases and then increases with higher entertainment bandwidth, forming a U-shaped relationship. Third, our results suggest that a commission-maximizing streamer facing bandwidth constraints should not exclusively focus on providing product information. Instead, it is advantageous to allocate significant bandwidth to entertainment, even with limited entertaining capability, and potentially beneficial to dedicate all bandwidth to entertainment if the capability is substantial. Lastly, we demonstrate that the streamer's quality threshold moderates the impact of entertainment bandwidth on these dynamics.

Bio: Xuying Zhao conducts research on supply chain management and the interface between operations management and marketing, especially for the platform economy, video game, and retail industries. In recent papers, she has investigated theoretical models of video game design and pricing, social media content length and variety control, advance selling strategy, and inventory management with machine learning. Xuying has published many papers in journals such as Management Science, MSOM, POM, Decision Science, and IEEE. She won the 2009 eBusiness Best Paper Award from INFORMS. Xuying is an editorial review board member and a senior editor for POM. She has been a track chair or cluster chair of the interface between OM and Marketing for numerous INFORMS and POMS annual conferences. After earning a BA in Computer Science from Zhejiang University in China, Xuying worked for Microsoft. She subsequently earned an M.S. and a Ph.D. in Management Science from the University of Texas at Dallas. Before joining Texas A&M University, Xuying worked at the University of Notre Dame. She has designed and taught many MBA courses, including Process Analytics, Supply Chain Analytics, Digital Supply Chain Innovations, and International Operations.

Title: Designing Specialist-Response Policies in Hospital Emergency Departments

Abstract: This study focuses on designing an efficient systematic specialist-response strategy for various types of specialists and hospitals to reduce patient wait times for specialist consultations (SC) and, consequently, decrease emergency department (ED) length of stay (LOS). We begin with an empirical study to validate our hypothesis that SC is a bottleneck in ED patient flow. Initially, we examine fixed-time policies where specialists visit the ED on a fixed schedule. Motivated by a dataset of over one million ED visits, we model SC requests using a queueing model characterized by time-varying cyclical Poisson arrivals and a single server available at specified times each day. We employ the Martingale Representation Theorem to analytically determine the optimal timing for specialists to minimize patients’ average wait time. Subsequently, we compare the fixed-time policy with alternative strategies and propose guidelines for selecting an optimal response policy based on different specialist types and hospital settings. Finally, we validate our analytical results through comprehensive simulation models using data from two EDs. Our proposed strategy effectively minimizes patients’ SC wait times without increasing the frequency of specialist visits to EDs. The simulation study demonstrates that our strategy can reduce SC delays by approximately 25-42% and decrease ED LOS by over 10%, without requiring additional specialist visits. These findings underscore the potential for significant improvements in ED efficiency and patient outcomes through optimized specialist-response strategies.

Bio: Dr. Emily Zhu Fainman is an Assistant Professor of Analytics at the McCoy College of Business, Texas State University. Her research addresses critical issues in transportation, healthcare, public health, and service operations systems, focusing on enhancing efficiency and effectiveness through advanced interdisciplinary methodologies. Her expertise spans mathematics, analytics, statistics, operations management, and economics, contributing to strategic, operational, and technical policymaking. Dr. Fainman actively collaborates with industrial partners, offering strategic recommendations and practical tools for urban mobility and sustainability. She works closely with physicians, health administrators, and researchers on projects related to maternity care, oncology, emergency departments, electric vehicles, and shared transport systems. Her work has significantly advanced computational methodologies for problem-solving, attracting global attention from policymakers, city planners, transportation authorities, and researchers. Dr. Fainman received her Ph.D. in Operations Management from McGill University, an M.Sc. in Mathematical and Computational Finance from the University of Oxford, and a B.Sc. in Mathematics from Nanjing University.

Post-doc & Student Speakers

Title: Markovian Search with Socially Aware Constraints

Abstract: We investigate constrained sequential search problems where multiple candidates from diverse societal groups are selected. Constraints aim for outcomes like demographic parity, diversity quotas, or support for disadvantaged groups within budget limits. Starting with the Pandora's box model, under a single affine constraint on candidate selection or inspection probabilities, we find optimal policies that maintain the structure of the unconstrained model. These policies may randomize between dual-based adjustments, ensuring computational ease and economic interpretability. Expanding to more complex search scenarios, such as multistage search with the possibility that an offer is rejected, modeled by joint Markov scheduling (JMS), we introduce algorithms for near-feasible and near-optimal policies under multiple general affine and convex ex-ante constraints. These algorithms randomize over a polynomial number of index-based policies, adapting Gittins indices to constrained JMS scenarios. Our approach leverages a key observation: a relaxation of the Lagrange dual function allows index-based policies akin to unconstrained ones. Through numerical studies, we analyze the implications and price of imposing various constraints, evaluating their effectiveness in achieving intended societal outcomes.
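
For intuition, here is a minimal sketch of the unconstrained Pandora's box (Weitzman) index policy that the constrained policies above build on; the Uniform(0, 1) rewards and the inspection costs are illustrative assumptions, not part of the paper.

    # Illustrative sketch (not the authors' constrained algorithm): Weitzman's
    # index policy for the unconstrained Pandora's box problem.
    import random

    def reservation_value(cost):
        # For X ~ Uniform(0, 1): E[(X - s)^+] = (1 - s)^2 / 2 = cost  =>  s = 1 - sqrt(2*cost)
        return 1.0 - (2.0 * cost) ** 0.5

    def pandora_search(costs, rng=random.Random(0)):
        # Open boxes in decreasing reservation-value order; stop when the best
        # observed reward exceeds every remaining reservation value.
        order = sorted(range(len(costs)), key=lambda i: -reservation_value(costs[i]))
        best, payoff = 0.0, 0.0
        for i in order:
            if best >= reservation_value(costs[i]):   # stopping rule
                break
            payoff -= costs[i]                        # pay the inspection cost
            best = max(best, rng.random())            # observe a Uniform(0, 1) reward
        return payoff + best

    print(pandora_search([0.05, 0.1, 0.2]))

As described in the abstract, the constrained policies randomize over such index-based rules after dual-based adjustments of the indices.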

Title: Assortment Optimization of the GoodRx Model

Abstract: We consider an assortment problem motivated by the popular application GoodRx (GDRX), a coupon vendor that presents Pharmaceutical Benefit Managers' (PBMs') discounted, affordable drug prices directly to the general population for profit. In this price presentation problem, the decision maker must select at most one price (coupon) from a predefined choice set dedicated to each customer purchase option. This feature translates into a special type of spatial constraint in a basic attraction model assortment optimization framework, which typically becomes NP-hard. Such problems are common among today's digital marketers (GDRX, Honey, RetailMeNot, CapitalOneShopping, Goodshop, etc.), who consolidate purchase options for customers and provide discounted pricing so as to gain commission through affiliated sales. We propose an innovative solution approach that finds the optimal assortment at an exponential rate.
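
A hedged sketch of the objective behind such a price-presentation problem, assuming a basic attraction (MNL-style) model with hypothetical attraction weights and commissions; this is only the expected-commission computation being optimized over, not the paper's algorithm.

    # Expected commission under a basic attraction model, with (hypothetically)
    # one coupon chosen per customer purchase option.
    def expected_commission(chosen):
        # chosen: list of (attraction_weight, commission) pairs, one per purchase
        # option; the no-purchase option has weight 1.
        total = 1.0 + sum(w for w, _ in chosen)
        return sum(w * r for w, r in chosen) / total

    # Example: two purchase options, each assigned one coupon from its choice set.
    print(expected_commission([(0.8, 3.0), (1.5, 2.0)]))

The spatial constraint in the abstract restricts which (weight, commission) pair may be picked for each option, which is what makes the optimization hard.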

Title: The Driver-Aide Problem in Urban Areas

Abstract: The dramatic growth in package delivery volume in recent years has presented significant challenges for last-mile delivery, especially in urban areas. We consider operations supported by the driver-aide problem, where an aide can assist the driver. In the so-called “helper” mode, the aide can be dropped off by the truck at a stop and serve several stops independently, while the driver serves other stops and picks up the aide later. This is particularly feasible (and desired) in urban areas and generalizes the previous “helper” mode, where the aide cannot move between different stops independently. The aide can also be used in the so-called “jumper” mode to work with the driver and expedite the delivery at a stop. We must determine both the delivery route and the most effective way to use the aide to minimize the total delivery time. We model this problem as an integer program with an exponential number of constraints and variables, and propose a branch-cut-and-price approach with several algorithmic enhancements. We first demonstrate the necessity of the enhancements and show that we can find high-quality solutions in a reasonable amount of time. We then conduct a case study on instances based on real-world data from New York City to explore the impact of the ride movement on last-mile delivery in urban areas.

Title: Transparent or Not? Optimal Performance Feedback in Gamified Services

Abstract: Gamified services (e.g., fitness) often provide user evaluations upon service completion. Such performance feedback, sometimes presented together with a goal and/or other users' scores, shapes users' perception of individual performance (through prospect theory) and relative status (through social comparison). How transparent should service providers be in their disclosure of individual performance feedback to enhance users' utility? In this paper, we employ a Bayesian persuasion framework to determine the optimal information disclosure policy. We find that when a goal is specified but other users' scores are not communicated, an upper censorship policy is optimal, i.e., revealing the exact scores to the low-performing users and only telling the high-performing users that they lie in the top range. When no goal is specified but other users' scores are communicated, full (resp. no) information is optimal when users are ahead-seeking (resp. behind-averse). When a goal is specified and other users' scores are communicated, the optimal information policy is hybrid. Our paper demonstrates how service providers can enhance user utility, and thus increase the value of their service, by engineering the design of their relative performance feedback.
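
A minimal sketch of the upper-censorship rule described above, with a purely hypothetical cutoff; the paper derives the optimal cutoff from the Bayesian persuasion model rather than fixing it.

    # Upper censorship: reveal exact scores below the cutoff, pool scores above it.
    def feedback(score, cutoff=80):   # cutoff is an assumed illustrative value
        return f"Your score is {score}." if score < cutoff else "You are in the top range."

    for s in (55, 79, 91):
        print(feedback(s))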

Title: Optimizing Health Supply Chains with Decision-Aware Learning

Abstract: We study the problem of allocating a limited supply of medical resources in developing countries, in particular, Sierra Leone. We address this problem by combining machine learning (to predict demand) with optimization (to optimize allocations). A key challenge is the need to align the loss function used to train the machine learning model with the decision loss associated with the downstream optimization problem. Traditional solutions have limited flexibility in the model architecture and scale poorly to large datasets. We propose a decision-aware learning algorithm that uses a novel Taylor expansion of the optimal decision loss to derive the machine learning loss. Importantly, our approach only requires a simple re-weighting of the training data, ensuring it is both flexible and scalable, e.g., we incorporate it into a random forest trained using a multitask learning framework. In collaboration with the Sierra Leone government, we deployed our framework in a staggered rollout across all 1,123 government healthcare facilities nationwide. We use synthetic difference-in-differences to evaluate the impact of our tool, finding a 23% increase in overall consumption of essential medicines, thereby significantly improving real-world patient access to care.
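
Since the approach reduces to re-weighting the training data, a minimal sketch looks like the following; the synthetic data and the weight formula are assumptions for illustration (the paper derives its weights from a Taylor expansion of the decision loss).

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 5))                            # facility/item features
    y = np.exp(X[:, 0]) + rng.normal(scale=0.1, size=500)    # synthetic demand

    # Hypothetical decision-aware weights: observations whose prediction errors
    # would distort the downstream allocation more get larger weights.
    weights = 1.0 + y / y.mean()

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y, sample_weight=weights)                   # plain re-weighted training
    print(model.predict(X[:3]))

Because the change is confined to sample weights, any off-the-shelf learner that accepts them can be used, which is what makes the method flexible and scalable.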

Title: Research directions in alternative fuel vehicle transitions for commercial fleets

Abstract: This study examines the challenges and opportunities of using alternative fuel vehicles (AFVs) in commercial fleets, which is important for sustainable operations. We focus on how these vehicles can help meet the Paris Agreement goals due to their lower emissions compared to traditional vehicles. Our review shows that there is still a substantial gap between commercial and personal adoption of electric and hybrid vehicles. We also look at how market conditions, financial uncertainties, and government policies affect this shift. Our findings indicate that commercial fleets are slow to adopt these technologies, suggesting a significant opportunity to improve both environmental and operational efficiency. Moving forward, we suggest more research that connects these areas and uses new modeling techniques to keep up with technological and economic changes.

Title: Online Matching with Cancellation Costs

Abstract: We study the online resource allocation problem with overbooking and cancellation costs, also known as the buyback setting. To model this problem, we consider a variation of the classic edge-weighted online matching problem in which the decision maker can reclaim any fraction of any offline resource that is pre-allocated to an earlier online vertex; however, by doing so the decision maker not only loses the previously allocated edge-weight but also has to pay a non-negative constant factor f of this edge-weight as an extra penalty. Parameterizing the problem by the buyback factor f, our main result is obtaining optimal competitive algorithms for all possible values of f through a novel primal-dual family of algorithms. We establish the optimality of our results by proving separate lower bounds for the small and large buyback-factor regimes and showing how our primal-dual algorithm exactly matches these lower bounds by appropriately tuning a parameter as a function of f. We further study the lower and upper bounds on the competitive ratio in variants of this model, such as matching with deterministic integral allocations or a single resource with different demand sizes.
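
To make the trade-off concrete, here is a toy sketch of the basic buyback economics for a single unit-capacity resource; the simple "reassign if the gain covers the penalty" rule is only an illustration, not the paper's optimal primal-dual algorithm.

    # Reassign a resource to a new online vertex only if the new edge-weight
    # exceeds (1 + f) times the currently allocated edge-weight.
    def allocate(current_weight, new_weight, f):
        # Returns (weight kept after the decision, change in net profit).
        gain = new_weight - (1.0 + f) * current_weight
        if gain > 0:
            return new_weight, gain
        return current_weight, 0.0

    print(allocate(current_weight=5.0, new_weight=9.0, f=0.5))   # reassign: net gain 1.5
    print(allocate(current_weight=5.0, new_weight=7.0, f=0.5))   # keep: penalty outweighs the gain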

Title: Optimizing Sponsored Humanitarian Parole 

Abstract: The humanitarian sponsorship pathway has recently seen greater use by the US government to respond to the urgent needs of displaced peoples. We discuss two related research projects that address the operational needs of nonprofits involved in humanitarian sponsorship programs. First, we developed RUTH (Refugees Uniting Through HIAS) as a novel algorithmic matching system deployed at HIAS—a refugee resettlement agency involved in the parole process—that matches Ukrainian parolees with US community-formed sponsors based on the preferences of refugees by adapting the Multiple Waitlist Procedure of Thakral (2016). Second, in collaboration with another nonprofit organization supporting refugees in humanitarian parole, we used multi-criteria performance analysis to evaluate the performance of community-formed sponsors known as Neighborhood Support Teams (NSTs). Our data-driven approaches also have application to other recent humanitarian parole schemes such as the Welcome Corps and processes for Cubans, Haitians, Nicaraguans, and Venezuelans. Together, our research demonstrates how technology and data-driven optimization methods can aid decision-making in parole efforts, drawing insights from practical cases.

Title: Leveraging Assortment Similarities for Data-driven Choice Predictions

Abstract: Choice models serve as fundamental tools in revenue management, primarily employed for predicting customer demand and optimizing operational decisions like pricing and assortment. Existing literature has leveraged a wide array of choice models, ranging from parametric choice models like the MNL, nested logit, etc., to nonparametric models like the rank-based model. Parametric choice models rely on specific distributional assumptions for the customer choice process, which facilitates both efficient estimation from sales transaction data as well as (near-) optimal operational decisions. However, imposing parametric forms may render the model susceptible to misspecification issues, potentially impacting its prediction accuracy. Nonparametric choice models, on the other hand, do not impose any restrictions on the customer choice behavior, allowing them to capture more sophisticated substitution patterns and, therefore, generate more accurate demand predictions. On the downside, estimating such models is much harder and typically requires large amounts of sales transaction data. Moreover, solving operational decisions like assortment optimization is typically intractable, necessitating approximate algorithms and heuristics. As a result, we are faced with a trade-off when picking a choice model in practice. In this work, we aim to break this trade-off by proposing simple data-driven approaches for demand prediction that do not impose any assumptions on the underlying choice behavior of customers. Our approach is model-free; that is, it requires no estimation of model parameters and relies solely on assessing the similarity between assortments to generate predictions.
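
One simple instantiation of the similarity idea (our illustration, with a hypothetical Jaccard similarity and toy history; the paper's approach may weight assortments differently):

    # Predict the choice probability of an item in a new assortment as a
    # similarity-weighted average of its observed choice frequencies in
    # historical assortments that contain the item.
    def jaccard(a, b):
        a, b = set(a), set(b)
        return len(a & b) / len(a | b)

    def predict(item, assortment, history):
        # history: list of (past_assortment, dict of observed choice frequencies)
        num = den = 0.0
        for past_assortment, freqs in history:
            if item in past_assortment:
                w = jaccard(assortment, past_assortment)
                num += w * freqs.get(item, 0.0)
                den += w
        return num / den if den > 0 else 0.0

    history = [({'a', 'b'}, {'a': 0.5, 'b': 0.3}),
               ({'a', 'c'}, {'a': 0.4, 'c': 0.4})]
    print(predict('a', {'a', 'b', 'c'}, history))   # 0.45

No parameters are estimated; the only modeling choice is the similarity measure between assortments.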

Title: Last-iterate Convergence in No-regret Learning: Games with Reference Effects Under Logit Demand

Abstract: This work is dedicated to algorithm design for an oligopoly price competition, with the primary goal of examining the long-run market behavior. We consider a realistic setting where n firms engage in a multi-period price competition within a partial information setting under reference effects. Consumers' choices follow the multinomial logit choice model. We use the stationary Nash equilibrium (SNE), defined as the fixed point of the equilibrium pricing policy, to simultaneously capture the long-run equilibrium and stability. With loss-neutral reference effects, we propose the online projected gradient ascent (OPGA) algorithm, where each firm adjusts the price using the first-order derivatives of its log-revenues, accessible through the market feedback mechanism. Despite the absence of typical properties required for the convergence of online games, we demonstrate that under diminishing step-sizes, the price and reference price paths generated by OPGA attain last-iterate convergence to the unique SNE. Moreover, with appropriate step-sizes, we prove a convergence rate of O(1/t^2) and a constant dynamic regret. When loss-averse reference effects are introduced, we propose the conservative-OPGA (C-OPGA) algorithm to handle the non-smooth revenue functions and demonstrate that the price and reference price achieve last-iterate convergence to the set of SNEs at a rate of O(1/t^{1/2}).
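
A single-firm toy sketch of an OPGA-style update under loss-neutral MNL demand with a reference price; the demand parameters, finite-difference gradient, and reference-price dynamics here are assumptions for illustration only.

    import math

    def mnl_demand(p, r, a=2.0, b=1.0, c=0.5):
        u = a - b * p + c * (r - p)              # loss-neutral reference effect
        return math.exp(u) / (1.0 + math.exp(u))

    def log_revenue(p, r):
        return math.log(p * mnl_demand(p, r))

    p, r = 1.0, 1.0
    for t in range(1, 2001):
        eta, h = 0.5 / t, 1e-5                   # diminishing step-size
        grad = (log_revenue(p + h, r) - log_revenue(p - h, r)) / (2 * h)
        p = min(max(p + eta * grad, 0.1), 5.0)   # projected gradient ascent step
        r = 0.8 * r + 0.2 * p                    # memory-based reference-price update
    print(p, r)

In the n-firm game each firm runs such an update simultaneously using only its own market feedback, which is what the last-iterate convergence result concerns.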

Title: Emergency department wait time forecasts: Outdoing complex machine learning with parametric models

 Abstract: Machine learning (ML) and artificial intelligence are increasingly utilized in operations research and management. Despite their potential, current ML applications in operations face significant limitations. They often provide only point estimates, overlooking the need for entire distribution estimates, which are crucial for problems like demand forecasting and service time estimation. Furthermore, existing ML algorithms typically ignore the rich body of knowledge in operations research, such as specific parametric distributions identified by the literature.

This paper introduces a novel methodology to address these issues. The proposed methodology leverages gradient boosting to flexibly estimate the parametric distributions, incorporating domain-specific knowledge from operations research. Using data from an emergency department, we demonstrate that the proposed method outperforms off-the-shelf ML benchmarks. Our findings show that the parametric knowledge improves distributional prediction accuracy by 6.2% for service times and 12% for waiting times, translating to a 10% increase in patient satisfaction and a 4% reduction in mortality for cardiac arrest patients. This work underscores the importance of integrating ML with operations research knowledge to enhance distributional estimation in operational contexts.
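
One simple way to combine boosting with a parametric service-time distribution is sketched below; the lognormal family, synthetic data, and constant-variance assumption are ours for illustration and are not the paper's estimator.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(1)
    X = rng.normal(size=(1000, 4))                               # patient/visit features
    log_service = 1.0 + 0.5 * X[:, 0] + rng.normal(scale=0.3, size=1000)

    # Boost the conditional mean of log service time; estimate a constant sigma
    # from residuals, yielding a full conditional lognormal distribution.
    gbm = GradientBoostingRegressor(random_state=0).fit(X, log_service)
    sigma = np.std(log_service - gbm.predict(X))

    mu = gbm.predict(X[:1])[0]
    print("median service time:", np.exp(mu))
    print("90th percentile:", np.exp(mu + 1.2816 * sigma))

The payoff of the parametric layer is that the model returns an entire waiting- or service-time distribution (quantiles, tail probabilities), not just a point forecast.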

Title: "Uber" Your Cooking: The Sharing-Economy Operations of a Ghost-Kitchen Platform

Abstract: "Ghost kitchens" are emerging as an innovative alternative to traditional restaurants. A ghost kitchen has no storefront or dining area: It only accepts online orders. Due to limited capacity, each ghost kitchen offers only a limited number of dishes. We study a ghost kitchen platform that works as an intermediary between home chefs and customers. It allows customers to shop for meals from multiple kitchens in a single order; we call it Multi-Dash. While ghost kitchens unlock the great potential of delicious homemade food for customers and earnings for home chefs, the operational challenges are immense: The multi-dash setting leads to longer waiting times, which affects customer adoption rate. Additionally, the setting increases routing costs, imposing a negative effect on cost efficiency. We address these challenges by building a business model that integrates the adoption rate, waiting cost, and delivery cost. Our findings demonstrate that ghost kitchen platforms can be more profitable than traditional food delivery platforms because of their multi-dash capability, reduced fixed costs, increased productivity, and chef specialization. On the other hand, the optimal service radius of ghost kitchens is smaller than the service radius of traditional food-delivery restaurants because of the additional routing costs of multi-dash.

Title: Beyond Basic Reusability: Joint inventory and online assortment optimization with a network of evolving resources

Abstract: In this study, we consider joint inventory and assortment optimization in which the decision-makers set the initial capacity of each resource before a finite selling season starts. Then, across the selling horizon, the decision-makers offer personalized assortments in real time based on customer arrivals and remaining inventory to maximize the total expected revenue. Unlike prior work on joint inventory selection and online resource allocation that considers only perishable resources, our framework extends to reusable resources. Beyond the classic definition of a reusable resource, we introduce the new concept of network reusability by allowing for the transformation of resources upon return. This advanced model captures more realistic scenarios, such as products returning to inventory in a different condition or rentals returning to different locations, thereby broadening the applicability of our framework to diverse domains like product returns, fashion rentals, and car-sharing services. The modeling richness introduces new technical hurdles when it comes to solving our joint inventory and online assortment problem. Under the MNL choice model, we propose an inventory refinement procedure that achieves a constant-factor approximation guarantee. Our computational experiments show that our approximation framework performs well under different application scenarios.

Title: How do Gender and Ethnic Diversity Impact Consumer Returns?

Abstract: Using a large-scale transaction-level dataset from a leading company in the fashion luxury industry, we empirically investigate the effects of both gender and ethnic diversity on consumer returns. This paper examines the intersection of two well-known phenomena in consumer behavior and retail operations. In consumer behavior, there is ample empirical evidence that behaviors vary across genders and ethnicities. In retail operations, the problem of consumer returns has become increasingly prevalent and costly. Despite the well-established nature of both of these phenomena, the academic literature lacks insights into how a consumer's gender and ethnicity might drive their return behaviors. We address this research need by estimating individual genders and ethnicities for each consumer in the dataset, while also quantifying neighborhood ethnicities at the census block group level. This allows us to examine return behaviors by gender, as well as for in-group and out-group ethnicities. After using logistic regression models to analyze approximately 1.6 million transactions, our results show that females are more likely to return products than males, and that individuals matching the dominant ethnicity in ethnicity-homogeneous communities are less likely to return products than those in ethnicity-diverse communities. These findings have important managerial implications for online retailers seeking to reduce product return rates from two perspectives: gender and ethnicity.

Title: Robotic Mobile Fulfillment Systems: Frameworks for Performance Analysis

 Abstract: Motivated by the growing demand for efficient order fulfillment in e-commerce, our study investigates three interconnected problems arising in managing robotic mobile fulfillment (RMF) systems: (i) allocation of SKU inventories into pods, (ii) pod selection for picking operations, and (iii) pod scheduling for picking operations. We deploy two distinct methodological approaches, modular and integrated, to explore how the level of integration in addressing the three problems influences overall system performance. System performance is defined by two metrics: (i) the total completion time (TCT) to fulfill all orders assigned to a specific interval and (ii) the number of required robots (NRR) to support seamless picking operations. By investigating pod scheduling as a standalone problem, we show it is NP-hard with two pickers [resp., one picker] under the TCT minimization [resp., NRR minimization] objective. Our findings suggest that the modular approach achieves an overall performance level comparable to the integrated one. In particular, we show that managers could divide and conquer the three problems by considering intuitive objective functions for each rather than having to rely on complex models that simultaneously address all aspects of RMF system operations.

Title: Familiarity-Based Dynamic Pricing with Hierarchical Bayes Estimation of Consumer Choice

Abstract: Past consumption affects customers’ familiarity with a product and influences their preference for future consumption of the product. This effect is heterogeneous among individuals. More generally, individuals’ preferences may be nonmonotonic with respect to familiarity and exhibit various patterns. Meanwhile, the prevalence of customer management programs has made consumer transactional data readily accessible. This creates opportunities for the firm to tailor pricing decisions to each customer’s specific familiarity-based utility pattern. A major hurdle, however, is the complexity of deciphering the diverse familiarity-based utility patterns embedded in the transactional data and integrating this information with the price optimization model. In this paper, we present a value-passing dynamic pricing strategy that is optimal under a familiarity-based multinomial logit (MNL) choice model. In this context, we develop several flexible variants of the familiarity-based pricing model that dramatically reduce the complexity of integrating diverse utility patterns into optimal pricing decisions. Employing a hierarchical Bayes estimation approach, we demonstrate how this data-driven dynamic pricing strategy improves a firm’s bottom line.

Title: Dynamic Resource Allocation: Algorithmic Design Principles and Spectrum of Achievable Performances

Abstract: In this work, we consider a broad class of dynamic resource allocation problems and study the performance of practical algorithms. In particular, we focus on the interplay between the distribution of request types and achievable performance, given the broad set of configurations that can be encountered in practical settings. While prior literature studied either a small number of request types or a continuum of types with no gaps, our work allows for a large class of type distributions. Initially using the prototypical multi-secretary problem to explore fundamental performance limits as a function of type-distribution properties, we develop a new algorithmic property, “conservativeness with respect to gaps,” that guarantees near-optimal performance. In turn, we introduce a practically motivated, simulation-based algorithm called RAMS, and establish its near-optimal performance, not only for multi-secretary problems, but also for general dynamic resource allocation problems.
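
For context, the sketch below shows a standard certainty-equivalent threshold policy for the multi-secretary problem mentioned above (not RAMS): accept an arrival only if its value clears the empirical quantile implied by the remaining capacity-to-time ratio. The uniform values are an assumption for illustration.

    import random

    def run(T=1000, capacity=200, rng=random.Random(0)):
        values = sorted(rng.random() for _ in range(10000))   # empirical type distribution
        accepted, reward = 0, 0.0
        for t in range(T):
            remaining_ratio = (capacity - accepted) / (T - t)
            q = 1.0 - min(remaining_ratio, 1.0)               # target acceptance quantile
            threshold = values[int(q * (len(values) - 1))]
            v = rng.random()                                  # arriving candidate's value
            if accepted < capacity and v >= threshold:
                accepted, reward = accepted + 1, reward + v
        return accepted, reward

    print(run())

Policies of this kind can be brittle when the type distribution has gaps near the critical quantile, which is the failure mode the "conservativeness with respect to gaps" property is designed to guard against.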

Title: Who Leads the Way? Buyer vs. Supplier Initiatives in Supply Chain Carbon Footprint Reduction 

Abstract: We study two supply chain models, (i) Buyer-Led Supplier Development and (ii) Supplier-Led Supplier Development, each consisting of a buyer and a supplier. We analyze which player should initiate the carbon footprint reduction efforts in the supply chain under a scope 3 emission tax by using different levers. The supplier, as the producer, may invest to reduce the product carbon footprint. The buyer, as the downstream partner, may incentivize the supplier through supplier development initiatives. In the buyer-led model, the buyer offers a wholesale price premium rate contingent on improvement, and the supplier decides her carbon footprint reduction efforts accordingly. In the supplier-led model, the supplier offers to share the carbon footprint reduction investment cost, and the buyer decides the improvement level. We compare these two models and gain insights into their effectiveness under varying market conditions. Our results show that carbon footprint reduction is guaranteed if the supplier takes the lead; however, a wider range of improvement opportunities arises when the buyer initiates the reduction effort.

Title: Failures of Health Equity: An Examination of Bias in Healthcare Treatment

Abstract: Failures of health equity are closely intertwined with biases in healthcare treatment. We study how underrepresentation bias in clinical trials affects health outcomes and equity. We propose behavioral models to analyze physicians' treatment decisions. In our model, as supported in the literature, physicians' treatment decisions are influenced by two distinct cognitive systems: the intuitive and the rational. We model the intuitive system as a Markov decision process where decisions are influenced by the treatment outcome of the previous patient using a "win-stay/lose-shift" heuristic. This illustrates a propensity among physicians to modify their treatment choices following negative outcomes. We model the rational system as a multi-armed bandit problem in which physicians select treatments based on their subjective probabilities of treatment efficacy, subsequently updating these evaluations based on treatment outcomes. Moreover, physicians can alternate between the two cognitive systems. To model this behavior, we construct a hybrid cognitive model employing an ε_n-greedy strategy, which increasingly favors rational decision-making as patient outcome data accumulates. We first investigate how the probability of choosing each treatment converges under three cognitive models (intuitive only, rational only, and hybrid of both) for the case where there is no clinical trial bias. Then we compare how the treatment choice probabilities change in the presence of underrepresentation bias in the clinical trial step. Our findings indicate that, when no bias is present, both intuitive and rational systems maintain a nonzero probability of choosing the less effective treatment. When underrepresentation bias in the clinical trials is introduced, we show that the incomplete learning (i.e., the choice of the less effective treatment) resulting from the Gittins index policy in the rational system might be exacerbated. However, our key takeaway is that the hybrid cognitive model overcomes this bias, so that the physician eventually chooses the more effective treatment.
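
A toy simulation in the spirit of the hybrid model above; the efficacy values, the Beta-Bernoulli posterior-mean rule (used in place of a Gittins index), and the ε_n schedule are all illustrative assumptions.

    import random
    rng = random.Random(0)

    true_eff = [0.6, 0.75]                 # assumed treatment efficacies
    succ, fail = [1, 1], [1, 1]            # Beta(1, 1) priors for the rational system
    last_choice, last_outcome = 0, True

    for n in range(1, 501):
        eps = 1.0 / n                      # epsilon_n: intuition fades as outcome data accumulates
        if rng.random() < eps:             # intuitive system: win-stay / lose-shift
            choice = last_choice if last_outcome else 1 - last_choice
        else:                              # rational system: pick the higher posterior mean
            means = [succ[i] / (succ[i] + fail[i]) for i in range(2)]
            choice = max(range(2), key=lambda i: means[i])
        outcome = rng.random() < true_eff[choice]
        (succ if outcome else fail)[choice] += 1
        last_choice, last_outcome = choice, outcome

    print("posterior means:", [succ[i] / (succ[i] + fail[i]) for i in range(2)])

Underrepresentation bias can be introduced by distorting one treatment's prior, which is the comparison the abstract describes.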

Title: LEGO: Optimal Online Learning under Sequential Price Competition

Abstract: We consider price competition among multiple sellers over a selling horizon of T periods. In each period, sellers simultaneously offer their prices and subsequently observe their respective demand that is unobservable to competitors. The realized demand of each seller depends on the prices of all sellers following a private unknown linear model. We propose a least-squares estimation then gradient optimization (LEGO) policy, which does not require sellers to communicate demand information or coordinate price experiments throughout the selling horizon. We show that our policy, when employed by all sellers, leads at a fast rate O(1/√T) to the Nash equilibrium prices that sellers would reach if they were fully informed. Meanwhile, each seller achieves an optimal order-of-√T regret relative to a dynamic benchmark policy. Our analysis further shows that the unknown individual price sensitivity contributes to the major difficulty of dynamic pricing in sequential competition and forces regret to the order of √T in the worst case. If each seller knows their individual price sensitivity coefficient, then a gradient optimization policy can achieve an optimal order-of-1/T convergence rate to Nash equilibrium as well as an optimal order-of-log T regret.
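
A single-seller toy sketch of the estimate-then-optimize idea behind a LEGO-style policy (the linear demand parameters, noise, perturbation, and step-sizes are assumptions; the paper's policy is run by every seller simultaneously):

    import numpy as np
    rng = np.random.default_rng(2)

    alpha, beta = 10.0, 2.0                       # unknown true demand: d = alpha - beta * p
    prices, demands = [], []
    p = 3.0
    for t in range(1, 501):
        p_obs = p + rng.normal(scale=0.1)         # small price perturbation for identification
        d = alpha - beta * p_obs + rng.normal(scale=0.5)
        prices.append(p_obs); demands.append(d)
        A = np.column_stack([np.ones(len(prices)), prices])
        a_hat, b_hat = np.linalg.lstsq(A, np.array(demands), rcond=None)[0]   # least squares
        grad = a_hat + 2 * b_hat * p              # d/dp of estimated revenue p * (a_hat + b_hat * p)
        p = float(np.clip(p + (1.0 / t) * grad, 0.5, 6.0))                    # gradient step

    print("final price:", round(p, 3), "revenue-optimal price:", alpha / (2 * beta))

In the competitive setting, each seller runs its own loop of this kind using only its private demand observations, which is why no communication or coordination is needed.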

Title: Empowering or Exploiting? The Implications of Direct Market Access for Improving Smallholder Farmers' Welfare

Abstract: Poor market access is a major hurdle to poverty reduction for smallholder farmers. Traditional strategies that connect farmers with wholesale intermediaries are criticized for exposing farmers to exploitation. While it is believed that this issue can be resolved by direct market access, under which farmers sell directly to consumers, practice indicates that farmers often need to work with service intermediaries to facilitate such direct sales and continue to suffer from exploitation. Hence, it is unclear whether and under what circumstances direct market access can benefit smallholder farmers. We analyze these questions by comparing two widely observed strategies: contract farming, which provides farmers with assured sales of their output to a buying firm, and rural livestreaming, whereby farmers sell to consumers in live broadcasts run by media companies. We show that, compared to contract farming, the direct market access enabled by livestreaming may mitigate exploitation and improve farmers' income when they plant niche crops or incur low planting costs. Otherwise, direct market access can aggravate exploitation and hurt farmers' income. Furthermore, direct market access can become more effective in improving farmers' income under yield uncertainty, but may backfire when farmers are supported by subsidies.

Title: Virality of Information Diffusion on WhatsApp

Abstract: This paper explores the structural characteristics of information dissemination on WhatsApp, focusing particularly on the concepts of "breadth" and "depth." "Breadth" refers to the maximum number of groups to which a message is simultaneously forwarded, while "depth" indicates the maximum number of times a message is forwarded. Using a dataset from 1,600 groups in India comprising over 760,000 messages spanning text, images, and videos, this study employs hashing techniques to track message propagation in a privacy-preserving manner. Analysis of cascade size, breadth, and depth reveals significant trends: text and video messages tend to generate larger cascade sizes compared to images. Contrary to public platforms, depth emerges as the primary driver behind widespread information dissemination (which could be due to WhatsApp's limitations on message broadcasts). Additionally, distinct disparities among message types show depth as the decisive factor in text and video cascades, while both breadth and depth significantly contribute to image cascades. These findings underscore the importance of considering structural nuances in understanding information spread dynamics on private messaging platforms, providing valuable insights for effective dissemination strategies and management in digital communication landscapes.
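
A minimal sketch of the privacy-preserving bookkeeping described above: only a content hash is stored, and per-hash reach and depth are computed from forward records. The records and the exact breadth/depth definitions here are simplified assumptions.

    import hashlib
    from collections import defaultdict

    def content_hash(message_bytes):
        return hashlib.sha256(message_bytes).hexdigest()   # content never stored in the clear

    # (hash, group_id, hop_count) records for one hypothetical forwarded video
    forwards = [(content_hash(b"video-123"), g, h)
                for g, h in [(1, 0), (2, 1), (3, 1), (4, 2)]]

    groups_reached, depth = defaultdict(set), defaultdict(int)
    for h, group, hops in forwards:
        groups_reached[h].add(group)                       # breadth-style measure
        depth[h] = max(depth[h], hops)                     # depth: farthest forward hop

    for h in groups_reached:
        print(h[:8], "groups =", len(groups_reached[h]), "depth =", depth[h])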

Title: Mass Vaccination Scheduling: Trading off Infections, Throughput, and Overtime

Abstract: Mass vaccination is essential for epidemic control, but long queues can increase infection risk. We study how to schedule arrivals at a mass vaccination center to minimize a tri-objective function of the expected number of infections acquired while waiting, throughput, and overtime. Leveraging multi-modularity results from a related optimization problem, we construct a solution algorithm and find that our model-based policy significantly outperforms an equally-distributed, equally-spaced schedule. We also discuss managerial insights regarding the optimal schedule's structure and compare it to the well-known "dome-shaped" policies found in other appointment scheduling settings.

Title: Learning in Lost-Sales Inventory Systems with Stochastic Lead Times and Random Supplies

Abstract: Supply uncertainty, characterized by stochastic lead times and random supply quantities, has attracted increasing attention from academia, industries, and governments, particularly in the aftermath of the COVID-19 pandemic. In this paper, we consider the problem of managing lost-sales inventory systems with general supply uncertainty: stochastic lead times and random supplies. Unlike previous studies, we assume the decision maker has no prior information on the stochastic demand and supply. We propose the first provably effective learning algorithm for inventory management problems with censored demand and supply data under general supply uncertainty. We then establish a regret of O(L + poly(L')) for this learning algorithm compared to the best constant-order policy, where L' is the upper bound of the random part and L is the deterministic part of the stochastic lead times. Due to the complicated nature of the considered inventory systems, this problem exhibits three primary technical challenges: the non-convexity of the cost function, the establishment of stability for inventory systems under constant-order policies, and the accurate estimation of long-run average costs. We overcome these challenges through novel approaches, some of which are of independent interest. We also conduct numerical experiments to demonstrate the effectiveness of our algorithm.

Title: Customer Reward Programs for Two-Sided Markets

Abstract: Some two-sided markets have been spending billions of dollars on customer reward programs every year. Yet, there is little research on the rationale and impacts of such programs. This paper examines customer reward programs in two-sided markets, investigating the efficacy of such programs and highlighting their interplay with matching schemes. We adopt an analytical model for a platform that interacts with customers and service providers over an infinite time horizon. Under a customer reward program, a customer earns a cash reward for every purchase with a finite expiration term, which can be used to offset the selling price in a subsequent purchase. Customers are heterogeneous in their request probabilities and valuations, while providers differ in their service costs. We show that adopting a customer reward program can often dramatically improve the platform's profit. Importantly, matching schemes play an important role -- customer reward programs are much more lucrative under priority matching schemes than a random matching scheme. Overall, we conclude that customer reward programs can be an even more important profit-boosting tool for two-sided markets than for traditional one-sided markets. We also discuss the welfare implications of our findings.

Title: The Wisdom of Crowds When Experts Use Algorithmic Advice

Abstract: In many managerial settings, good decision making depends on obtaining an accurate forecast of an uncertain variable of interest. Useful knowledge about the variable may be accessible to both human experts and AI models, where each may have their own relative advantages in forming an accurate forecast. To make use of both sources of information, AI advice can be provided to human experts, who can assess how much weight it should be given when updating their prior forecasts. Since useful information may be held by different individuals, combining forecasts from multiple experts can also boost the accuracy of the aggregate forecast. However, while providing AI advice to humans can be helpful at an individual level, it may also induce correlations in judgment errors that hamper efficient combination of information at the aggregate level. Using a stylized Bayesian model of information held by human experts and the algorithm, we derive a new procedure for aggregating judgments when humans receive AI advice. The method uses individual responses to estimate how much weight to put on human versus AI forecasts, and forms an aggregate forecast by taking the final average human forecast and adjusting it toward or away from both the initial average human forecasts and the average AI advice.
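
The adjustment described in the last sentence can be sketched as follows; the coefficients here are hypothetical placeholders, whereas the paper estimates their signs and magnitudes from the individual responses.

    def aggregate(initial_human, final_human, ai_advice, a=0.3, b=-0.2):
        # a and b are assumed values; estimated coefficients may push the forecast
        # toward or away from the initial human average and the AI advice.
        init_avg = sum(initial_human) / len(initial_human)
        final_avg = sum(final_human) / len(final_human)
        return final_avg + a * (final_avg - init_avg) + b * (final_avg - ai_advice)

    print(aggregate(initial_human=[10, 14], final_human=[11, 15], ai_advice=16))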

Title: Optimal Congestion Signaling to Customers with Heterogeneous Patience

Abstract: In an unobservable queue, where customers lack complete wait-time information, a throughput-maximizing server aims to exploit the information asymmetry by strategically signaling coarse congestion information to incentivize customers' arrival into the system. The customers make a calculated decision about joining the queue by forming a belief about their utility given the congestion signal provided by the server. Using the Bayesian persuasion framework to model the customers' response, we map the problem of designing an optimal signaling mechanism to finding an optimal policy in a Constrained MDP problem. Afterward, we exploit the Constrained MDP formulation to derive the structure of the optimal policy. When customers are heterogeneous, we discover a counter-intuitive phenomenon where the optimal signaling mechanism attains a laminar structure, as opposed to the monotone structure commonly seen in MDP settings. We further show that the laminar structure of the optimal policy is prevalent in a large class of admission control problems.

Title: Dynamic Assignment of Jobs to Workers with Learning Curves

Abstract: We study the problem of dynamically assigning workers to jobs that arrive stochastically over time. Departing from existing versions of this problem, we consider a variant motivated by a core problem in operating room management in which workers can develop familiarity with jobs.  When a worker is assigned a job, their familiarity with that job increases, and it decreases when left unassigned. The job completion cost is a function of worker familiarity, with higher familiarity levels leading to lower costs. This problem gives rise to a challenging Markov decision process with exogenously evolving stochastic combinatorial constraints and endogenously evolving familiarity levels; therefore, approximations are needed. First, we develop familiarity-agnostic (FA) policies that prescribe an assignment to each realization of worker-job availability, independent of familiarity levels, to sidestep the complex endogenous dynamics. Second, we construct Lagrangian relaxation (LR) policies that relax the stochastic combinatorial constraints, decoupling the problem across worker-job pairs. We provide insights into scenarios where FA and LR policies perform near-optimally. For scenarios in which these policies are sub-optimal, we propose a new mechanism to combine multiple LR and FA policies, capturing their collective strengths. We establish the theoretical properties of our policies and numerically compare their performance.

Title: When Where Watt: Harnessing the Value of Time and Location of Electricity Generation for Renewables

Abstract: Renewable energy sources are expected to account for 38% of global electricity generation by 2027, nearly doubling from 20% in 2015, due to growing policy support, energy security concerns, and cost competitiveness. Given this unprecedented rise, selecting renewable generation plant locations using existing ad-hoc approaches poses challenges arising from the failure to capture the system-wide impacts of the time and location of intermittent electricity generation. As a remedy, we propose a new site selection metric, Quality Adjusted Power Value (QAPV), which captures the monetary value of electricity by accounting for the time and location of generation. We then model a price-making mechanism for wholesale electricity markets and measure counterfactual revenue for renewable power plants when sited with QAPV. We validate our methodology for locating wind turbines using high-granularity wind-speed data from Texas: we propose a counterfactual siting plan for wind turbines installed in Texas between 2010 and 2015 and measure the resulting changes in plant revenue. Our work contributes two insights: for renewable developers, our site selection metric can be used to compare potential investment sites; and for grid planners and policymakers, our results on counterfactual revenue can enable discussions around policies that allow efficient investments in renewables.
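
A hedged sketch of the spirit of such a metric: value a candidate site by its generation-weighted price rather than by raw energy output (the hourly numbers below are toy data, and the actual QAPV definition may differ):

    def generation_weighted_value(generation_mwh, price_per_mwh):
        # Average $/MWh a site would earn, weighting each hour's price by how much
        # the site actually generates in that hour.
        revenue = sum(g * p for g, p in zip(generation_mwh, price_per_mwh))
        energy = sum(generation_mwh)
        return revenue / energy if energy > 0 else 0.0

    prices = [20, 25, 90, 110]                                           # hourly locational prices ($/MWh)
    print("site A:", generation_weighted_value([1, 2, 8, 9], prices))    # generates mostly in peak hours
    print("site B:", generation_weighted_value([9, 8, 2, 1], prices))    # generates mostly off-peak

Two sites with identical capacity factors can thus differ sharply in value once the time and location of their output are priced in, which is the point of the metric.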

Title: Data-Driven Dynamic Assortment in Online Platforms: Learning about Two Sides

Abstract: Two-sided online platforms are reshaping the landscapes of various sectors. Unlike traditional marketplaces such as Amazon, a two-sided platform allows participants on the demand side (e.g., customers) or the supply side (e.g., providers) to be "active." When participants are active, they must initiate proposals to interact with agents on the other side of the platform to facilitate transactions. With the rise of online platforms with active participants (e.g., HomeAdvisor), understanding the preferences of not only customers but also supply-side participants plays a crucial role in enhancing the platforms' financial performance. The caveat is that the preferences of customers and sellers, which are key to the success of assortment decisions, are unknown to platform managers, and they must be learned over time. Furthermore, platforms often lack adaptive decision support systems that facilitate successful matches between supply- and demand-side participants under incomplete information.

This paper aims to bridge this crucial gap by formulating a dynamic capacitated assortment selection problem faced by the manager of a two-sided online platform, where heterogeneous customers sequentially arrive on the platform. In each period, the platform manager chooses a subset of sellers from a finite set of sellers, tailored to the customer type. Customers are active, so an arriving customer must initiate a proposal to a seller on the platform to acquire products (or services). Upon observing the assortment, the customer sends a proposal to at most one seller in the assortment, according to a multinomial logit (MNL) choice model. After K periods, all sellers review their respective proposals and decide to match with at most one customer according to an MNL model, and this cycle repeats. The objective of the platform manager is to dynamically learn the choice model parameters of both customers and sellers while maximizing the total expected reward over a planning horizon of T periods.

The literature on dynamic assortment selection problems has solely analyzed one-way learning scenarios, where the focus is to understand the preferences of customers; that formulation assumes the preferences of the other side are either nonexistent or known. Our study diverges from this research stream by studying a dynamic assortment selection problem with two-way learning: the decision maker (the platform manager) simultaneously learns about both customer and seller preferences to dynamically choose assortments under incomplete information. To our knowledge, our paper is the first to examine a dynamic assortment selection problem with this two-way learning. We employ an exploration-exploitation approach to develop an asymptotically optimal online algorithm that dynamically learns the choice model parameters of both customers and sellers while optimizing the platform's objective. Our online algorithm achieves a non-asymptotic worst-case regret bound that significantly improves upon the worst-case regret bounds achieved by the existing one-way learning algorithms developed for dynamic assortment selection problems. Our study contributes to the advancement of assortment optimization algorithms for online platforms, especially within the framework of dynamic two-way learning. By leveraging the power of data-driven approaches and a two-way learning framework, online platforms can increase transaction volumes and ultimately drive growth and profitability.

Title: Solving Policy and Reverse Supply Chain Design Using Continuous Approximation, Monotonicity Analysis, and Contextual Optimization

Abstract: Driven by the need for a circular economy, it is critical to solve recycling policy problems to improve recycling rates. In this talk, I plan to address policies to improve the recycling rate for California's Beverage Container Recycling Program (BCRP, also known as the “Bottle Bill”) and Massachusetts' Municipal Solid Waste (MSW) recycling network. For California's Bottle Bill, I integrate continuous approximation for the facility location portion and monotonicity analysis to optimize the container redemption value (CRV) and the distance to recycling centers. For Massachusetts' MSW recycling, I leverage continuous approximation for the waste collection routing portion and contextual optimization for the reverse logistics portion to optimize the endogenous recycling collection policy. I plan to share simple and approximately optimal policy options that California and Massachusetts can implement in their states. Finally, I will share ways that the integration of these previously disjoint methods can be extended to many other interesting and impactful problems.

Title: Wasserstein Distributionally Robust Logistic Regression with Sparse Cardinality Constraints

Abstract: In this study, we tackle the problem of Wasserstein distance-based distributionally robust logistic regression with a cardinality constraint, aiming to minimize the expected risk against the worst-case distribution around the empirical distribution while imposing sparsity on the model. This robustness and sparsity enhance the model's interpretability. We recast the sparsity constraint using a difference-of-convex-functions penalty and solve the reformulated problem via the proximal subgradient method, for which we provide a convergence guarantee. We also present statistical analysis results for the sparse robust logistic regression estimator.

Title: Regret Distribution in Stochastic Bandits: Optimal Interplay between Expectation and Tail Risk

Title: Promoting Circular Supply Chains under Repair Heterogeneity

Abstract: Problem definition: We study a contract design problem where a manufacturer outsources repairs of returned components to a supplier. The manufacturer faces a cost-sustainability trade-off: minimizing the total cost while incentivizing the supplier for a higher repair rate. However, the supplier, endowed with finite resource capacity, prioritizes repairs based on costs, and allocates its resource capacity between repair and production. Methodology/results: We propose a Stackelberg model where the manufacturer offers a price premium based on the severity under both linear and quadratic structures. Based on the offered premium, the supplier applies its prioritization rule and commits to a repair rate. We identify multiple equilibrium outcomes. The manufacturer aims to enforce a certain equilibrium outcome through contracts, depending on its level of commitment to sustainability. We extend our model by considering a dual-channel incentive contract where the manufacturer offers an upfront investment to improve the supplier’s repair capability alongside the price premium. Managerial implications: Our study sheds light on how price premium and investment can be formulated as contract terms to manage outsourced repairs for capital goods in the B2B market. By adjusting the premium and investment, capital goods manufacturers can balance cost-effectiveness with desired repair rates for circular after-sales service.

Title: Tactical Fleet Planning in Drone-Enabled Deliveries and Predicting Drone Delivery Efficiency in Urban Areas Using GNNs

Abstract: Last-mile delivery is a time-sensitive and costly leg of the supply chain. Drones offer substantial value to this sector by avoiding congested roads and using faster aerial pathways. Strategic incorporation of drones necessitates a balanced assessment of reduced delivery times against their acquisition cost. This study presents drone-assisted last-mile delivery strategies built on such assessments. Initially, we introduce a parametric routing design to optimize drone-assisted delivery that bridges the gap of incorporating operational constraints, including drone flight range, and coordinating truck routes with drone trajectories. Our analysis reveals three scenarios in which delivery efficiency is restricted by the truck's capacity, the drone's range, or the synchronization between trucks and drones. Furthermore, we use machine learning methods to evaluate the effectiveness of drone delivery in different urban settings, as it varies based on socio-geographic differences and the structure of the route network. We use Graph Neural Networks (GNNs) to leverage the data associated with each city's urban structure. We also developed and leveraged the "Drone Sidekick Tool," an interactive tool designed to collect data on travel distance, travel time, and environmental impacts of implementing drone delivery. This tool, coupled with our algorithmic approach, supports the development of efficient and adaptable drone delivery solutions.

Title: Extended CNA Training Hours and Their Impact on Nursing Homes

Abstract: This study investigates the impact of extended Certified Nurse Aide (CNA) training hours on nursing home quality and staffing. CNAs play a critical role in caring for nursing home residents, yet concerns persist regarding the adequacy of their training. While federal regulations mandate a minimum of 75 training hours, composed of clinical and didactic components, several states have opted to increase this requirement. However, the causal relationship between extended training hours and nursing home outcomes remains unclear. Exploiting changes in state training requirements between 2009 and 2019, this study employs a difference-in-differences (Diff-in-Diff) analysis to evaluate the effects on nursing home quality and staffing. The findings speak to the challenges of quality improvement and staffing shortages faced by nursing homes, informing efforts to enhance resident care and sustain the healthcare workforce.
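
For readers unfamiliar with the identification strategy, the sketch below runs the canonical two-group, two-period Diff-in-Diff regression on synthetic data; the treatment effect, the variable names, and the simple design are illustrative stand-ins for the study's staggered state-level policy changes.

    import numpy as np

    rng = np.random.default_rng(2)
    n = 2000
    # Hypothetical facility-level observations: 'treated' marks states that extended CNA
    # training hours, 'post' marks observations after the requirement changed.
    treated = rng.integers(0, 2, n)
    post = rng.integers(0, 2, n)
    quality = 3.0 + 0.2 * treated + 0.3 * post + 0.4 * treated * post + rng.normal(0, 1, n)

    # Two-by-two DiD regression: quality ~ treated + post + treated:post
    X = np.column_stack([np.ones(n), treated, post, treated * post])
    beta, *_ = np.linalg.lstsq(X, quality, rcond=None)
    print("DiD estimate (interaction coefficient):", round(beta[3], 3))   # ~0.4 by construction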

Title: Data-driven Price Optimization: From Observational Study to Experimental Design

Abstract: Data-driven price optimization has gained popularity over the past decades. This talk starts with our collaboration with a leading consumer electronics retailer in the Middle East, where only historical sales data can be used to optimize price decisions. The intrinsic sparsity of the dataset, characterized by limited price changes and low sales volumes, renders traditional models ineffective for deriving a reasonable price elasticity function. To address this challenge, we advocate a separable model that leverages two submodels to distinctly capture the effects of price and contextual information. Subsequently, we explore the design of dynamic pricing experiments that permit adaptive price adjustments to gauge market response. Beyond price elasticity estimation, we are also interested in maximizing the expected revenue earned during the experiment and in controlling the tail risk that, if left unchecked, may lead to significant financial losses. Our analysis statistically characterizes the interplay between these three pivotal objectives, offering a comprehensive perspective on the multifaceted nature of experimental design for evaluating effective pricing strategies.
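
The sketch below shows one possible instantiation of a separable specification (not the collaboration's proprietary model): a log-linear price submodel and a linear context submodel, fitted in two stages on synthetic low-volume sales data. The demand-generating constants and the crude log transform are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(3)
    n = 500
    price = rng.uniform(50, 150, n)
    context = rng.normal(size=(n, 3))                 # e.g., seasonality, promotion, stock level
    lam = np.exp(8.0 + context @ np.array([0.3, -0.2, 0.1])) * price ** (-1.5)
    sales = rng.poisson(lam)                          # sparse, low-volume sales

    y = np.log(sales + 0.5)                           # crude variance-stabilizing offset
    # Submodel 1: context effects (the price effect is absorbed into the residual)
    Xc = np.column_stack([np.ones(n), context])
    gamma, *_ = np.linalg.lstsq(Xc, y, rcond=None)
    # Submodel 2: recover the price elasticity from the residual variation in price
    logp = np.log(price) - np.log(price).mean()
    elast, *_ = np.linalg.lstsq(logp.reshape(-1, 1), y - Xc @ gamma, rcond=None)
    print("estimated price elasticity:", round(-elast[0], 2))   # true elasticity is 1.5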

Title: Transformers as Operations Manager

Abstract: Operations managers need to make sequential decisions under uncertainty, balancing the need to learn about an unknown environment against the risk of immediate revenue losses. This paper introduces the Operations Management Generative Pre-training Transformer (OMGPT), a transformer-based framework for solving sequential decision-making tasks. Our empirical evaluation of OMGPT underscores its effectiveness, demonstrating notable performance improvements over existing benchmark algorithms across various tasks, such as dynamic pricing and the newsvendor problem. We also assess OMGPT's robustness in environments that deviate from the training environments, including its adaptability to changes in context dimensions, action spaces, and horizon lengths. Additionally, we test the framework's capability to navigate multiple and evolving environments. A series of experiments further visualizes OMGPT's operational logic and its strategy for balancing exploration and exploitation. Through this study, OMGPT establishes itself as a potent tool for operations management.
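
The abstract does not spell out the training recipe, so the sketch below only illustrates the general "pretrain a sequence model on histories paired with good actions" idea that such frameworks typically rely on, using newsvendor environments and clairvoyant-quantile labels; the environment family, the critical ratio, and the omission of the transformer itself are all simplifying assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    def generate_pretraining_pairs(n_envs=100, horizon=20):
        """Build (history -> action) pairs a sequence model could be pretrained on.
        Each environment has its own demand distribution; the supervision label is the
        clairvoyant newsvendor quantile for that environment (critical ratio 0.7)."""
        pairs = []
        for _ in range(n_envs):
            mu, sigma = rng.uniform(20, 80), rng.uniform(5, 15)
            optimal_q = mu + sigma * 0.5244          # standard normal quantile at 0.7
            demands = rng.normal(mu, sigma, horizon)
            for t in range(1, horizon):
                pairs.append((demands[:t].copy(), optimal_q))  # observed history -> good action
        return pairs

    pairs = generate_pretraining_pairs()
    print(len(pairs), "history/action pairs; first history length:", len(pairs[0][0]))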

Title: 30 Million Canvas Grading Records Reveal Widespread Sequential Bias and System-Induced Surname Initial Disparity

Abstract: The widespread adoption of learning management systems in educational institutions has yielded numerous benefits for teaching staff but also introduced the risk of unequal treatment towards students. We present an analysis of over 30 million Canvas grading records, revealing a significant bias in sequential grading tasks. We find that assignments graded later in the sequence tend to (1) receive lower grades, (2) receive comments that are notably more negative and less polite, and (3) exhibit lower grading quality measured by post-grade complaints from students.

Furthermore, we show that the system design of Canvas, which pre-orders submissions by student surnames, transforms the sequential bias into a significant disadvantage for students with alphabetically lower-ranked surname initials. This surname initial disparity is observed across a wide range of subjects and is more prominent in the social sciences and humanities than in engineering, science, and medicine. The assignment-level surname disparity aggregates into a course-level disparity in students' GPAs and can potentially lead to inequitable job opportunities. For platforms and educational institutions, the system-induced surname grading disparity can be mitigated by randomizing the order of student submissions in grading tasks. In addition, educational institutions should keep graders' workloads at a reasonable level to reduce fatigue.

Title: Regularization for Adversarial Robust Learning

Abstract: Despite the growing prevalence of artificial neural networks in real-world applications, their vulnerability to adversarial attacks remains a significant concern, which motivates us to investigate the robustness of machine learning models. While various heuristics aim to optimize the distributionally robust risk under the $\infty$-Wasserstein metric, this notion of robustness frequently suffers from computational intractability. To tackle this challenge, we develop a novel approach to adversarial training that integrates entropic regularization into the distributionally robust risk function. This regularization yields a notable computational improvement over the original formulation. We develop stochastic gradient methods with near-optimal sample complexity to solve this problem efficiently. Moreover, we establish the regularization effects and demonstrate that this formulation is asymptotically equivalent to a regularized empirical risk minimization (ERM) framework by considering various scaling regimes of the regularization and robustness levels. These regimes yield gradient norm regularization, variance regularization, or a smoothed gradient norm regularization that interpolates between these extremes. We numerically validate the proposed method in supervised learning and reinforcement learning applications and showcase its state-of-the-art performance against various adversarial attacks.
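
A hedged numpy sketch of the smoothing idea: the hard inner maximization over perturbations in a ball is replaced by a log-sum-exp (entropic) average over sampled perturbations, here wrapped around a plain logistic model rather than a neural network. The perturbation radius, temperature, sample counts, and the plain gradient-descent loop are illustrative choices, not the paper's algorithm.

    import numpy as np

    def entropic_adv_loss_grad(w, X, y, eps=0.3, eta=5.0, m=20, rng=None):
        """Value and gradient of a smoothed adversarial logistic loss,
        (1/eta) * log E_delta[exp(eta * loss(x + delta))],
        with delta sampled uniformly from an l_inf ball of radius eps (labels y in {-1,+1})."""
        rng = rng or np.random.default_rng(0)
        n, d = X.shape
        grad, total = np.zeros(d), 0.0
        for i in range(n):
            Xi = X[i] + rng.uniform(-eps, eps, size=(m, d))   # sampled perturbations
            z = Xi @ w
            losses = np.log1p(np.exp(-y[i] * z))
            wts = np.exp(eta * (losses - losses.max())); wts /= wts.sum()  # softmax over perturbations
            g_each = (-y[i] / (1.0 + np.exp(y[i] * z)))[:, None] * Xi
            grad += wts @ g_each
            total += losses.max() + np.log(np.mean(np.exp(eta * (losses - losses.max())))) / eta
        return total / n, grad / n

    rng = np.random.default_rng(5)
    X = rng.normal(size=(100, 5)); w_true = rng.normal(size=5)
    y = np.sign(X @ w_true + 0.1 * rng.normal(size=100))
    w = np.zeros(5)
    for _ in range(100):                                      # gradient descent on the smoothed risk
        loss, g = entropic_adv_loss_grad(w, X, y, rng=rng)
        w -= 0.5 * g
    print("smoothed adversarial loss:", round(loss, 3))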

Title: The Relative Indirect Effects of Technology Bias and Implicit Bias on Racial Disparity in Service Delivery and Sepsis Mortality

Abstract: Racism has been identified as a cause of disparity in healthcare outcomes in the U.S. but much work remains to be done. We disentangle the relative impact of two types of racism on inpatient hospital mortality: technology bias embedded in medical equipment and implicit bias of clinicians that negatively affects service delivery. Drawing on clinical data from intensive care unit (ICU) patients with sepsis, we use propensity score matching to create groups of white and nonwhite patients balanced on severity of illness and other variables. We run a causal mediation analysis to test our model that links patient race to hospital mortality through two mediating variables related to service delivery: discrepancies in blood oxygen saturation measurements from a known technology bias in the medical device used to measure blood oxygen saturation at the bedside (pulse oximeter), and administration of supplemental oxygen, which could be impacted by implicit biases of clinicians. We first replicate prior findings that technology bias of pulse oximeters results in a higher probability of a discrepancy in oxygen saturation readings from the bedside equipment versus from a more accurate laboratory test for nonwhite patients than for white patients; and that a higher discrepancy lowers the likelihood that the patient receives supplemental oxygen during the ICU stay. We then make a unique contribution by finding that nonwhites with sepsis have a 79% higher risk of hospital mortality in the ICU than whites after controlling for the severity of illness and our mediating variables and that 45% of this racial disparity in mortality stems from technology bias embedded in pulse oximeters while 16% arises from implicit bias. Our results indicate that reducing racial disparities will require addressing both types of racism, but that technology bias has a larger negative impact on mortality than implicit bias does. We estimate that over 2,300 lives of racial and ethnic minorities with sepsis could be saved each year by eliminating these two biases. 

Title: New Formulations and Valid Inequalities for the Least Cost Influence Problem on Social Networks

Abstract: This work studies the least cost influence problem, in which a decision maker wants to promote a new product over a given social network with the goal of having every individual adopt the product at the least cost. In the network, each individual receives influence from neighbors who have already adopted the product and may in turn influence others once they adopt it; adoption is triggered when the accumulated influence, together with any incentive offered, reaches an individual-specific threshold. We develop several integer programming (IP) formulations and design valid inequalities to strengthen them. We present theoretical results and conduct comprehensive numerical experiments that demonstrate the promise of the proposed methods. In addition, we conduct a thorough study of the optimal promotion strategy on different network structures and give managerial insights into this problem.
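
To fix ideas only, one standard time-expanded formulation of the linear-threshold variant is sketched below; it is not the new formulations or valid inequalities presented in the talk. Here x_i is the incentive paid to individual i, y_{i,t} indicates adoption by period t, w_{ji} are influence weights, theta_i is the adoption threshold, and T is a propagation horizon.

    \min_{x \ge 0,\; y_{i,t} \in \{0,1\}} \; \sum_{i \in V} x_i
    \quad \text{s.t.} \quad
    \theta_i \, y_{i,0} \le x_i, \qquad
    \theta_i \, y_{i,t} \le x_i + \sum_{j \in N(i)} w_{ji}\, y_{j,t-1} \quad (1 \le t \le T), \qquad
    y_{i,t-1} \le y_{i,t}, \qquad
    y_{i,T} = 1 \quad \forall i \in V.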

Title: Advice Provision in the Pandemic: The Impact of Information Granularity on Social Protection

Abstract: This paper investigates strategies for providing public health advice during a pandemic, such as COVID-19 or a potential future disease, focusing on the granularity of the information given by a social planner. We compare two main information provision strategies: uniform and targeted. Under a uniform information provision strategy, the social planner announces a weighted average of the disease burden across the entire society. Under a targeted information provision strategy, the social planner specifies disease burdens for different social groups. We identify conditions under which uniform information provision is more effective than targeted provision. We further extend the analysis of information granularity to multi-period scenarios with single- and multi-dimensional information. For single-dimensional information (age), using the available data on age-structured contact matrices and the information granularity adopted in the US and UK models, we analyze a population consisting of 19 base age groups. Our results demonstrate that the optimal information provision aggregates the 19 base groups into 9 distinct groups and achieves significant cost savings. For multiple dimensions (age and race/ethnicity), our results indicate that the optimal information provision aggregates the 15 base groups into 6 distinct groups. Moreover, aggregating information along the age dimension can be more effective than aggregating it along the race/ethnicity dimension. These findings show that more granular information does not necessarily lead to better pandemic management, and they offer insights for designing effective public health information strategies.

Title: Contextual Data-Integrated Newsvendor Solution with Operational Data Analytics (ODA)

Abstract: We study the data-integrated newsvendor problem in which the random demand depends on a set of covariates. Building on the solutions analyzed in existing studies, we identify the equivariant class of operational statistics (i.e., mappings from the demand and covariate data to the inventory decision) to develop the operational data analytics (ODA) framework for the contextual newsvendor problem. The equivariance property is intuitively appealing, and it is justified by the fact that, regardless of the sample size, no other decision rule can uniformly dominate the optimal operational statistic within the equivariant class. We also demonstrate that nonequivariant solutions can produce unstable empirical performance with limited samples, whereas equivariant solutions exhibit robustness. When the distribution family of the demand is known but the coefficients of the demand function are unknown, we can directly validate the decision performance of operational statistics within the equivariant class and derive the uniformly optimal solution. When the distribution family of the demand is unknown, we formulate the data-integration model as a subclass of equivariant operational statistics obtained by adaptively boosting a candidate solution. For decision validation, we project the validation data onto the demand for the covariates of interest, with the projection constructed using the structure of the candidate solution. We demonstrate the superior small-sample performance of adaptive boosting and establish the consistency of the boosted operational statistics. Our ODA formulation, built on the inherent characteristics of the contextual newsvendor problem, highlights the importance of understanding structural properties in data-integrated decision making.
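
A small Monte Carlo sketch of the (non-contextual) operational-statistics idea the framework builds on: restrict attention to scale-equivariant rules of the form q = k * (sample mean) and choose k by directly validating expected profit, rather than plugging in a point estimate. Exponential demand and the price/cost constants below are assumptions for illustration; the contextual and boosting machinery of the talk is omitted.

    import numpy as np

    rng = np.random.default_rng(6)
    price, cost, n = 10.0, 6.0, 5                  # hypothetical selling price, unit cost, sample size

    def expected_profit_of_rule(k, reps=20000, mean_demand=100.0):
        """Draw a size-n sample, order q = k * (sample mean), evaluate profit on a fresh
        demand draw. For the exponential (scale) family, the best k does not depend on
        mean_demand, which is what makes direct validation of the rule possible."""
        samples = rng.exponential(mean_demand, size=(reps, n))
        q = k * samples.mean(axis=1)
        d = rng.exponential(mean_demand, size=reps)
        return (price * np.minimum(q, d) - cost * q).mean()

    ks = np.linspace(0.2, 1.5, 27)
    validated_k = ks[np.argmax([expected_profit_of_rule(k) for k in ks])]
    plug_in_k = np.log(price / cost)               # plug-in critical fractile: q = (sample mean) * ln(p/c)
    print("validated k:", round(validated_k, 2), "| plug-in k:", round(plug_in_k, 2))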

Title: Crafting Freelance Success: Unveiling the Impact of Availability on Match Quality through Conversational Analytics

Abstract: The past decade has witnessed the rapid growth of online labor marketplaces. However, the key challenge for marketplace intermediaries remains fostering high-quality matches that ensure outsourced tasks are completed successfully by freelancers. This study delves into the critical role of a freelancer's availability in shaping match quality. Utilizing a large-scale conversational dataset from a freelance platform, we uncover the relationship between availability and match quality. Our findings reveal that match quality is notably improved when both parties engage in positive discussions about availability at the outset of their conversation; however, this positive effect diminishes over time. Conversely, negative discussions about availability result in consistently lower match quality, with this negative effect persisting until the middle stages of the conversation. Our paper offers essential operational and managerial guidance for platform operators. To encourage clients to prioritize workers with greater availability in the early stages after a match is formed, platforms can signal this dynamic availability through the ongoing conversations between freelancers and clients. This proactive approach not only enhances transparency but also facilitates informed decision-making, ultimately leading to more successful and satisfying collaborations for all parties involved.

Title: Operational Data Analytics (ODA) for Service Speed Design

Abstract: We develop the operational data analytics (ODA) framework for the classical service design problem in G/G/c/k systems. The customer arrival rate is unknown; instead, historical data on interarrival times are available. The data-integration model, which specifies the mapping from the arrival data to the service rate, is formulated based on the time-scaling property of the stochastic service process. Validating the data-integration model against the long-run average service reward leads to a uniformly optimal service rate for any given sample size. We further derive the ODA-predicted reward function based on the data-integration model, which gives a consistent estimate of the underlying reward function. Our numerical experiments show that the ODA framework leads to an efficient design of service rate and capacity that is insensitive to model specification. The ODA solution exhibits superior performance compared with conventional estimate-then-optimize solutions in the small-sample regime.

Title: A Minibatch-SGD-based Learning Meta-Policy for Inventory Systems with Myopic Optimal Policy

Abstract: Stochastic gradient descent (SGD) has proven effective in solving many inventory control problems with demand learning. However, it often faces the pitfall of an infeasible target inventory level that is lower than the current inventory level. Several recent works (e.g., Huh and Rusmevichientong 2009; Shi et al. 2016) successfully resolve this issue in various inventory systems, but their techniques are rather sophisticated and difficult to apply to more complicated settings. In this paper, we address the infeasible-target-inventory-level issue from a new technical perspective: we propose a novel minibatch-SGD-based meta-policy. The meta-policy is flexible enough to apply to a general inventory-systems framework covering a wide range of inventory management problems with a myopic clairvoyant optimal policy. By devising the optimal minibatch scheme, our meta-policy achieves $O(\sqrt{T})$ regret in the general convex case and $O(\log T)$ regret in the strongly convex case. To demonstrate the power and flexibility of the meta-policy, we carefully design application-specific subroutines and apply it to three important inventory control problems: multi-product and multi-constraint systems, multi-echelon serial systems, and one-warehouse multi-store systems.
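
A minimal sketch of the minibatch idea for a single-product backlogged system (not the paper's general framework): the base-stock level is held fixed within each batch so that the target becomes reachable before the next update, and the averaged per-period subgradient drives a projected SGD step. The cost parameters, demand distribution, step size, and batch size below are hypothetical.

    import numpy as np

    rng = np.random.default_rng(7)
    h, b = 1.0, 4.0                                # hypothetical holding and backlog costs
    T, batch = 5000, 50
    S, S_max, inv = 0.0, 200.0, 0.0                # base-stock level, projection bound, net inventory

    t = 0
    while t < T:
        subgrads = []
        for _ in range(batch):                     # the base-stock level is held fixed within a batch
            y = max(inv, S)                        # order up to S; excess inventory simply carries over
            d = rng.gamma(4.0, 10.0)               # demand from an unknown distribution
            subgrads.append(h if d < S else -b)    # per-period subgradient of the newsvendor cost at S
            inv = y - d
            t += 1
        S = float(np.clip(S - (20.0 / np.sqrt(t)) * np.mean(subgrads), 0.0, S_max))

    benchmark = np.quantile(rng.gamma(4.0, 10.0, 100000), b / (h + b))
    print("learned base-stock level:", round(S, 1), "| clairvoyant quantile:", round(benchmark, 1))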

Title: The Impact of Information-Granularity and Prioritization on Patients’ Care Modality

Abstract: The past few years have witnessed a significant expansion in telemedicine adoption. On one hand, telemedicine has the potential to increase patients' access to medical appointments. On the other hand, due to the limitations of remote diagnostic and treatment methods, telemedicine may be insufficient for patients' treatment needs and may necessitate subsequent in-person follow-up visits. To better understand this tradeoff, we model the healthcare system as a queueing network providing two types of service: telemedicine and in-person consultations. We assume that an in-person visit guarantees successful treatment, whereas a telemedicine visit may fail to meet the patient's treatment needs with a probability that is contingent on individual patient characteristics. We formulate patients' strategic choices between these care modalities as a queueing game and characterize both the game-theoretic equilibrium and the socially optimal patient choices. We further examine how improving patients' understanding of their telemedicine suitability through predictive analytics at the online triage stage affects system performance. We find that increasing information granularity maximizes the stability region of the system but may not always be optimal in reducing the average waiting time. This limitation, however, can be overcome by simultaneously deploying a priority rule that induces the social optimum under specific conditions.

Title: Language Prompt Selection via Simulation Optimization

Abstract: With the advancement of generative language models, the selection of prompts has gained significant attention in recent years. A prompt is an instruction or description provided by the user that guides the generative language model in content generation. In contrast to existing prompt selection methods that rely on human labor, we consider facilitating this selection through simulation optimization, aiming to maximize a pre-defined score for the selected prompt. Specifically, we propose a two-stage framework. In the first stage, we determine a sufficiently large feasible set of prompts, each represented by a moderate-dimensional vector. In the subsequent evaluation and selection stage, we construct a surrogate model of the score as a function of the moderate-dimensional vectors representing the prompts, and we sequentially select the prompt to evaluate based on this surrogate model. We prove the consistency of the sequential evaluation procedure in our framework. We also conduct numerical experiments to demonstrate the efficacy of the proposed framework and provide practical instructions for implementation.
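
One way to instantiate the two-stage idea is sketched below; the talk's framework may differ. Random vectors stand in for real prompt embeddings, a ridge-regression surrogate models the noisy score, and an upper-confidence-bound rule picks the next prompt to evaluate. The scoring function, embeddings, and all constants are hypothetical.

    import numpy as np

    rng = np.random.default_rng(8)
    n_prompts, dim, budget = 50, 8, 60

    # Stage 1 (stand-in): each candidate prompt is represented by a moderate-dimensional vector
    E = rng.normal(size=(n_prompts, dim))
    theta_true = rng.normal(size=dim)
    def evaluate(i):                                # noisy simulated scoring of prompt i
        return E[i] @ theta_true + 0.5 * rng.normal()

    # Stage 2: sequential evaluation guided by a ridge-regression surrogate with a UCB rule
    lam, alpha = 1.0, 1.0
    A = lam * np.eye(dim)                           # E^T E + lam * I, updated online
    b = np.zeros(dim)
    for _ in range(budget):
        A_inv = np.linalg.inv(A)
        theta_hat = A_inv @ b
        ucb = E @ theta_hat + alpha * np.sqrt(np.einsum('ij,jk,ik->i', E, A_inv, E))
        i = int(np.argmax(ucb))
        s = evaluate(i)
        A += np.outer(E[i], E[i]); b += s * E[i]

    best = int(np.argmax(E @ np.linalg.inv(A) @ b))
    print("selected prompt index:", best, "| true best:", int(np.argmax(E @ theta_true)))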

Title: A Multi-Treatment Forest Approach for Analyzing the Heterogeneous Effects of Team Familiarity

Abstract: Extensive research has revealed that prior collaborative experiences among team members (called “team familiarity”) enhance outcomes of group work in many different environments. In this study, we examine the effect of team familiarity on surgery duration and extend the literature on team dynamics by examining whether the effect of team familiarity is heterogeneous across patients. Because we use multiple variables to measure team familiarity (i.e., multiple treatments of interest), we first develop a new approach, which we call the “MT forest” approach, to estimate heterogeneous effects of multiple treatments and demonstrate the effectiveness of this approach using synthetic data. Then, we apply the MT forest approach to an orthopedic surgery setting to estimate the heterogeneous effects of team familiarity on surgery duration, and investigate how the effect varies across patient features. We find (1) an increase in team familiarity score, especially the anesthesiologist-nurse and surgeon-anesthesiologist familiarity scores, significantly reduces surgery duration, and (2) the effect of team familiarity is heterogeneous across patients with different features. Finally, we develop an optimization model to assess the value of leveraging the heterogeneous effects of team familiarity to better match surgical teams with patients.

Title: Policy Gradient Methods for Finite Horizon Markov Decision Process and Applications in Operations Models

Abstract: We explore policy gradient methods for computing optimal policies of finite horizon Markov Decision Processes (MDPs) with continuous state and action spaces. In general, policy gradient methods do not converge to globally optimal solutions because of the non-convexity of the objective functions. We identify several easily verifiable conditions, shared by various applications, that establish the global Kurdyka-Łojasiewicz (KŁ) condition for the objectives of policy gradient optimization problems. This closes the gap in characterizing the nonconvex landscape of the policy gradient objective for finite horizon MDPs and implies that policy gradient methods attain $\epsilon$-globally optimal solutions with sample complexity of order $1/\epsilon$ and polynomial in the planning horizon length. Our results apply to a host of operations models, including the stochastic cash balance problem and multi-period inventory systems with Markov-modulated demand, giving the first sample complexity results for these problems in the literature.
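
For readers new to the machinery, the sketch below runs a REINFORCE-style policy gradient on a stylized finite-horizon cash-balance problem with a Gaussian policy over the target level; it only illustrates the score-function gradient estimator and makes no claim about the KŁ analysis. The costs, demand distribution, policy parameterization, and step sizes are hypothetical.

    import numpy as np

    rng = np.random.default_rng(9)
    H, h, b, sigma_pi = 4, 1.0, 3.0, 5.0           # horizon, holding/shortage costs, policy noise

    def rollout(theta):
        """One episode of a stylized cash-balance problem: each period the policy sets a
        Gaussian target level with mean theta[t] + theta[H+t] * (current level)."""
        x, cost, score = 0.0, 0.0, np.zeros_like(theta)
        for t in range(H):
            m = theta[t] + theta[H + t] * x
            y = m + sigma_pi * rng.normal()        # upward and downward adjustments are both allowed
            score[t] += (y - m) / sigma_pi ** 2            # d log pi / d theta[t]
            score[H + t] += (y - m) * x / sigma_pi ** 2    # d log pi / d theta[H+t]
            d = rng.normal(30.0, 10.0)             # random net outflow this period
            cost += h * max(y - d, 0.0) + b * max(d - y, 0.0)
            x = y - d
        return cost, score

    theta = np.zeros(2 * H)
    for _ in range(2000):                          # REINFORCE with a batch-mean baseline
        costs, scores = zip(*(rollout(theta) for _ in range(20)))
        baseline = np.mean(costs)
        grad = np.mean([(c - baseline) * s for c, s in zip(costs, scores)], axis=0)
        theta -= 0.05 * grad                       # gradient descent on expected total cost
    print("learned first-period target:", round(theta[0], 1),
          "| critical-fractile benchmark:", round(30.0 + 10.0 * 0.6745, 1))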

Title: Optimal Online Learning of Linear Inflation Rules under Random Yield

Abstract: We consider a periodic-review inventory control system with uncertain demand and yield and focus on Linear Inflation Rules (LIR) for minimizing the long-run average cost. Unlike the existing literature, we assume that the firm does not have access to the demand or yield distribution a priori and relies on past observed realizations with possible censoring of the yield factors. We design an online learning algorithm that collects fully observed realizations and updates the LIR policies by minimizing pseudo-empirical cost functions. With high probability, the algorithm admits a regret upper bound of $O(\sqrt{T}\log T)$. This is the first provable online learning algorithm for LIR policies under random yield. We also make several technical contributions: (1) under a fixed LIR policy, we prove the uniform ergodicity of the inventory sequence under certain assumptions and provide counterexamples when those assumptions do not hold; (2) we prove the Lipschitz continuity of the inventory sequence in the policy parameters under certain assumptions and, again, provide counterexamples otherwise; (3) using empirical process theory, we show uniform convergence of the empirical cost to the long-run average cost over the policy parameters, which establishes the quality of the empirical minimizer.

Title: How Do Robots Affect Firms’ Innovation Performance? Evidence from Spanish Manufacturers

Abstract: This paper examines the impact of robot use on manufacturing firms' innovation performance. The analysis uses a rich panel dataset of Spanish manufacturing firms spanning 27 years (1990-2016). Our findings document, for the first time in the literature, that robot use has a negative effect on firms' process innovation; we do not observe a similar effect on firms' product innovation. We also explore the mechanisms through which robot use may affect process innovation. We find that the negative effect of robot use on process innovation is salient only for complex manufacturing, not for light or heavy manufacturing. In addition, we find that the negative effects of robots on process innovation are smaller for older firms. These results point to a potential mechanism whereby robots impede process innovation by reducing human involvement. Our findings highlight possible disadvantages of robots in manufacturing firms, a notion neglected by the previous literature.

Title: Bayesian Online Multiple Testing: A Resource Allocation Approach

Abstract: We consider the problem of sequentially conducting multiple hypothesis testing experiments, where an irrevocable decision of whether to reject the null hypothesis (equivalently, claim a discovery) must be made before conducting the next experiment. The goal is to maximize the number of discoveries while maintaining a low false discovery rate at all times. We formulate the problem as an online knapsack problem with exogenous random budget replenishment. For general arrival distributions, we show that a simple policy achieves $O(\sqrt{T})$ regret and that this rate is in general not improvable. For discrete arrival distributions, we find that many existing re-solving heuristics in the online resource allocation literature, although they achieve bounded regret in canonical settings, can be overly optimistic, over-claim discoveries, and thus incur regret that is linear in $T$. We show that a little more safety can greatly enhance efficiency: a small additional logarithmic safety budget buffer suffices to reduce the regret to polylogarithmic in $T$. From a practical perspective, we extend the policy to handle continuous and non-stationary arrival distributions as well as unknown $T$. We conduct both synthetic experiments and empirical applications on time series data of New York City taxi passengers to validate the performance of our proposed policies.

Contacts

Zhan Pang
Lewis B. Cullman Rising Star Professor of Management

403 Mitch Daniels Blvd., West Lafayette, IN 47907-2056
1-765-494-4489  zpang@purdue.edu