01-22-2026
Psychologists know that reframing, or changing your mindset, can lead to breakthroughs on entrenched problems. What works therapeutically has a parallel in algorithm design: solving problems faster without losing accuracy, as Daniels School Assistant Professor Alex L. Wang and his coauthors prove in their new paper, “Beyond Minimax Optimality: A Subgame Perfect Gradient Method.” Wang, along with coauthors Benjamin Grimmer and Kevin Shu, challenged the assumption that algorithms for complex data problems should be designed around only the worst-case scenario.
“We changed the way we measure what makes an algorithm good, moving from a static worst‑case metric to a more dynamic one. By changing the metric, we got a better algorithm,” says Wang. He notes that while the algorithm itself is a genuine advance, solving smooth convex problems faster, the deeper contribution is a new way of thinking about efficiency. “Down the road, we hope to develop algorithms with similar guarantees for other classes of problems you may care about. We will be able to solve more interesting problems than we can tackle today.”
This shift in thinking, from planning only for the worst case to adapting as information arrives, matters well beyond the mathematics. Optimization solutions quietly power workforce schedules, last‑mile delivery routes, inventory planning and even the machine learning models that forecast demand or detect fraud, and they have long relied on algorithms that search for the “best” decision under constraints.
Wang and his coauthors began their research with a class of mathematical problems called smooth convex optimization. For decades, the gold-standard algorithms for these problems were designed with only the worst-case scenario in mind.
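For readers who want the mathematical backdrop, the standard setup can be stated in a few lines. The following is textbook background rather than notation taken from the paper itself:

```latex
% Textbook background on smooth convex optimization
% (standard results, not the paper's own notation).
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Smooth convex optimization asks for a minimizer of a convex function
$f:\mathbb{R}^n \to \mathbb{R}$ whose gradient is $L$-Lipschitz:
\[
\min_{x \in \mathbb{R}^n} f(x),
\qquad \|\nabla f(x) - \nabla f(y)\| \le L\|x - y\|
\ \ \text{for all } x, y.
\]
Plain gradient descent with step size $1/L$ guarantees, after $k$ steps,
\[
f(x_k) - f(x^\star) \le \frac{L\|x_0 - x^\star\|^2}{2k},
\]
and accelerated methods improve this to the minimax-optimal rate
\[
f(x_k) - f(x^\star) \le O\!\left(\frac{L\|x_0 - x^\star\|^2}{k^2}\right),
\]
matching the known worst-case lower bound. These static, worst-case
guarantees are the baseline the new work moves beyond.
\end{document}
```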
Wang and his coauthors argue that this worst‑case lens ignores the information the algorithm gathers while it runs. Their key idea is to borrow from game theory — specifically subgame perfect Nash equilibria — to model optimization as a dynamic interaction where each step can respond to what has already been learned.
Traditionally, the problem is treated as an adversary that fixes the hardest possible instance in advance, and the algorithm chooses a strategy it must follow to the end. In the new model, the algorithm still prepares for the worst at the outset, but it is allowed to revise its guarantees as new data and structure are revealed along the way.
This change in mindset leads to a new gradient‑based algorithm that behaves differently from its predecessors while still solving the same problems and returning the same answers. The difference is what happens under the hood: as the algorithm takes many small computational steps, it continuously monitors the information it uncovers and tightens its prediction of how long the solve will take. If the actual instance turns out to be easier than the worst case, as is often true in practice, the method finishes significantly faster and delivers “dynamic guarantees” that improve as evidence accumulates.
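To make that tightening prediction concrete, here is a minimal sketch in Python. It is emphatically not the paper's subgame perfect gradient method; it only illustrates the general mechanism of a guarantee that improves as the run progresses. The function names, the toy quadratic and the assumed radius R (a bound on the distance from any iterate to the minimizer) are all hypothetical ingredients invented for this illustration:

```python
import numpy as np

def gradient_descent_with_certificate(f, grad, x0, L, R, steps=10):
    """Gradient descent on an L-smooth convex function, with a running
    certificate on the optimality gap.

    Illustrative sketch only -- not the subgame perfect method from the
    paper. R is an assumed bound on the distance from any iterate to
    the minimizer (an extra assumption made for this toy certificate).
    """
    x = np.asarray(x0, dtype=float)
    best_lower = -np.inf  # best certified lower bound on min f so far
    for k in range(steps):
        g = grad(x)
        # Convexity gives f(x*) >= f(x) + g.(x* - x) >= f(x) - ||g||*R,
        # so every gradient observed along the way certifies a lower bound.
        best_lower = max(best_lower, f(x) - np.linalg.norm(g) * R)
        gap = f(x) - best_lower  # provable bound on f(x) - f(x*)
        print(f"step {k:2d}: f = {f(x):9.6f}, certified gap <= {gap:9.6f}")
        x = x - g / L  # standard 1/L gradient step for L-smooth f

# Toy instance: a smooth convex quadratic with minimizer at the origin.
Q = np.diag([1.0, 10.0])
f = lambda x: 0.5 * x @ Q @ x
grad = lambda x: Q @ x
gradient_descent_with_certificate(f, grad, x0=[3.0, 1.0], L=10.0, R=5.0)
```

Each observed gradient certifies, via convexity, a new lower bound on the optimal value, so the provable gap can only shrink as the run proceeds. That is the flavor, though not the substance, of a dynamic guarantee.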
For businesses, faster results directly affect how quickly data can be turned into action. If training a predictive model or solving a large planning problem shrinks from days to hours, firms can update decisions more frequently and reduce the lag between reality and the models guiding strategy.
In practical terms, Wang offers the following example. Consider insurers fitting logistic regression models to classify risk segments. Under the hood, this is a smooth convex optimization problem. A more efficient algorithm can enable more frequent retraining, so pricing and risk assessments better track day‑to‑day or even hour‑to‑hour shifts in the data. Across industries, data scientists and machine learning engineers can drop the new algorithm in wherever these core optimization subproblems appear, as the sketch below suggests.
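Here is a self-contained toy version of that use case: a logistic regression fit in NumPy on hypothetical risk-segmentation data. The data, names and step rule are illustrative inventions, not anything from the paper; the point is that the line marked as the solver step is exactly where a faster smooth convex method would slot in.

```python
import numpy as np

def fit_logistic_regression(X, y, steps=500):
    """Fit a logistic regression classifier by plain gradient descent.

    The training objective (average logistic loss) is smooth and convex,
    i.e., exactly the problem class the new algorithm targets. The line
    marked "swappable solver step" is where a faster gradient method
    could be dropped in without changing anything else.
    """
    n, d = X.shape
    w = np.zeros(d)
    # Smoothness constant of the average logistic loss: L <= ||X||_2^2 / (4n).
    L = np.linalg.norm(X, 2) ** 2 / (4 * n)
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))  # predicted probabilities
        g = X.T @ (p - y) / n             # gradient of the average log-loss
        w -= g / L                        # swappable solver step
    return w

# Hypothetical risk-segmentation data: 1,000 policies, two features.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
y = (X @ np.array([1.5, -2.0]) + rng.normal(size=1000) > 0).astype(float)
print("fitted coefficients:", fit_logistic_regression(X, y))
```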
Beyond any single algorithm, Wang emphasizes the broader lesson: how we model a decision problem shapes what counts as “best.” Treating every situation as if it were the worst case can produce safe but sluggish systems that ignore valuable information arriving over time.
Wang and his collaborators are already following up with research that extends this dynamic mindset to other classes of problems, with the long‑term goal of building optimization tools that are not just robust, but intelligently responsive to the data‑rich environments where modern businesses operate.