What Businesses Should Know About ‘Algorithms with Predictions’

02-05-2026

Many organizations now rely on predictive tools, such as machine learning models that forecast demand, customer behavior or system performance, to guide operational decisions. But predictions alone do not create value. The real business challenge is how to act on those predictions in a way that improves outcomes, even when the forecasts are imperfect.

A paper by Daniels School Assistant Professor Billy Jin and Will Ma from Columbia University, “Online Bipartite Matching with Advice: Tight Robustness-Consistency Tradeoffs for the Two-Stage Model,” explores this issue and offers practical lessons for decision-makers.

From prediction to action

The paper studies a fundamental tension in operations: the trade-off between trusting a prediction and hedging against risk. In the "Algorithms with Predictions" framework, an algorithm is evaluated on two metrics:

  1. Consistency: How well does it perform if the prediction is correct?
  2. Robustness: How well does it perform if the prediction is dead wrong?

While it is tempting to assume the best systems achieve both perfectly, the research reveals a more nuanced reality: there is an unavoidable mathematical trade-off. To get higher consistency (more upside from good predictions), you often must sacrifice some robustness (safety against bad predictions). The key innovation in this work is identifying the "efficient frontier," the optimal curve that delivers the maximum possible safety for any given level of aggressiveness.
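
To make the two metrics concrete, here is a minimal Python sketch of a toy dispatch decision. It is an illustration only, not the model or algorithm from the paper: the single "trust" parameter and the payoff numbers are invented for the example.

    # Toy model (illustrative numbers, not from the paper): one driver,
    # a predicted request A, and a possible surprise request B nearby.
    #   - Dispatching immediately to A pays 1.0 if the prediction is right,
    #     but 0.0 if B shows up instead.
    #   - Holding the driver back pays 0.5 either way: the ride is still
    #     served, just later and at lower value.

    def evaluate(trust: float) -> tuple:
        """Score a policy that dispatches immediately with probability `trust`.

        Returns (consistency, robustness), each measured as the expected
        payoff divided by the best achievable payoff in that scenario.
        """
        payoff_if_right = trust * 1.0 + (1 - trust) * 0.5   # best possible: 1.0
        payoff_if_wrong = trust * 0.0 + (1 - trust) * 0.5   # best possible: 0.5
        return payoff_if_right / 1.0, payoff_if_wrong / 0.5

    if __name__ == "__main__":
        for trust in (0.0, 0.25, 0.5, 0.75, 1.0):
            consistency, robustness = evaluate(trust)
            print(f"trust={trust:.2f}  consistency={consistency:.2f}  robustness={robustness:.2f}")

Raising the trust parameter lifts consistency but drains robustness. The paper's contribution is to characterize the best achievable version of this curve in a much richer two-stage matching model.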

Why this matters for businesses

In many business settings — such as ride-hailing, logistics, pricing, cloud computing or inventory management — firms must make decisions before all information is available. For example, a ride-sharing platform receives a request from a passenger 10 minutes away. Should it dispatch a nearby driver immediately?

If the AI forecasts "low demand," the system might dispatch the driver to secure the fare. But if that forecast is wrong and a new passenger appears right next to the driver seconds later, the decision backfires: the driver is now en route to the distant pickup, while the nearby passenger waits for a car from miles away.

Conversely, if the system is too conservative and always "saves" drivers for hypothetical nearby passengers who never show up, the fleet sits idle and revenue is lost.

The research provides a principled framework for managing this balance. It allows businesses to explicitly define their maximum tolerable risk. The algorithm then maximizes the potential upside from the AI prediction, while mathematically guaranteeing that the safety limit is never breached.
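
Continuing the toy sketch above (again an illustration, not the paper's algorithm), "defining the maximum tolerable risk" amounts to fixing a robustness floor and then searching for the most aggressive trust level that still respects it:

    def most_aggressive_trust(robustness_floor: float, steps: int = 100) -> float:
        """Largest trust level whose worst-case performance (1 - trust in the
        toy model above) still meets the required floor.  Illustrative only."""
        best = 0.0
        for i in range(steps + 1):
            trust = i / steps
            robustness = 1.0 - trust          # the toy model's worst-case guarantee
            if robustness >= robustness_floor:
                best = max(best, trust)
        return best

    if __name__ == "__main__":
        # Tolerate giving up at most 20% of the safe baseline in the worst case.
        print(most_aggressive_trust(0.80))    # prints 0.2

In the paper's setting the curve connecting robustness and consistency is far less simple, but the managerial logic is the same: fix the safety guarantee first, then let the algorithm extract as much upside from the prediction as that guarantee allows.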

Actionable insights for decision-makers

  • Don't let “perfect” predictions ruin operations
    Algorithms optimized 100% for a specific forecast become fragile when that forecast is wrong. Instead of asking "What is optimal if this is true?", design systems that perform well if the forecast is right, but don't crash if it's wrong.
  • Adopt a portfolio approach to algorithm design
    Decision-making isn't binary. Like a financial portfolio balancing stocks and bonds, operational algorithms should balance aggressive predictions with conservative safeguards.
  • Leverage the power of marginal safety
    The trade-off between risk and reward is non-linear. Adding just a tiny amount of robustness to an aggressive system can drastically reduce downside risk with negligible cost in performance. Avoid the extremes of blind trust or total caution.
  • Balance with real-world validation: a lesson from Chicago
    In simulations using Chicago ride-sharing data, a balanced algorithm outperformed both purely aggressive (prediction-reliant) and purely conservative (robustness-focused) approaches. Avoiding massive failures during unexpected demand shifts often yields better long-term results than optimizing solely for the best-case scenario.

Practical takeaways

For businesses, the competitive advantage of AI lies not just in better forecasting, but also in better operationalization. Organizations that integrate predictive insights into structured decision processes will outperform those that treat AI as a standalone tool.

The future of business intelligence is not just about predicting what will happen. It is about building systems that can intelligently integrate these predictions into decisions, improving outcomes even if the predictions are imperfect.