Shaping Industries Through Automation and Augmentation

Published on 04-08-2025

How might artificial intelligence (AI) automate some jobs, and how might it augment others?

A research scientist at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of the MIT FutureTech Research Project addressed that question and many more when he visited Purdue this January.

In his talk, titled "Which Tasks are Cost-Effective to Automate with Computer Vision," Neil Thompson unpacked his lab's research on AI exposure and compared forecasts from recent Nobel laureate Daron Acemoglu, Goldman Sachs, and the Bank of England, which differ dramatically on the scale of AI job disruption: Acemoglu predicted 5% of jobs would be replaced, Goldman Sachs said 25%, and the Bank of England claimed 50%.

To evaluate those dramatic differences, Thompson's lab studied AI exposure across industries, focusing on the forms of AI most likely to automate jobs versus those most likely to augment them. His first point: just because we can imagine AI doing a task doesn't mean it will happen.

"The traditional way that people have looked at AI job displacement is very broad. If you say, 'Could I imagine AI doing this task?' you get a very broad sweep of what AI could do. What we say is that there are lots of places in society where you could imagine no more cars and everyone having planes or hovercraft or something like that. That doesn't mean we have those things, right? We have lots of trade-offs in society," Thompson said.

His lab examined the cost-effectiveness of AI across jobs. Researchers pulled in the tasks that make up jobs listed by the Bureau of Labor Statistics and predicted which of those tasks could be automated. Since automation requires a high upfront investment, the lab also factored in business size, noting that a business such as a bakery might automate as it scales up, while small businesses will not be able to afford robotics or visual AI systems. For his talk, Thompson focused on computer vision because it has a longer track record of implementation data.

Not every job that AI touches will automatically be replaced, the researchers found. Which jobs get automated will come down to a cost-benefit analysis: how precise must the system be, and how much does it cost to build and run a system that replicates the human's tasks compared with paying humans? If automation doesn't increase productivity when it replaces humans, the job is less likely to be automated. The laws that guide these decisions are scaling laws.
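To make that cost-benefit test concrete, here is a minimal sketch. It is not Thompson's actual model; the figures, function name, and parameters are hypothetical, chosen only to illustrate why the same system can pay off for a large firm but not a small one:

    # Minimal sketch of the cost-benefit test described above.
    # All figures and names are hypothetical, for illustration only.
    def automation_is_cost_effective(
        upfront_cost: float,      # building and deploying the vision system
        annual_run_cost: float,   # inference, maintenance, monitoring
        amortization_years: int,  # years over which the upfront cost is spread
        annual_wage_bill: float,  # wages for the human work being replaced
    ) -> bool:
        """Return True if automating the task is cheaper than paying humans."""
        annual_system_cost = upfront_cost / amortization_years + annual_run_cost
        return annual_system_cost < annual_wage_bill

    # The same system, amortized the same way, clears the bar only where
    # the displaced wage bill is large enough.
    print(automation_is_cost_effective(500_000, 50_000, 5, 300_000))  # True: large firm
    print(automation_is_cost_effective(500_000, 50_000, 5, 40_000))   # False: small bakery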

Scaling laws say that as models get larger (more parameters) and are trained on more data, their performance improves in a predictable, power-law fashion.
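One common way to write such a law, borrowed from the broader scaling-law literature rather than from the talk itself, expresses a model's loss L as a function of parameter count N and training data size D:

    L(N, D) = E + A / N^\alpha + B / D^\beta

Here E is the irreducible error and A, B, \alpha, and \beta are fitted constants. Loss falls smoothly as N or D grows, but because the relationship is a power law, each further fixed-size gain requires multiplying the compute spent.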

These scaling laws have important political and economic implications. They drive the growing influence of industry in AI research, since large tech companies can pour more money into building bigger models and datasets, giving them an advantage.

The rapid performance gains enabled by scaling laws mean AI capabilities are advancing quickly, and that pace is what fuels concerns about job automation and disruption to the workforce.

While it's true that exponential decreases in cost help achieve linear increases in performance, Thompson's research team found that AI adoption is likely to happen unevenly: larger firms and industries will be able to afford the upfront investment more easily.

Thompson’s research shows that scaling laws in deep learning are a key driver of the rapid progress in AI capabilities but also contribute to the uneven economic and political impacts as AI is adopted across different industries and firm sizes. Understanding these scaling dynamics is crucial for anticipating and managing the societal effects of AI. 

Other tools from his lab include an analysis of which algorithms are improving fastest, and thus delivering the greatest productivity gains to society, and an AI risk repository. The repository helps organizations understand and manage the risks of adopting AI systems. It is built on a systematic literature review that categorizes the different types of AI risk, such as bias, disinformation, and political risks. The repository also includes incident tracking: news reports about real-world AI incidents are categorized against the risk taxonomy, allowing users to see which types of AI risk are emerging and how they evolve over time.
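In outline, that incident-tracking workflow amounts to tagging reports against a fixed taxonomy and counting the tags. The sketch below is purely illustrative; the category names and data layout are hypothetical and are not the repository's actual schema:

    from collections import Counter

    # Hypothetical slice of a risk taxonomy; the repository's real categories
    # come from its systematic literature review.
    TAXONOMY = {"bias", "disinformation", "political"}

    # Incident reports tagged against the taxonomy (illustrative data).
    incidents = [
        {"headline": "Chatbot spreads false election claims", "category": "disinformation"},
        {"headline": "Hiring model favors one demographic", "category": "bias"},
        {"headline": "Deepfake video targets a campaign", "category": "political"},
        {"headline": "Generated article fabricates quotes", "category": "disinformation"},
    ]

    # Trend view: count incidents per risk category.
    trend = Counter(i["category"] for i in incidents if i["category"] in TAXONOMY)
    for category, count in trend.most_common():
        print(f"{category}: {count}")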

In short, the AI risk repository, developed by Thompson's lab, offers a comprehensive and up-to-date catalog of the risks associated with AI deployment, supporting organizations' AI adoption and risk-management efforts.