The Dual-Edged Sword of AI in Innovation

07-21-2025

As we rapidly integrate generative AI into scientific and corporate innovation, are we reckoning with its philosophical and ethical implications? Do we compartmentalize, telling ourselves we'll address them later, or that other experts will do the thinking and we'll read up on it? How, and when, will we expose ourselves to divergent voices to ensure we stay vigilant about these changes?

Those questions suggest why panels like the one the Daniels School of Business hosted at the 2025 Cornerstone for Business Conference bring high value to business and academic leaders. Purdue brought together Eamon Duede, assistant professor of philosophy with a joint appointment in Purdue's Data Science & Learning Division; Mohammad Rahman, Daniels School Chair in Management and professor of management; Kasie Roberson, clinical assistant professor in business communication; Erika Gilmore, vice president of HR – global learning and development at Eli Lilly; and Roy Dejoie, clinical professor of management, who moderated the discussion.

Duede noted that as a technology that augments human capacities, AI holds transformative potential. But, like all general-purpose technologies, it risks eroding the very skills it seeks to enhance. He reminded listeners of history’s cautionary tales: Plato warned that writing would weaken memory — in the era of oral tradition, entire poems the length of The Odyssey were memorized by members of a community — while the advent of mechanical computation diminished our innate numerical intuition. Today, AI’s capacity to automate tasks as fundamental as writing and problem-solving invites similar concerns.

AI operates as a “prosthetic” for human cognition, accelerating productivity and creativity. Gilmore noted that Eli Lilly employees use tools like Microsoft Copilot to streamline workflows, brainstorm ideas and tackle complex challenges. Yet every prosthetic requires a trade-off. Just as the printing press reshaped oral traditions, AI risks dulling critical human faculties: the ability to think deeply, question assumptions and synthesize novel ideas. When AI drafts emails or generates code, it relieves cognitive burden — but also distances users from the intellectual rigor that fuels true innovation.

In scientific research, this duality is stark. While AI can model protein structures and analyze datasets faster than any human, overreliance risks creating a generation of researchers who lack the intuition to challenge algorithmic outputs. As Duede noted, technologies like the telescope transformed science by expanding observational capabilities, but also demanded new frameworks for validating truth. AI’s “hallucinations” and biases necessitate similar vigilance.

The ethical integration of AI hinges on two pillars: education and intentionality. At Purdue, Roberson noted, courses like Strategic Business Writing teach students to use AI as a collaborator, not a crutch. By requiring drafts, prompts and reflections, educators cultivate "human-first" skills: critical thinking, emotional intelligence and ethical judgment. Gilmore noted this approach mirrors Eli Lilly's cultural shift toward "reasoned risk-taking," where AI adoption is paired with robust review processes and leadership training.

Organizations must also confront the incentive structures AI introduces. When productivity metrics prioritize speed over originality, AI risks homogenizing innovation. Conversely, when used to amplify human creativity, as when Gilmore used AI to generate "wacky" ideas for leadership programs, it can become a catalyst for breakthroughs. The challenge lies in designing systems that reward discernment, not just efficiency.

“What we do want people to do is not go on autopilot, not lose pieces of themselves that are so important. Instead, [we want them to] really be intentional about making choices of when to use AI and when not to,” says Gilmore.

AI's role in innovation is inevitable, Rahman noted, drawing an analogy to steam engines; its trajectory is not. As educators, Rahman, Roberson and Dejoie know they must equip students to navigate AI's limitations while preserving the curiosity and skepticism that drive discovery. As leaders, they aim to foster cultures where AI augments, rather than replaces, human ingenuity. This requires humility: recognizing that every technological leap carries unseen costs, and that the most profound innovations often emerge from the friction between human and machine.

In words attributed to Socrates, "Wisdom begins in wonder." AI may answer questions, but it is our responsibility to ensure it never stifles the urge to ask them.