Sun. Mar 15th, 2026

J.P. Morgan 2026: How MedTech Leaders Are De-Risking Product Development


AI Becomes Infrastructure Rather Than Differentiator

Artificial intelligence dominated many of the conversations at JPM 2026, even as those conversations made clear that AI no longer dominates product development in the way it once did, a shift that aligns with our takeaways from CES 2026. The focus has moved from what AI can do in theory to where it is already delivering tangible return on investment. This mirrors what we are seeing across the sector more broadly, where recently announced initiatives, such as NVIDIA and Eli Lilly’s co‑innovation AI lab, signal a move towards AI as a core capability, embedded within long‑term technology and development strategy rather than positioned as a standalone breakthrough.

Importantly, this progression has also brought greater nuance to how AI is discussed and applied. Whilst AI is now better understood and widely adopted, JPM 2026 reinforced that it should not be a default requirement for every medical device. As a result, AI is no longer the primary selling point it once was, and that shift demands an important reframing from product teams.

The objective is no longer to build AI‑branded devices, but devices that are AI‑ready. This means designing systems where intelligence is carefully considered, appropriately integrated, and able to withstand regulatory, clinical and commercial scrutiny. In this environment, AI can still differentiate products, but it is no longer the only differentiator. Since the use of AI in medical devices is often assumed, MedTech leaders must now consider how well AI is integrated into the broader system: how it supports workflows, fits within regulatory expectations and holds up in real‑world use.

This reframing changes how AI is viewed through a risk lens. Discussions at JPM 2026 reflected a clear understanding that AI can either de‑risk or amplify risk in product development, depending on how and when it is introduced. Poorly governed or ill‑defined AI can increase system complexity, extend validation effort and create additional regulatory considerations. Applied selectively and intentionally, however, AI can identify incomplete data, flag anomalies and validate inputs, reducing the risk of human error in clinical and operational use.

From a product development perspective, this places greater importance on making deliberate choices about if, when and how AI is embedded. Teams that treat AI as part of a well‑designed system, rather than a bolt‑on feature, are better positioned to build confidence with regulators, investors and end users alike. We explore this in more depth in our previous article, A comprehensive guide to developing an AI‑enabled product, which looks at how to integrate intelligence in a way that supports delivery rather than introducing new points of risk. It is also a challenge we encounter regularly at eg technology, where helping teams determine whether AI genuinely adds value is often as important as helping them design it in.
