Session 2-A: AI-Driven Personalized Medicine: Evolution 2.0


Cesar de la Fuente, PhD

Presidential Associate Professor at the University of Pennsylvania

Irene Rombel, PhD, MBA

CEO, President and Co-Founder, BioCurie

Morgan Cheatham, MD

Vice President at Bessemer Venture Partners

Moderator: Derek Alan Oldridge, MD, PhD

Assistant Professor of Pathology and Laboratory Medicine
Perelman School of Medicine at the University of Pennsylvania

Pre-panel Q&A

Question: What have been the most significant advancements brought by AI in the pharmaceutical industry so far, and what are the most highly anticipated breakthroughs that AI is expected to deliver in the future but have yet to materialize?

Answer: “AI has revolutionized antibiotic discovery. Tasks that once required years of trial‑and‑error experimentation can now be accomplished in a matter of hours on a computer, yielding pre‑clinical candidates with the potential to treat infections that are currently untreatable. Our group has also pioneered the identification of therapeutic molecules from extinct organisms and uncovered an astonishing diversity of peptide molecules across the tree of life, revealing an unrecognized branch of host immunity. Looking ahead, AI will enable us to unearth whole families of useful molecules buried in genomic and proteomic data, deliver individualized therapies by matching interventions to the molecular profile of each patient in real time, and design multifunctional therapeutics that integrate sensing, targeting, and treatment within a single, purpose‑built scaffold.” — Cesar de la Fuente, PhD

Question: What are the biggest challenges in applying AI to developing personalized medicine/treatment today, and how can researchers overcome barriers such as data integration and privacy concerns?

Answer: “The central obstacles are fragmented data and stringent privacy safeguards. Clinical records, genomic sequences, imaging studies, and lifestyle metrics are typically collected under different protocols and stored in incompatible formats, making it difficult to assemble the comprehensive, high‑quality datasets that powerful AI models require. Researchers can address these issues by converging on universal data standards that ensure interoperability, adopting privacy‑preserving techniques such as federated learning and differential privacy so that algorithms can learn from sensitive information without exposing it, and working with regulators to establish clear guidelines that balance innovation with ethical obligations.” — Cesar de la Fuente, PhD
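The federated-learning idea mentioned in the answer can be illustrated with a minimal, purely hypothetical sketch (not from the panelists): several sites each fit a simple model on their own private data and share only model parameters with a central server, which averages them FedAvg-style. The sites, data, and model here are invented for illustration.

```python
import random

def local_update(w, data, lr=0.1):
    """One pass of gradient descent for a 1-D linear model y = w*x,
    computed entirely at the client site; raw records never leave."""
    for x, y in data:
        grad = 2 * (w * x - y) * x  # d/dw of squared error
        w -= lr * grad
    return w

def federated_average(local_weights):
    """Server aggregates only the model parameters, not the data."""
    return sum(local_weights) / len(local_weights)

# Three hypothetical hospitals, each holding private (x, y) pairs
# drawn from the same underlying relationship y ≈ 3x plus noise.
random.seed(0)
sites = [
    [(x, 3 * x + random.gauss(0, 0.1))
     for x in (random.uniform(0, 1) for _ in range(20))]
    for _ in range(3)
]

w_global = 0.0
for _ in range(50):  # communication rounds
    updates = [local_update(w_global, site) for site in sites]
    w_global = federated_average(updates)

# w_global converges toward the shared slope, even though no site
# ever exposed its raw patient-level data.
```

In practice the shared parameters can additionally be perturbed with calibrated noise (differential privacy) before aggregation, which is the second technique the answer names.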