Confidence Calibration for Large Language Models by Dr. Guanchu Wang

Join us on Monday, November 17, at 2:00 PM in Manc 120! Dr. Guanchu Wang will present his research, Confidence Calibration for Large Language Models. Abstract: Despite the widespread adoption of large language models (LLMs) in daily life, their reliability remains a major concern. One […]


AI-in-the-loop for Informed Healthcare by Dr. Sriraam Natarajan

Historically, Artificial Intelligence has taken either a symbolic route, representing and reasoning about objects at a higher level, or a statistical route, learning complex models from large data. To achieve true AI in complex domains such as healthcare, it is necessary to make these different […]


Knowledge Editing in Multi-modal Foundation Models by Dr. Kaixiong Zhou

Abstract: While foundation models (FMs) have demonstrated remarkable capabilities in storing knowledge and performing reasoning across modalities, including language, vision, and structured data, their stored knowledge is often static and prone to rapid obsolescence in an evolving world. Moreover, their reliance on imperfect internal representations can […]


URECA Day: Time to Shine!

Huge congratulations to Mario, Julia, Ruicheng, and Mallory! These talented students were awarded the Wake Forest Research Fellowship and received vital support from the URECA Center. Since last summer, they have been working incredibly hard, conducting intensive research, collaborating closely, and developing truly creative ideas […]
