Doctoral Defense - Shuyang Yu


About the Event

The Department of Computer Science & Engineering

Michigan State University

Ph.D. Dissertation Defense

May 12, 2025, at 9:00 AM EST

Online - Information Available Upon Request from Vincent Mattison or Advisor


ABSTRACT

Enhancing the Robustness and Trustworthiness of Machine Learning Models in Diverse Domains

By: Shuyang Yu

Advisor: Dr. Jiayu Zhou



The rapid advancement of machine learning, particularly of over-parameterized deep neural networks (DNNs), has led to significant progress across diverse domains. While over-parameterization gives DNNs the power to capture complex mappings between input data points and target labels, in real-world settings they are inevitably exposed to unseen out-of-distribution (OoD) examples that deviate from the training distribution. This raises critical concerns about the robustness, adaptiveness, and trustworthiness of such models.

In this thesis, we propose three methods to enhance model robustness and adaptiveness under distribution shifts. 1) Robust unsupervised domain adaptation (UDA): We propose a simple, parallelizable UDA framework that efficiently transfers knowledge from corrupted source domains to target domains and is compatible with existing UDA approaches. 2) Handling long-tail knowledge in LLMs for downstream tasks: We develop a reinforcement learning-based in-context learning (ICL) strategy in which a dynamic uncertainty-aware ranking system, guided by LLM feedback and a budget controller, selects informative examples (sketched below). 3) Federated OoD detection: We introduce a privacy-preserving federated OoD synthesizer that leverages data heterogeneity to improve OoD detection across clients, allowing each client to learn from external class knowledge distributed across non-IID collaborators.
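To give a rough sense of what uncertainty-aware example selection under a budget can look like, the following minimal Python sketch ranks candidate in-context demonstrations by the predictive entropy the model reports when each demonstration is used, keeping only a fixed number of them. The scoring rule, candidate pool, and entropy values are illustrative assumptions, not the method defended in the dissertation.

```python
import math
from typing import Callable, List, Sequence, Tuple

def token_entropy(probs: Sequence[float]) -> float:
    """Shannon entropy of a predictive distribution; higher means more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def select_examples(
    candidates: List[str],
    uncertainty_fn: Callable[[str], float],
    budget: int,
) -> List[Tuple[str, float]]:
    """Rank candidate demonstrations by the uncertainty the model reports when
    each is included in the prompt, and keep the `budget` most helpful ones."""
    scored = [(c, uncertainty_fn(c)) for c in candidates]
    scored.sort(key=lambda pair: pair[1])  # lower residual uncertainty first
    return scored[:budget]

if __name__ == "__main__":
    pool = ["demo A", "demo B", "demo C"]
    # Stand-in predictive distributions the LLM might return for a query
    # when each demonstration is included in the prompt (assumed values).
    fake_dists = {
        "demo A": [0.5, 0.3, 0.2],
        "demo B": [0.9, 0.05, 0.05],
        "demo C": [0.4, 0.4, 0.2],
    }
    print(select_examples(pool, lambda c: token_entropy(fake_dists[c]), budget=2))
```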

Domain adaptation can also pose risks of unauthorized reproduction or intellectual property (IP) theft, especially for high-value models. To strengthen model trustworthiness, this thesis introduces two IP protection techniques: 1) A novel OoD-based watermarking method that does not require access to the training data, addressing the limitations of backdoor-based watermarking in privacy-sensitive scenarios; it is both sample-efficient and time-efficient while preserving model utility (a verification sketch follows below). 2) A federated learning (FL) watermarking scheme that enables both ownership verification and leakage tracing, shifting FL models from anonymity to accountability.
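For intuition only, the sketch below shows the generic verification step shared by backdoor- and OoD-style watermarking: the owner checks how often a suspect model reproduces the secret labels embedded on trigger inputs and claims ownership if the match rate clears a threshold. The trigger set, decision threshold, and stand-in model are assumptions for illustration, not the specific scheme proposed in this work.

```python
from typing import Callable, List, Sequence

def watermark_match_rate(
    model: Callable[[Sequence[float]], int],   # suspect model: input -> predicted label
    triggers: List[Sequence[float]],           # secret watermark (trigger) inputs
    target_labels: List[int],                  # labels the owner embedded for the triggers
) -> float:
    """Fraction of trigger inputs on which the suspect model reproduces the
    owner's embedded labels."""
    hits = sum(model(x) == y for x, y in zip(triggers, target_labels))
    return hits / len(triggers)

def is_watermarked(match_rate: float, threshold: float = 0.9) -> bool:
    """Assumed decision rule: report the watermark as present only if the match
    rate clears a pre-registered threshold (0.9 is an arbitrary example value)."""
    return match_rate >= threshold

if __name__ == "__main__":
    constant_model = lambda x: 1                       # stand-in suspect model
    rate = watermark_match_rate(constant_model, [[0.0], [1.0]], [1, 1])
    print(rate, is_watermarked(rate))                  # 1.0 True
```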

Tags

Doctoral Defenses

Date

Monday, May 12, 2025

Time

9:00 AM

Location

Online

Organizer

Shuyang Yu