(1) Recent years have witnessed the rise of foundation models trained on enormous datasets.
It is worth exploring how foundation models open up new research avenues: robustifying a wide range of neural models,
explaining black-box model decisions, and innovating adversarial mechanisms by encoding rich multimodal information into adversarial feature representations.
(2) The rapid growth of dataset scale and optimization complexity makes it hard to democratize trustworthy CV to the
broader community (e.g., non-experts). I see great value in distilling pivotal knowledge
from large-scale datasets under multiple forms of supervision, shedding light on how to efficiently improve model reliability and
generalization.
(3) Generative intelligence, especially recent advances in diffusion models, has triggered waves of revolution in the
multimedia industry and empowered the deep learning community across various downstream tasks. This potential motivates
me to investigate the intrinsic expressivity, sample efficiency, and controllability of generative models, providing
constructive insights into robust, fast, and responsible data synthesis.