I am currently a Ph.D. student in Computer Science at the University of Hawai‘i at Mānoa. My research focuses on computer vision and healthcare AI, with an emphasis on how foundation models can support real-world diagnostic tasks such as early screening for autism spectrum disorder (ASD).
My journey in computer vision began with facial expression recognition, where I learned that subtle muscle movements can be captured effectively by well-designed feature extractors, even with relatively small labeled datasets. Collecting and analyzing my own video data taught me the critical role of temporal information: meaningful patterns often emerge not from isolated images but from the dynamics of movement over time.
As I moved deeper into research, I became increasingly convinced that foundation models represent the next frontier for clinical computer vision. By leveraging large-scale unlabeled image, text, and video datasets, these models can significantly reduce the burden of manual annotation. In my work, I have explored self-supervised learning methods, including contrastive and generative approaches, to build representations that capture both fine-grained textures and broader scene dynamics. Specifically, I aim to integrate pose estimation, attention mechanisms, and action recognition into a unified framework that offers interpretable visual summaries for clinicians working on ASD detection.
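To make the contrastive approach mentioned above concrete, here is a minimal sketch of an InfoNCE-style loss over a batch of paired embeddings. This is an illustration of the general technique, not code from my pipeline; the function name, shapes, and temperature value are assumptions for the sketch.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE) loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays where row i of z1 and row i of z2 are
    embeddings of two augmented views of the same sample. Row i's
    positive is row i of the other view; all other rows act as negatives.
    (Illustrative sketch, not production code.)
    """
    # L2-normalize so the dot product becomes cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    # Log-softmax over each row; the diagonal holds the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Minimizing this loss pulls the two views of each sample together while pushing apart views of different samples, which is what lets representations of texture and motion emerge without manual labels.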
Beyond research, I am committed to teaching and service. As a teaching assistant, I design lab exercises that emphasize reproducibility and clear evaluation. I lead weekly sessions where students debug pipelines, analyze model outputs, and engage in peer review to improve both code quality and scientific communication. I also mentor undergraduates on open-source projects, guiding them through data preprocessing, annotation tool design, and experimental scripting, fostering a collaborative and inclusive research environment.
Looking forward, my goal is to continue building systems that make computer vision models more adaptable and trustworthy in sensitive fields like healthcare. I believe that the combination of large-scale foundation models with domain-specific fine-tuning holds great promise for improving early detection and intervention, even in real-world settings where data is scarce and patient outcomes matter most.