Complete list available on Google Scholar.
Published in: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops — The First Workshop on Short-Form Video Understanding (SVU 2025)
Authors: Yang Qian, Ali Kargarandehkordi, Yinan Sun, Parnian Azizian, Onur Cezmi Mutlu, Saimourya Surabhi, Zain Jabbar, Dennis Wall, Peter Washington, Huaijin Chen
PDF: View on CVF
Introduces the Hashtag2Action (H2A) pipeline, which curates 283K short-form video clips spanning 386 actions using adaptive hashtag mining and vision-based filtering for self-supervised VideoMAE V2 pre-training. Achieves competitive accuracy on UCF101, HMDB51, Kinetics-400, and SSv2 while using only 20% of the original pre-training data.
Citation: Qian Y, Kargarandehkordi A, Sun Y, Azizian P, Mutlu O C, Surabhi S, Jabbar Z, Wall D P, Washington P, Chen H. (2025). Hashtag2Action: Data Engineering and Self-Supervised Pre-Training for Action Recognition in Short-Form Videos. ICCV Workshops (SVU 2025), Honolulu, Hawai‘i.
Published in: ASME International Mechanical Engineering Congress and Exposition (IMECE 2024)
Authors: Yang Qian, Peter Washington, Tarun K. Podder, Bardia Konh
Proposes a two-stage optimization framework combining linear programming and deep reinforcement learning (DDPG) to jointly select dwell positions and dwell times for HDR prostate brachytherapy, achieving a better dosimetric balance between target coverage and protection of organs at risk.
Citation: Qian Y, Washington P, Podder T K, Konh B. (2024). A Linear Programming and Deep Reinforcement Learning Framework to Choose Dwell Positions and Dwell Time in High-Dose-Rate Prostate Brachytherapy Using Curvilinear Catheters. ASME IMECE 2024.
Published in: arXiv preprint arXiv:2303.10741
Authors: Yang Qian, Ali Kargarandehkordi, Onur Cezmi Mutlu, Saimourya Surabhi, Mohammadmahdi Honarmand, Dennis P. Wall, Peter Washington
PDF: View on arXiv
Develops vision-based and multimodal deep models to estimate continuous emotion reaction intensity for the Hume-Reaction dataset, achieving a Pearson correlation of 0.408 and advancing fine-grained affective computing beyond discrete emotion labels.
Citation: Qian Y, Kargarandehkordi A, Mutlu O C, Surabhi S, Honarmand M, Wall D P, Washington P. (2023). Computer Vision Estimation of Emotion Reaction Intensity in the Wild. arXiv preprint arXiv:2303.10741.
Published by: University of Hawai‘i at Mānoa (MS Thesis)
Author: Yang Qian (2023)
Master’s thesis investigating personalized multimodal transformer architectures for automatic emotion recognition and reaction intensity quantification in clinical settings. Approved as a Plan A thesis for the M.S. in Computer Science program.