I am currently a first-year Ph.D. student at the Human-Computer Interaction Institute at Carnegie Mellon University, working with Dr. Jeff Bigham and Dr. Amy Pavel. I am also an Adobe Research Ph.D. Fellow.
Previously, I completed my B.S. in Computer Science at National Taiwan University, where I worked with Dr. Mike Chen and Dr. Lung-Pan Cheng.
My research lies at the intersection of inclusive and immersive experiences, with a recent focus on building systems that facilitate accessible content consumption and creation. I have also worked on computational interaction, virtual and augmented reality, and haptics and perception.
An authoring system that visualizes real-time and post-hoc accessibility feedback to help speakers deliver more accessible presentations.
Full paper accepted to ACM CHI 2021, Yokohama, Japan
A novel approach that introduces tempo variation into strength training, enabling the design of more entertaining strength-training exergames.
Full paper presented at ACM MobileHCI 2020, Oldenburg, Germany
An unobtrusive head-mounted vibrotactile stimulation technique that reduces VR sickness and improves realism during VR navigation.
Full paper presented at ACM CHI 2020, Hawaii, USA
A recommendation pipeline that suggests personalized, optimal touchscreen accessibility settings based on an individual's touchscreen behavior.
Full paper presented at ACM CHI 2019, Glasgow, Scotland
A head-mounted haptic device that applies alternating haptic cues synchronized to users’ footsteps in virtual reality.
Full paper presented at IEEE VR 2019, Osaka, Japan
A multiword rapid-serial-visual-presentation (RSVP) AR interface that lets users read virtual content with their peripheral vision while observing the physical environment through their central vision.
Full paper presented at IEEE VR 2019, Osaka, Japan
A real-time speech-recognition AR interface that aims to provide desirable caption visualization for deaf and hard-of-hearing (DHH) users during group conversations with hearing people.
Full paper presented at ACM CHI 2018, Montréal, Canada
A novel selection technique that helps users with motor impairments select desktop targets by combining gaze pointing with simple touch swipe gestures for tagging and triggering.
Collaborative project with CSE student Tony Tung and Prof. Jacob Wobbrock at the University of Washington