


We conceptualized a way to break past screen borders by using projection as the interface medium, so the system can be used anywhere in the room. Combined with voice input, users can control smart home applications without any physical contact, expanding where and how the system is used.
In this project, we focused on designing the conversational AI and the gesture-voice input methods. My main contributions included gesture research, prototyping animations, and designing the AI visualizations.
We explored how gesture and voice could replace traditional touch-based interaction in future home systems. Instead of tapping screens, users could simply reach toward projected interfaces or give a verbal command, allowing the system to adapt interface size and position dynamically.
Our prototype, built with projection and motion detection, mimicked how these spatial interactions might feel in a real home environment.
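To make the "reach toward a projected interface" behavior concrete, here is a minimal sketch of how a projected panel might follow a tracked hand and scale with the user's distance. The coordinate system, easing factor, and thresholds are illustrative assumptions, not the project's actual implementation.

```python
from dataclasses import dataclass

# Hypothetical sketch: adapt a projected panel's size and position to a
# tracked hand. Names and thresholds are illustrative, not from the project.

@dataclass
class Panel:
    x: float      # panel center, normalized room coordinates (0..1)
    y: float
    scale: float  # 1.0 = default projected size

def adapt_panel(panel: Panel, hand_x: float, hand_y: float,
                hand_distance_m: float) -> Panel:
    """Move the panel toward the user's hand and grow it with distance,
    so the interface stays legible from across the room."""
    # Ease the panel toward the hand rather than snapping to it.
    x = panel.x + 0.3 * (hand_x - panel.x)
    y = panel.y + 0.3 * (hand_y - panel.y)
    # Farther away -> larger projection; clamp to a sensible range.
    scale = min(max(hand_distance_m / 1.5, 0.8), 2.5)
    return Panel(x, y, scale)

# Example: a user reaches toward the panel from about 3 m away.
panel = adapt_panel(Panel(0.5, 0.5, 1.0), hand_x=0.7, hand_y=0.4,
                    hand_distance_m=3.0)
print(panel)  # Panel(x≈0.56, y≈0.47, scale=2.0)
```

In the actual prototype, the hand position came from motion detection rather than the hard-coded values used here for demonstration.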
AI visualization states: Idle, Speaking, Generating, and Loading.
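As a rough illustration of how these four visual states could be wired together, the sketch below models them as a small state machine that an animation layer might subscribe to. The event names and transitions are assumptions for illustration, not the project's actual logic.

```python
from enum import Enum, auto

# Illustrative sketch of the assistant's visual states named above;
# the events and transitions are assumed for the example.

class AIState(Enum):
    IDLE = auto()
    SPEAKING = auto()
    GENERATING = auto()
    LOADING = auto()

# (event, current state) -> next state the animation layer renders
TRANSITIONS = {
    ("wake_word",      AIState.IDLE):       AIState.LOADING,
    ("request_parsed", AIState.LOADING):    AIState.GENERATING,
    ("response_ready", AIState.GENERATING): AIState.SPEAKING,
    ("speech_done",    AIState.SPEAKING):   AIState.IDLE,
}

def step(state: AIState, event: str) -> AIState:
    """Advance the visualization state; unknown events leave it unchanged."""
    return TRANSITIONS.get((event, state), state)

# Walk through one full voice interaction.
state = AIState.IDLE
for event in ["wake_word", "request_parsed", "response_ready", "speech_done"]:
    state = step(state, event)
    print(event, "->", state.name)
```

Keeping the states explicit like this lets each one drive a distinct animation while guaranteeing the visualization never shows two states at once.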