It was a great pleasure to attend the ACM UIST 2015 conference in Charlotte, NC for the first time. First, let me explain my initial motivation for UIST (skip this if you find it boring 🙂 I just keep it here as a memory).
I first learned about HCI when I was conducting a student research project, EyeControl, in my sophomore year, 2010. Unfortunately, I knew nothing about SIGCHI and read too few papers at that time; nobody had led me to the field. When I was interviewed by Xiang Cao at MSRA in 2012, I realized that this was the research field I would like to devote my life to. It was Xiang who taught me to read broadly and deeply, to brainstorm, and to conduct solid research. I did not choose Xiang Cao only because I wanted to do more technical things and learn more as an undergraduate, so I chose Zhiwei Li and Rui Cai instead. Both of them are great teachers, coders, and researchers, but I really want to bridge the gap between research and deployment with my own contribution. I think ACM UIST does the best job of connecting research to real-world deployment.
Sadly, I made two mistakes this year. One was submitting to ISMAR instead of UIST; the other was submitting a SIGGRAPH Asia full paper instead of a technical brief. Indeed, my work was not yet worth a 10-page contribution, but I have made up my mind to turn it into one for next year's UIST.
In this article, I would like to summarize UIST 2015 from the AR / VR / CDI (cross-device interaction) perspective rather than the tangible / sensing / fabrication one, though I am amazed by Harrison's students' work.
Intro
UIST is a symposium co-sponsored by ACM SIGGRAPH and SIGCHI and is considered the top conference on technical HCI. Submissions are due around mid-April every year, and the symposium is usually held in November.
Last year there were 1,000+ attendees (Hawaii), but this year we had only 346 (is Charlotte not attractive enough? :-). The acceptance rate was ~23%. The conference covers almost everything in Varshney's 828V class, but in recent years it has focused more on VR & AR, 3D printing (fabrication), and sensing.
UIST is about novel inventions.
Large-screen / Cross-device Interaction
Waterloo's Gunslinger: Subtle Arms-down Mid-air Interaction is an interesting paper by Liu et al. and Vogel that mounts a Leap Motion at the waist. Crazy idea, huh? Xuetong tried putting one on the head for gesture tracking but never thought of this idea; me neither. The main contribution of the paper is showing how the Gunslinger form factor enables an interaction language that is equivalent, coherent, and compatible with large-display touch input.
GravitySpot: Guiding Users in Front of Public Displays Using On-Screen Visual Cues, by Alt, Buschek, and colleagues, investigates six visual cues as ways of guiding users to the right position in front of public displays.
Pietroszek and Lank's Tiltcasting: 3D Interaction on Large Displays using a Mobile Device was something I had been thinking of doing for large displays, but they did it in great style. They use a metaphor in which users interact within a 2D plane that is 'cast' from their phone into 3D space.
Webstrates: Shareable Dynamic Media won one of this year's three Best Paper awards. The software works really well!
So, indeed, cross-device interaction is getting more and more popular. Next, what about Gaze + Pen + Gesture + Large Screen?
VR / AR
HPI's Impacto: Simulating Physical Impact by Combining Tactile Stimulation with Electrical Muscle Stimulation was the absolute winner in VR this year in my mind.
MSR Dr. Benko's FoveAR: Combining an Optically See-Through Near-Eye Display with Spatial Augmented Reality Projections extends the idea of IllumiRoom to AR headsets, yielding another great, novel invention!
HPI's other VR work, TurkDeck: Physical Virtual Reality Based on People, is crazy.
Columbia's Virtual Replicas for Remote Assistance in Virtual and Augmented Reality introduces a new interface for remote assistance. I served as a reviewer for their poster at ISMAR.
Brett Jones' Projectibles: Optimizing Surface Color For Projection delivers another cool application with the ShadowLamp. Solid work in projection mapping!
Harvard and Disney's Joint 5D Pen Input for Light Field Displays has a really cool demo. Unfortunately, the contribution of this paper lies in the interactivity part rather than the cool screen part. But I love interactive light fields!
The best demo this year was Scope+: A Stereoscopic Video See-Through Augmented Reality Microscope. This is a practical invention for educational purposes!
From my perspective, haptics + VR is getting more and more popular. Spatial AR + interactivity (novel input) might be the next hot topic.
Wearables
The paper CyclopsRing: Enabling Whole-Hand and Context-Aware Interactions Through a Fisheye Ring had a really clever form factor. I was ashamed that I never thought of this form factor when I was working on the HandSight project.
Orbits: Gaze Interaction for Smart Watches using Smooth Pursuit Eye Movements was another Best Paper winner.
Haojian Jin's Corona: Positioning Adjacent Device with Asymmetric Bluetooth Low Energy RSSI Distributions and Tracko: Ad-hoc Mobile 3D Tracking Using Bluetooth Low Energy and Inaudible Signals for Cross-Device Interaction introduced fancy ways to do cross-device interaction with BLE. When I was doing research on BLE, I was ashamed again that I did not come up with ideas similar to Jin's 🙁
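For some background on why BLE positioning is hard: the textbook way to turn an RSSI reading into a distance is the log-distance path-loss model, which is very noisy in practice; that noise is exactly what Corona's asymmetric-distribution idea and Tracko's inaudible-signal idea work around. The sketch below is only a generic illustration with assumed calibration constants (txPower, n), not the algorithm from either paper:

```typescript
// Generic log-distance path-loss sketch for BLE RSSI ranging.
// txPower is the assumed RSSI at 1 m; n is the assumed path-loss exponent
// (~2 in free space, higher indoors). Neither constant comes from the papers.
function estimateDistanceMeters(rssi: number, txPower = -59, n = 2.0): number {
  // d = 10 ^ ((txPower - rssi) / (10 * n))
  return Math.pow(10, (txPower - rssi) / (10 * n));
}

// e.g. a reading of -71 dBm with these constants gives roughly 4 m.
console.log(estimateDistanceMeters(-71).toFixed(1));
```

Single-sample estimates like this can be off by meters, which is why both papers lean on richer signals than one raw RSSI value.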
Next year, UIST 2016 will be held in Tokyo, Japan, hosted by the I3D & UI god Dr. Igarashi!
One last note about our VRSurus project: it was great fun! Thanks to my collaborator Liang He. Enjoy the videos:
It was entirely based on WebGL and WebVR techniques!
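For the curious, here is a minimal sketch of the kind of WebGL render loop an app like this is built on. It is my own illustration under assumptions (a single &lt;canvas&gt; on the page, WebGL available), not VRSurus's actual source:

```typescript
// Minimal WebGL bootstrap: grab the page's canvas, get a GL context,
// and clear it every animation frame.
const canvas = document.querySelector('canvas') as HTMLCanvasElement; // assumes a <canvas> exists
const gl = canvas.getContext('webgl')!; // assumes WebGL support

function render(): void {
  gl.viewport(0, 0, canvas.width, canvas.height); // map clip space to the canvas
  gl.clearColor(0.1, 0.1, 0.15, 1.0);             // dark background
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
  // ...scene drawing would go here; a WebVR app renders the scene twice,
  // once per eye, using view/projection matrices from the headset...
  requestAnimationFrame(render); // schedule the next frame
}
requestAnimationFrame(render);
```

WebVR then layers headset pose tracking and per-eye rendering on top of a loop like this.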