Review of ColourID: Improving Colour Identification for People with Impaired Colour Vision.

This paper introduces three new techniques to help people with impaired color vision (ICV) identify colors. Unlike traditional color identification software, which suffers in speed, precision, and generality across ICV users, the proposed techniques visualize all colors in the full scene quickly, accurately, and generally. The three techniques visualize colors by name, by hue direction, and by highlighting. In addition, the authors conducted two solid user studies to evaluate the three techniques against unaided vision and an existing Android tool, on both desktop and mobile platforms.

There are three lessons that I learnt from this paper.

First, the more effort you put into research and observation, the more ideas you have. It is widely acknowledged that color identification is a solved problem in computer vision. However, when you actually try existing applications, you realize they are not that convenient: you have to frame the camera carefully to identify the target color patch. So why not build a new tool that presents the big picture of all colors at once? I think these ideas originate from careful observation and solid research.

Second, the more prototypes you build, the more interesting your story is. It would have been a boring paper if the authors had only presented the final highlighting technique, ColorPopper; besides, that idea has already been explored by the Colorblind Vision app on Android. What makes the paper interesting is its iterative development. Even ColorMeter, a work-in-progress visualization at first glance, yields useful information in the user studies.

Finally, think deeply about experimental design and about the results, whether good or bad. At first thought, color identification sounds like a boring and simple user study. However, the authors took participants with various ICV conditions into consideration, so both the test and the color palette were designed with care. As for the results, the authors drew many insights from both the good and the bad outcomes, which can inspire future researchers.

As for weaknesses, the limited discussion of future work is one con; the lack of a video and open-sourced software is another. As a more technology-oriented researcher, I would argue that, apart from the visualization part, the described identification techniques, such as the LUV color space, dictionary mapping, and GPU processing, are not a contribution. In real-world scenarios, what can we do for ICV users in low-luminance conditions? What can we do when the scene is moving? How can colors be augmented to reduce users' reaction time? What differences arise among mobile devices, wearable hand-worn devices, and glasses?
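To make the "non-contribution" claim concrete: the kind of LUV-plus-dictionary identification the paper relies on is indeed standard machinery. The following is a minimal sketch, not the paper's actual implementation; the six-entry palette is an illustrative assumption (the real system would use a much larger color dictionary and GPU batch processing).

```python
import math

# Hypothetical sketch of dictionary-based color naming: convert an sRGB
# pixel to CIE L*u*v*, then return the nearest named palette entry.
# The palette below is illustrative, not the paper's actual dictionary.

_NAMED = {
    "red":    (255, 0, 0),
    "green":  (0, 128, 0),
    "blue":   (0, 0, 255),
    "yellow": (255, 255, 0),
    "black":  (0, 0, 0),
    "white":  (255, 255, 255),
}

def _srgb_to_luv(rgb):
    """Convert an 8-bit sRGB triple to CIE L*u*v* (D65 white point)."""
    # sRGB gamma -> linear RGB
    lin = [c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
           for c in (v / 255.0 for v in rgb)]
    # linear RGB -> CIE XYZ (sRGB/D65 matrix)
    x = 0.4124 * lin[0] + 0.3576 * lin[1] + 0.1805 * lin[2]
    y = 0.2126 * lin[0] + 0.7152 * lin[1] + 0.0722 * lin[2]
    z = 0.0193 * lin[0] + 0.1192 * lin[1] + 0.9505 * lin[2]
    denom = x + 15 * y + 3 * z
    if denom == 0:                        # pure black
        return (0.0, 0.0, 0.0)
    u_p, v_p = 4 * x / denom, 9 * y / denom
    un, vn = 0.19784, 0.46832             # D65 reference u', v'
    L = 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y
    return (L, 13 * L * (u_p - un), 13 * L * (v_p - vn))

_PALETTE = {name: _srgb_to_luv(rgb) for name, rgb in _NAMED.items()}

def identify(rgb):
    """Return the palette name nearest to `rgb` in LUV space."""
    luv = _srgb_to_luv(rgb)
    return min(_PALETTE, key=lambda n: math.dist(luv, _PALETTE[n]))
```

A per-pixel nearest-neighbor lookup like this parallelizes trivially, which is presumably why GPU processing makes it fast; the research contribution lies in how the results are visualized, not in this lookup.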

Still, lots of questions remain to be answered.