Computational Ophthalmology – Dichromat Support (Lab)

Naturalness- and information-preserving image recoloring for red–green dichromats

Zhenyang ZHU, Masahiro TOYOURA, Kentaro GO, Issei FUJISHIRO, Kenji KASHIWAGI, Xiaoyang MAO

Abstract

More than 100 million individuals around the world suffer from color vision deficiency (CVD). Image recoloring algorithms have been proposed to compensate for CVD. This study proposes a new recoloring algorithm that addresses the shortcomings of state-of-the-art methods in contrast enhancement and naturalness preservation. The recoloring task is formulated as an optimization problem, solved over colors in a simulated CVD color space, that maximizes contrast while preserving the original colors as much as possible. In addition, dominant colors are extracted for recoloring and then propagated to the whole image, so that the optimization problem can be solved at a reasonable cost independent of the image size. In the quantitative evaluation, the results of the proposed method are competitive with those of the best existing method. An evaluation involving subjects with CVD demonstrates that the proposed method outperforms the state-of-the-art method in preserving both the information and the naturalness of images.
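As a rough illustration of the pipeline described above, the following Python sketch extracts dominant colors, simulates their appearance for a red–green dichromat, and optimizes new colors under a contrast-preservation term and a naturalness term before propagating them back to the image. This is only a minimal sketch under assumed models and parameters, not the authors' implementation: the clustering method, the protanopia matrix, the energy weights, and all function names are illustrative assumptions.

    # Minimal illustrative sketch of dominant-color recoloring for dichromats.
    # Assumes a float RGB image with shape (H, W, 3) and values in [0, 1].
    import numpy as np
    from scipy.optimize import minimize
    from sklearn.cluster import KMeans

    # A commonly circulated rough linear-RGB approximation of protanopia.
    # Illustrative only; the paper's CVD simulation model may differ.
    PROTAN = np.array([[0.567, 0.433, 0.0],
                       [0.558, 0.442, 0.0],
                       [0.0,   0.242, 0.758]])

    def simulate_dichromat(rgb):
        """Project RGB colors (N x 3) into a simulated protanope color space."""
        return np.clip(rgb @ PROTAN.T, 0.0, 1.0)

    def extract_dominant_colors(image, k=8):
        """Cluster pixels into k dominant colors; optimizing only these keeps
        the cost independent of the image size, as described above."""
        pixels = image.reshape(-1, 3)
        km = KMeans(n_clusters=k, n_init=4, random_state=0).fit(pixels)
        return km.cluster_centers_, km.labels_

    def recolor_energy(x, originals, lam=0.5):
        """Energy = contrast-preservation term + lam * naturalness term."""
        cand = x.reshape(-1, 3)
        sim = simulate_dichromat(cand)
        contrast = 0.0
        for i in range(len(originals)):
            for j in range(i + 1, len(originals)):
                d_orig = np.linalg.norm(originals[i] - originals[j])
                d_sim = np.linalg.norm(sim[i] - sim[j])
                contrast += (d_orig - d_sim) ** 2   # keep pairwise contrast under CVD
        naturalness = np.sum((cand - originals) ** 2)  # stay close to original colors
        return contrast + lam * naturalness

    def recolor(image, k=8):
        centers, labels = extract_dominant_colors(image, k)
        res = minimize(recolor_energy, centers.ravel(), args=(centers,),
                       method="L-BFGS-B", bounds=[(0.0, 1.0)] * centers.size)
        new_centers = res.x.reshape(-1, 3)
        # Propagate the optimized dominant colors back to every pixel.
        return new_centers[labels].reshape(image.shape)

In practice, the choice of the weight between the two energy terms controls the trade-off between contrast enhancement and naturalness preservation that the abstract refers to.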

Links

Paper: [PDF]


Citation

Zhenyang Zhu, Masahiro Toyoura, Kentaro Go, Issei Fujishiro, Kenji Kashiwagi, Xiaoyang Mao, “Naturalness- and information-preserving image recoloring for red–green dichromats,” Signal Processing: Image Communication, Elsevier, vol. 76, pp. 68–80, 2019.

Related Publications

  1. Zhenyang Zhu, Masahiro Toyoura, Kentaro Go, Issei Fujishiro, Kenji Kashiwagi, Xiaoyang Mao, “Processing images for red–green dichromats compensation via naturalness and information-preservation considered recoloring,” The Visual Computer, Springer-Verlag, vol. 35, no. 6–8, pp. 1053–1066, 2019.

Acknowledgments

This work was supported by JSPS Grants-in-Aid for Scientific Research (Grant No. 17H00738). We would like to thank all the volunteers for evaluating our method and for their valuable comments, which contributed to its improvement. We thank Chunxiao Liu for providing some of the pictures used in our experiments. We are also grateful to all reviewers and the editor for their valuable comments.