CheekInput
Turning Your Cheek into an Input Surface by Embedded Optical Sensors on a Head-mounted Display
2017
Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas, Yuta Sugiura

[Reference]
Koki Yamashita, Takashi Kikuchi, Katsutoshi Masai, Maki Sugimoto, Bruce H. Thomas, Yuta Sugiura, CheekInput: Turning Your Cheek into an Input Surface by Embedded Optical Sensors on a Head-mounted Display, In Proceedings of the 23rd ACM Symposium on Virtual Reality Software and Technology (VRST ’17), ACM, Article 19, 8 pages, November 8-10, 2017, Gothenburg, Sweden. [DOI]

We propose a technique that turns the cheek into a touch surface for operating a head-mounted display (HMD), supporting tasks such as panning a map or switching the displayed image. Photo-reflective sensors measure the skin deformation that occurs when the front of the cheek is touched. Because the sensors are mounted directly on the HMD, the user can provide input without carrying any device other than the HMD itself. In our evaluation, directional gestures performed while seated, with the dominant hand on the same-side cheek, were recognized with 89.76% accuracy.
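
As a rough sketch of how such sensor readings might be acquired (not the authors' implementation), the following Python snippet assumes a microcontroller on the HMD streams comma-separated photo-reflective values over a serial port; the port name, baud rate, and sensor count are all hypothetical.

```python
import serial  # pyserial

NUM_SENSORS = 8          # hypothetical number of sensors on the HMD frame
PORT = "/dev/ttyUSB0"    # hypothetical serial port
BAUD = 115200            # hypothetical baud rate

def read_sensor_frame(conn: serial.Serial) -> list[int]:
    """Read one line of comma-separated sensor values.

    Each value is a raw photo-reflective reading that varies with the
    distance between the HMD frame and the cheek surface, so touching
    and pushing the cheek changes the readings.
    """
    line = conn.readline().decode("ascii", errors="ignore").strip()
    values = [int(v) for v in line.split(",") if v]
    if len(values) != NUM_SENSORS:
        raise ValueError(f"expected {NUM_SENSORS} values, got {len(values)}")
    return values

if __name__ == "__main__":
    with serial.Serial(PORT, BAUD, timeout=1.0) as conn:
        print(read_sensor_frame(conn))
```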

We propose a novel technology called “CheekInput”: a head-mounted display (HMD) that senses touch gestures by detecting skin deformation. We attached multiple photo-reflective sensors to the bottom front frame of the HMD. Since these sensors measure the distance between the frame and the cheeks, our system can detect the deformation of a cheek when its surface is touched by the fingers. The system uses a Support Vector Machine to classify four directional gestures: pushing the cheek up, down, left, or right. Combining these four directional gestures across both cheeks extends the vocabulary to 16 possible gestures. To evaluate the accuracy of gesture detection, we conducted a user study. The results revealed that CheekInput achieved 80.45% recognition accuracy when gestures were made by touching both cheeks with both hands, and 74.58% when touching both cheeks with one hand.
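
The abstract states that a Support Vector Machine classifies the four directional gestures from the sensor readings. Below is a minimal sketch of that classification step using scikit-learn with placeholder training data; the kernel choice, preprocessing, and eight-sensor feature vector are our assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# The four directional gestures named in the abstract:
# pushing the cheek up, down, left, or right.
GESTURES = ["up", "down", "left", "right"]

# X: one row per recorded sample, columns are raw photo-reflective readings.
# Random placeholders stand in for actual training data here.
rng = np.random.default_rng(0)
X = rng.normal(size=(400, 8))        # 8 sensors is an assumption
y = rng.choice(GESTURES, size=400)   # placeholder gesture labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An SVM matches the abstract; the RBF kernel and standardization
# are our assumptions.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Running the same per-cheek classifier on the left- and right-cheek sensor groups and pairing the two predicted directions is one way the 4-per-cheek gestures could combine into the 16-gesture vocabulary described above.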