Research Publication
360° Video Viewing Dataset in Head-Mounted Virtual Reality
(NOTE: Sheng-Wei Chen is also known as Kuan-Ta Chen.)

Abstract
360° videos and Head-Mounted Displays (HMDs) are getting increasingly popular. However, streaming 360° videos to HMDs is challenging. This is because only the video content in viewers' Field-of-Views (FoVs) is rendered, and thus sending complete 360° videos wastes resources, including network bandwidth, storage space, and processing power. Optimizing 360° video streaming to HMDs is, however, highly data and viewer dependent, and thus dictates real datasets. However, to the best of our knowledge, such datasets are not available in the literature. In this paper, we present our datasets of both content data (such as image saliency maps and motion maps derived from 360° videos) and sensory data (such as viewer head positions and orientations derived from HMD sensors). We put extra effort into aligning the content and sensory data using the timestamps in the raw log files. The resulting datasets can be used by researchers, engineers, and hobbyists to optimize either existing 360° video streaming applications (like rate-distortion optimization) or novel applications (like crowd-driven camera movements). We believe that our datasets will stimulate more research activities along this exciting new research direction.
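The abstract mentions aligning content and sensory data by the timestamps in the raw log files. A minimal sketch of one common way to do this is nearest-neighbor matching: for each video frame timestamp, pick the HMD sensor sample recorded closest in time. The field layout below (frame timestamps in seconds, sensor samples as `(timestamp, yaw)` pairs) is hypothetical and not the dataset's actual schema.

```python
# Hedged sketch: align HMD sensor samples to video frames by timestamp
# via nearest-neighbor matching. Column/field names are hypothetical.
from bisect import bisect_left

def align_by_timestamp(frame_times, sensor_logs):
    """For each frame timestamp, return the sensor sample whose
    timestamp is closest (sensor_logs must be sorted by time)."""
    sensor_times = [t for t, _ in sensor_logs]
    aligned = []
    for ft in frame_times:
        i = bisect_left(sensor_times, ft)
        # Consider the two samples straddling ft; keep the closer one.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sensor_logs)]
        best = min(candidates, key=lambda j: abs(sensor_times[j] - ft))
        aligned.append((ft, sensor_logs[best][1]))
    return aligned

# Example: 30 fps frame timestamps and (timestamp, yaw-in-degrees) samples.
frames = [0.000, 0.033, 0.066]
sensors = [(0.001, 10.0), (0.030, 12.5), (0.070, 15.0)]
print(align_by_timestamp(frames, sensors))
# → [(0.0, 10.0), (0.033, 12.5), (0.066, 15.0)]
```

In practice, sensor logs are sampled much faster than the video frame rate, so nearest-neighbor matching (or interpolation between the two straddling samples) yields one head orientation per frame, which is what FoV-based streaming optimizations need.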

Materials
Citation
Wen-Chih Lo, Ching-Ling Fan, Jean Lee, Chun-Ying Huang, Kuan-Ta Chen, and Cheng-Hsin Hsu, "360° Video Viewing Dataset in Head-Mounted Virtual Reality," In Proceedings of ACM MMSys 2017 (Dataset Track), Jun 2017.

BibTeX
@INPROCEEDINGS{lo17:360_video_streaming,
  AUTHOR     = {Wen-Chih Lo and Ching-Ling Fan and Jean Lee and Chun-Ying Huang and Kuan-Ta Chen and Cheng-Hsin Hsu},
  TITLE      = {360° Video Viewing Dataset in Head-Mounted Virtual Reality},
  BOOKTITLE  = {Proceedings of ACM MMSys 2017 (Dataset Track)},
  MONTH      = {Jun},
  YEAR       = {2017}
}