Abstract:
The boom of High Definition (HD) and Ultra-HD (UHD) video over the last decade has greatly expanded video viewing on diverse terminals such as TVs, tablets and smartphones. To deliver more video content to users at high quality, it is imperative to assess the visual quality of videos, and Video Quality Assessment (VQA) has become an appealing problem for researchers. In this work, we take video content into consideration in addition to media format and pixel quality. This paper uses four measures, namely picture resolution, bitrate, Spatial Information (SI) and Temporal Information (TI), to represent visual quality in four separate dimensions. A subjective database is then constructed to train a neural network capable of scoring content-aware video quality. Experiments show that our method is superior to human observers in terms of correlation with Mean Opinion Score (MOS) values.
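
The following is a minimal sketch, not the authors' code, of the pipeline the abstract describes: compute the four per-video measures (picture resolution, bitrate, SI and TI) and regress them onto MOS with a small neural network. SI and TI here follow the usual ITU-T P.910 definitions; the network size, the bitrate value and all frame/MOS data are placeholder assumptions for illustration only.

    import numpy as np
    from scipy import ndimage
    from sklearn.neural_network import MLPRegressor

    def spatial_information(frames):
        """SI: max over frames of the spatial std. dev. of the Sobel-filtered luminance."""
        vals = []
        for f in frames:
            gx = ndimage.sobel(f.astype(np.float64), axis=1)
            gy = ndimage.sobel(f.astype(np.float64), axis=0)
            vals.append(np.hypot(gx, gy).std())
        return max(vals)

    def temporal_information(frames):
        """TI: max over successive frame pairs of the spatial std. dev. of the frame difference."""
        vals = [(b.astype(np.float64) - a.astype(np.float64)).std()
                for a, b in zip(frames[:-1], frames[1:])]
        return max(vals)

    def video_features(frames, bitrate_bps):
        """Four-dimensional content-aware feature vector for one video."""
        h, w = frames[0].shape
        return [w * h, bitrate_bps,
                spatial_information(frames), temporal_information(frames)]

    # Synthetic luminance frames stand in for decoded video; a real pipeline
    # would extract luminance planes with e.g. FFmpeg or OpenCV.
    rng = np.random.default_rng(0)
    videos = [[rng.integers(0, 256, size=(270, 480), dtype=np.uint8) for _ in range(8)]
              for _ in range(20)]
    X = np.array([video_features(v, bitrate_bps=4e6) for v in videos])
    y = rng.uniform(1.0, 5.0, size=len(videos))   # placeholder MOS labels, not real subjective data

    # Small MLP standing in for the trained quality-scoring network.
    model = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=5000, random_state=0)
    model.fit(X, y)
    print(model.predict(X[:3]))

In practice the regression targets would be the MOS values collected for the subjective database, and the four features would be standardized before training; both choices are assumptions here rather than details taken from the paper.
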
Source:
PROCEEDINGS OF 2018 IEEE INTERNATIONAL CONFERENCE ON AUTOMATION, ELECTRONICS AND ELECTRICAL ENGINEERING (AUTEEE)
Year: 2018
Page: 173-176
Language: English
ESI Highly Cited Papers on the List: 0