Abstract
Evaluating the quality of experience in video streaming services requires a quality metric that works in real time and for a broad range of video types and network conditions. This means that subjective video quality assessment studies, or complex objective video quality assessment metrics, which would be best suited from the accuracy perspective, cannot be used for this task (due to their high requirements in terms of time and complexity, in addition to their lack of scalability). In this paper we propose a light-weight No Reference (NR) method that, by means of unsupervised machine learning techniques and measurements on the client side, is able to assess quality in real time, accurately, and in an adaptable and scalable manner. Our method makes use of the excellent density estimation capabilities of an unsupervised deep learning technique, the restricted Boltzmann machine, together with light-weight video features computed only on the impaired video, to provide a delta of quality degradation. We have tested our approach on two network-impaired video sets, the LIMP and ReTRiEVED video quality databases, benchmarking the results of our method against the well-known full reference metric VQM. We have obtained levels of accuracy of at least 85% in both datasets, considering all possible cases.
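The abstract only sketches the pipeline. As a minimal, hypothetical illustration of the core idea (RBM-based density estimation over light-weight, client-side video features, where a lower likelihood under the model is read as a larger quality-degradation delta), the snippet below uses scikit-learn's `BernoulliRBM`. The feature set, dimensions, and scoring rule are assumptions made for illustration and are not the authors' implementation.

```python
# Minimal sketch (not the paper's exact pipeline): fit an RBM on light-weight
# features of good-quality video frames, then score features of the impaired
# stream; a drop in pseudo-likelihood is used as a quality-degradation delta.
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.preprocessing import MinMaxScaler

# Hypothetical per-frame features (e.g. blockiness, blur, temporal activity),
# one row per frame, as would be computed on the client side.
reference_features = np.random.rand(500, 8)  # stand-in for "good" training data
impaired_features = np.random.rand(100, 8)   # stand-in for the received stream

# BernoulliRBM expects inputs scaled to [0, 1].
scaler = MinMaxScaler().fit(reference_features)
rbm = BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=50, random_state=0)
rbm.fit(scaler.transform(reference_features))

# Lower pseudo-likelihood => features look less like unimpaired video.
baseline = rbm.score_samples(scaler.transform(reference_features)).mean()
impaired = rbm.score_samples(scaler.transform(impaired_features)).mean()
degradation_delta = baseline - impaired
print(f"estimated quality-degradation delta: {degradation_delta:.3f}")
```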
| Original language | English |
|---|---|
| Pages (from-to) | 22303-22327 |
| Number of pages | 25 |
| Journal | Multimedia Tools and Applications |
| Volume | 76 |
| Issue number | 21 |
| DOIs | |
| Publication status | Published - Nov 2017 |
Keywords
- Deep learning
- No-reference video quality assessment
- Quality of experience
- Unsupervised machine learning