Relevance as a metric for evaluating machine learning algorithms

A. Kota Gopalakrishna, T. Ozcelebi, A. Liotta, J.J. Lukkien

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Academic › peer-review

7 Citations (Scopus)
1 Downloads (Pure)

Abstract

In machine learning, the choice of a learning algorithm that is suitable for the application domain is critical. The performance metric used to compare different algorithms must also reflect the concerns of users in the application domain under consideration. In this paper, we propose a novel probability-based performance metric called Relevance Score for evaluating supervised learning algorithms. We evaluate the proposed metric through empirical analysis on a dataset gathered from an intelligent lighting pilot installation. In comparison to the commonly used Classification Accuracy metric, the Relevance Score proves to be more appropriate for a certain class of applications.
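The abstract contrasts the proposed Relevance Score with the commonly used Classification Accuracy metric. The Relevance Score itself is not defined in this record, so it is not reproduced here; as context, a minimal sketch of the baseline Classification Accuracy metric (the fraction of predictions matching the true labels) is:

```python
# Baseline metric named in the abstract: Classification Accuracy,
# i.e. the fraction of samples where the prediction equals the label.
# (The paper's Relevance Score is not defined in this record and is
# therefore not reproduced here.)

def classification_accuracy(y_true, y_pred):
    """Return the fraction of correctly classified samples."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    return correct / len(y_true)

# Example: 3 of 4 predictions correct -> accuracy 0.75
print(classification_accuracy(["a", "b", "a", "c"], ["a", "b", "b", "c"]))
```

As the paper argues, a single aggregate number like this may not reflect application-specific user concerns, which is the gap the Relevance Score is designed to address.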
Original language: English
Title of host publication: Machine Learning and Data Mining (9th International Conference, MLDM 2013, New York NY, USA, July 19-25, 2013. Proceedings)
Editors: P. Perner
Place of Publication: Berlin
Publisher: Springer
Pages: 195-208
ISBN (Print): 978-3-642-39711-0
DOIs
Publication status: Published - 2013
Event: 9th International Conference on Machine Learning and Data Mining in Pattern Recognition (MLDM 2013) - New York, United States
Duration: 19 Jul 2013 - 25 Jul 2013
Conference number: 9

Publication series

Name: Lecture Notes in Computer Science
Volume: 7988
ISSN (Print): 0302-9743

Conference

Conference: 9th International Conference on Machine Learning and Data Mining in Pattern Recognition (MLDM 2013)
Abbreviated title: MLDM 2013
Country: United States
City: New York
Period: 19/07/13 - 25/07/13
