Runtime evaluation of cognitive systems for non-deterministic multiple output classification problems

Aravind Kota Gopalakrishna (Corresponding author), Tanir Ozcelebi, Johan J. Lukkien, Antonio Liotta

Research output: Contribution to journal › Journal article › Academic › peer-review

Abstract

Cognitive applications that involve complex decision making, such as smart lighting, have non-deterministic input–output relationships, i.e., more than one output may be acceptable for a given input. We refer to these as non-deterministic multiple output classification (nDMOC) problems, for which it is particularly difficult for machine learning (ML) algorithms to predict outcomes accurately. Evaluating ML algorithms on such problems with commonly used metrics such as Classification Accuracy (CA) is therefore not appropriate. For the batch setting, Relevance Score (RS), which determines how relevant a predicted output is to a given context, was proposed as a better alternative. We introduce two variants of RS to evaluate ML algorithms in an online setting. Furthermore, we evaluate the algorithms using different metrics on two datasets that have non-deterministic input–output relationships. We show that instance-based learning delivers superior RS performance, and that RS performance keeps improving as the number of observed samples grows, even after CA performance has converged to its maximum. This is a crucial result, as it illustrates that RS captures the performance of ML algorithms on nDMOC problems while CA cannot.
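The gap the abstract describes can be made concrete with a small sketch. Here we assume, purely for illustration, that each input comes with a known set of acceptable outputs; the set-membership "relevance" below is a toy stand-in, not the paper's actual Relevance Score definition.

```python
# Why Classification Accuracy (CA) undervalues predictions in nDMOC
# settings, where several outputs can be acceptable for one input.
# The acceptable-output sets below are illustrative assumptions.

def classification_accuracy(preds, labels):
    """Fraction of predictions that exactly match the single recorded label."""
    return sum(p == y for p, y in zip(preds, labels)) / len(preds)

def set_relevance(preds, acceptable):
    """Toy relevance: fraction of predictions inside the acceptable set."""
    return sum(p in ok for p, ok in zip(preds, acceptable)) / len(preds)

# Smart-lighting style example: for a given context, several light
# presets may all satisfy the user, but only one is recorded as the label.
labels     = ["dim", "bright", "dim"]
acceptable = [{"dim", "warm"}, {"bright"}, {"dim", "warm"}]
preds      = ["warm", "bright", "dim"]

print(classification_accuracy(preds, labels))  # "warm" counts as a miss
print(set_relevance(preds, acceptable))        # every prediction is acceptable
```

CA penalizes the first prediction even though the user would accept it, while the relevance-style measure does not; this is the mismatch that motivates RS over CA for nDMOC problems.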

Language: English
Pages: 1005-1016
Number of pages: 12
Journal: Future Generation Computer Systems
Volume: 100
DOI: 10.1016/j.future.2019.05.043
Status: Published - 1 Nov 2019

Fingerprint

Cognitive systems
Learning algorithms
Learning systems
Lighting
Decision making

Keywords

Classification problems
Cognitive systems
Human factors
Machine learning
Non-deterministic multiple output classification
Performance metric
Relevance score
Smart lighting
    Cite this

    @article{f4a92b841e574b0caf60279e6678e2b6,
    title = "Runtime evaluation of cognitive systems for non-deterministic multiple output classification problems",
    keywords = "Classification problems, Cognitive systems, Human factors, Machine learning, Non-deterministic multiple output classification, Performance metric, Relevance score, Smart lighting",
    author = "Gopalakrishna, {Aravind Kota} and Tanir Ozcelebi and Lukkien, {Johan J.} and Antonio Liotta",
    year = "2019",
    month = "11",
    day = "1",
    doi = "10.1016/j.future.2019.05.043",
    language = "English",
    volume = "100",
    pages = "1005--1016",
    journal = "Future Generation Computer Systems",
    issn = "0167-739X",
    publisher = "Elsevier",

    }


    TY - JOUR

    T1 - Runtime evaluation of cognitive systems for non-deterministic multiple output classification problems

    AU - Gopalakrishna,Aravind Kota

    AU - Ozcelebi,Tanir

    AU - Lukkien,Johan J.

    AU - Liotta,Antonio

    PY - 2019/11/1

    Y1 - 2019/11/1


    KW - Classification problems

    KW - Cognitive systems

    KW - Human factors

    KW - Machine learning

    KW - Non-deterministic multiple output classification

    KW - Performance metric

    KW - Relevance score

    KW - Smart lighting

    UR - http://www.scopus.com/inward/record.url?scp=85067041870&partnerID=8YFLogxK

    U2 - 10.1016/j.future.2019.05.043

    DO - 10.1016/j.future.2019.05.043

    M3 - Article

    VL - 100

    SP - 1005

    EP - 1016

    JO - Future Generation Computer Systems

    JF - Future Generation Computer Systems

    SN - 0167-739X

    ER -