TY - GEN
T1 - Encore Abstract: Presumably Correct Decision Sets
AU - Nápoles, Gonzalo
AU - Grau, Isel
AU - Jastrzębska, Agnieszka
AU - Salgueiro, Yamisleydi
PY - 2023/11
Y1 - 2023/11
AB - The paper presents presumably correct decision sets as a tool to analyze uncertainty in the form of inconsistency in decision systems. As a first step, problem instances are gathered into three regions containing weak members, borderline members, and strong members. This is accomplished by using the membership degrees of instances to their neighborhoods while neglecting their actual labels. As a second step, we derive the presumably correct and incorrect sets by contrasting the decision classes determined by a neighborhood function with the actual decision classes. We extract these sets from either the regions containing strong members or the whole universe, which defines the strict and relaxed versions of our theoretical formalism. These sets allow isolating the instances that are difficult to handle by machine learning algorithms, as they are responsible for inconsistent patterns. Simulations using synthetic and real-world datasets illustrate the advantages of our model compared to rough sets, which are deemed a solid state-of-the-art approach for coping with inconsistency. In particular, we show that the accuracy of selected classifiers can be increased by up to 36% by weighting the presumably correct and incorrect instances during the training process.
M3 - Conference contribution
SP - 1
EP - 3
BT - Pre-proceedings of the Joint International Scientific Conferences On AI And Machine Learning BNAIC/BeNeLearn 2023
PB - TU Delft Open
T2 - The 35th Artificial Intelligence and 32nd Machine Learning Conferences of the Benelux, BNAIC/BENELEARN 2023
Y2 - 8 November 2023 through 10 November 2023
ER -