Symbolic Knowledge Extraction Algorithms Analysis


Pedagogical SKE algorithms always require a dataset to extract knowledge from a predictor. The goal of this thesis is to investigate how the extracted knowledge is affected by the choice of that dataset. Very often the same training set used to train the predictor is also used for the extraction. Afterwards, the fidelity score of the knowledge w.r.t. the predictor is computed, i.e., an accuracy measured not against the original test-set labels, but against the predictor's outputs on the test set.
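The pedagogical scheme and its fidelity score can be sketched as follows. This is a generic scikit-learn sketch, not the PSyKE API; the black box (a random forest), the surrogate (a CART tree), and the dataset (Iris) are illustrative assumptions:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The opaque predictor whose knowledge is to be extracted.
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Pedagogical extraction: relabel the extraction dataset (here the
# training set itself) with the predictor's outputs and fit an
# interpretable surrogate (a CART tree) on those labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: accuracy of the surrogate w.r.t. the predictor's outputs
# on the test set, not w.r.t. the original test-set labels.
fidelity = accuracy_score(black_box.predict(X_test),
                          surrogate.predict(X_test))
```

The same scheme applies unchanged to any pedagogical extractor: only the surrogate (and the form of the extracted knowledge) varies.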

Some research questions to be answered are:

1) How does the fidelity change if the dataset is not representative of the population?
2) How do the extracted knowledge and its fidelity change across different kinds of SKE algorithms?
3) Are there SKE algorithms that are (more) robust to changes in the dataset?
4) How do SKE algorithms behave if the predictor has low accuracy? Does the extracted knowledge still reach high fidelity?
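Research question 1 can be probed, for instance, by extracting knowledge from a deliberately skewed dataset (e.g., one with a whole class removed) and comparing the resulting fidelity against extraction from the full training set. A hedged scikit-learn sketch, with illustrative model and dataset choices:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

def extract_and_score(X_extract):
    """Fit a CART surrogate on the predictor's labels for X_extract,
    then return its fidelity w.r.t. the predictor on the test set."""
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X_extract, black_box.predict(X_extract))
    return accuracy_score(black_box.predict(X_test), tree.predict(X_test))

# Representative extraction set vs. one missing an entire class:
# the skewed surrogate can never reproduce the predictor's outputs
# for the unseen class, so its fidelity should drop.
fid_full = extract_and_score(X_train)
fid_skew = extract_and_score(X_train[y_train != 2])
```

The same harness generalises to the other research questions by swapping the extraction dataset, the surrogate family, or the accuracy of the black box.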

It would be interesting (and, in fact, mandatory) to use different SKE algorithms, predictors, and datasets. Concerning SKE algorithms, choose at least four (one from each category):

a) one based on decision trees (e.g., CART, C4.5, etc.);
b) one based on hypercubes (e.g., ITER, GridEx, etc.);
c) one NOT based on hypercubes or decision trees (e.g., REAL, Trepan, etc.);
d) one decompositional SKE algorithm to compare it with the pedagogical ones.

(keywords) eXplainable AI; symbolic knowledge extraction; PSyKE


Thesis

— reference: Matteo Magnini
— supervisors: Andrea Omicini
— cycle: second-cycle thesis
— status: available thesis
— available since: 31/08/2022
