Interpretable Prediction of Galactic Cosmic-Ray Short-Term Variations with Artificial Neural Networks
Monitoring galactic cosmic-ray flux variations is a crucial issue for all space missions in which cosmic rays limit the performance of the on-board instruments. When galactic cosmic-ray short-term fluctuations cannot be studied on board, it is necessary to rely on models able to predict these flux modulations. Artificial neural networks are nowadays among the most widely used tools for solving problems across many disciplines, including medicine, technology, and business. Artificial neural networks are, however, black boxes: their internal logic is hidden from the user. When this lack of transparency constitutes a problem, knowledge-extraction algorithms are applied to the networks to obtain explainable models. This thesis describes the implementation and optimisation of an explainable model for predicting the galactic cosmic-ray short-term flux variations observed on board the European Space Agency mission LISA Pathfinder. The model is based on an artificial neural network that takes as input solar wind speed and interplanetary magnetic field intensity measurements gathered by the National Aeronautics and Space Administration missions Wind and ACE, orbiting near LISA Pathfinder. Knowledge extraction is performed by applying both the ITER algorithm and a linear regressor to the underlying neural network; ITER was selected after a thorough survey of the available literature. The model presented here provides explainable predictions with errors smaller than the LISA Pathfinder statistical uncertainty.
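The abstract's overall pipeline (a black-box network trained on solar wind speed and interplanetary magnetic field intensity, followed by knowledge extraction via a linear regressor) can be sketched in miniature. The sketch below is purely illustrative: the synthetic data, network size, and training settings are assumptions, not the thesis implementation, and the simple global linear surrogate stands in for the regressor step (the ITER rule-extraction algorithm is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for solar wind speed (km/s) and interplanetary
# magnetic field intensity (nT), with a made-up target flux variation
# used for demonstration only.
n = 500
speed = rng.uniform(300.0, 700.0, n)
imf = rng.uniform(2.0, 12.0, n)
X = np.column_stack([speed, imf])
y = -0.02 * speed - 0.5 * imf + rng.normal(0.0, 1.0, n)

# Standardise inputs so the tiny network trains stably.
Xs = (X - X.mean(axis=0)) / X.std(axis=0)

# One-hidden-layer network (the "black box"), trained by plain
# gradient descent on the mean-squared-error loss.
W1 = rng.normal(0.0, 0.1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 0.1, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(Xs @ W1 + b1)
    pred = (h @ W2 + b2).ravel()
    err = pred - y
    # Backpropagation through the two layers.
    g_pred = 2.0 * err[:, None] / n
    gW2 = h.T @ g_pred; gb2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T * (1.0 - h ** 2)
    gW1 = Xs.T @ g_h; gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Knowledge extraction by a linear surrogate: fit ordinary least
# squares to the network's own predictions, yielding interpretable
# per-input coefficients for the learned behaviour.
net_out = (np.tanh(Xs @ W1 + b1) @ W2 + b2).ravel()
A = np.column_stack([Xs, np.ones(n)])
coef, *_ = np.linalg.lstsq(A, net_out, rcond=None)
print("surrogate coefficients (speed, IMF, intercept):", coef)
```

The surrogate's signed coefficients make the network's input-output behaviour readable (here, both inputs drive the predicted flux variation downward), which is the spirit of the knowledge-extraction step described above.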
Keywords: Artificial neural networks, knowledge extraction, explainable artificial intelligence, interpretable prediction, LISA Pathfinder, cosmic rays
Thesis
— thesis student
supervision
— supervisors
— co-supervisors: Giovanni Ciatto, Catia Grimani
type
— cycle: second-cycle thesis
— status: completed thesis
— language
dates
— activity started: 15/03/2020
— degree date: 17/12/2020
files