MURAL - Maynooth University Research Archive Library



    Visualisation Techniques for Interpreting Machine Learning Models


    Inglis, Alan (2022) Visualisation Techniques for Interpreting Machine Learning Models. PhD thesis, National University of Ireland Maynooth.



    Abstract

    With the increase in complex Machine Learning (ML) models making decisions in everyday life, in fields ranging from economics to healthcare, the demand for Interpretable Machine Learning (IML) techniques has grown. One way to broaden our understanding of the behaviour of a fitted ML model is through informative visualisations. Visualisations can aid interpretation and provide a more thorough examination of the predictions generated by an ML model. This is of particular importance when using so-called black-box models, such as random forests or Bayesian Additive Regression Trees (BART) models. In this thesis, we propose various IML approaches through novel visualisations of metrics and model summaries that can be used to examine the behaviour of a fitted ML model. First, we present a suite of flexible visualisations for investigating variable importance, interactions, and variable effects, which aid in the interpretation of statistical and ML models via both model-specific and model-agnostic methods. Following this, motivated in part by the lack of existing visualisation methods and by the rising popularity of the model, we develop novel visualisations for examining BART models, including their tree structures and, through the posterior distribution, the uncertainty surrounding their predictions. Lastly, we demonstrate and discuss our implementation of the R package vivid (Variable Importance and Variable Interaction Displays), which is used to explore the behaviour of fitted ML models. Here, we focus on the key package features and general architectural principles used in vivid when designing informative IML visualisations, and provide a practical illustration of the package in use.
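
    To give a flavour of the workflow the abstract describes, the short R sketch below fits a black-box model and visualises its variable importance and interactions with vivid. The functions vivi(), viviHeatmap(), and viviNetwork() are from the package's documented interface; the random forest model and the built-in airquality data are illustrative assumptions, not examples taken from the thesis itself.

        # A minimal sketch of the vivid workflow, assuming a random forest
        # fit to R's built-in airquality data (both choices are illustrative).
        library(randomForest)
        library(vivid)

        aq  <- na.omit(airquality)                 # drop rows with missing values
        fit <- randomForest(Ozone ~ ., data = aq)  # fit a black-box model

        # vivi() returns a square matrix with variable importance scores on
        # the diagonal and pairwise interaction strengths off the diagonal.
        viviMat <- vivi(fit = fit, data = aq, response = "Ozone")

        viviHeatmap(mat = viviMat)   # heatmap of importance and interactions
        viviNetwork(mat = viviMat)   # network view of the same matrix

    Both plotting functions consume the same matrix, so the importance and interaction measures are computed once and can then be displayed in whichever form suits the analysis.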

    Item Type: Thesis (PhD)
    Keywords: Visualisation Techniques; Interpreting; Machine Learning Models
    Academic Unit: Faculty of Science and Engineering > Mathematics and Statistics
    Item ID: 17359
    Depositing User: IR eTheses
    Date Deposited: 23 Jun 2023 09:40
    URI:
    Use Licence: This item is available under a Creative Commons Attribution-NonCommercial-ShareAlike licence (CC BY-NC-SA).
