Explaining Credit Decisions with Amazon SageMaker

Given the increasing complexity of machine learning models, the need for model explainability has been growing lately. Some governments have also introduced stricter regulations that mandate a right to explanation from machine learning models. In this solution, we take a look at how Amazon SageMaker can be used to explain individual predictions from machine learning models.

As an example application, we classify credit applications and predict whether the credit would be paid back or not (often called a credit default). More context can be found here. We train a tree-based LightGBM model using Amazon SageMaker and explain its predictions using a game theoretic approach called SHAP (SHapley Additive exPlanations).

Ultimately, we deploy an endpoint that returns the model prediction and the associated explanation.
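
As a rough sketch, once the endpoint is up it could be queried along the lines below. The endpoint name, feature names, and response contents shown here are illustrative assumptions rather than the solution's actual interface (in the solution, the notebook calls the endpoint through the custom JSON predictor in package/sagemaker/predictors.py).

```python
# Hypothetical sketch of invoking the deployed endpoint; the endpoint name and the
# request/response schema are assumptions, not the solution's actual contract.
import json

import boto3

runtime = boto3.client("sagemaker-runtime")

# Illustrative credit application; real requests use the solution's feature schema.
application = {
    "credit__amount": 5000,
    "employment__duration": 3,
}

response = runtime.invoke_endpoint(
    EndpointName="explaining-credit-decisions",  # placeholder endpoint name
    ContentType="application/json",
    Body=json.dumps(application),
)

result = json.loads(response["Body"].read())
# Expected to contain the predicted default risk and the per-feature explanation.
print(result)
```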

What is an explanation?

Given a set of input features used to describe a credit application (e.g. credit__amount and employment__duration), an explanation reflects the contribution of each feature to the model's final prediction. We include a number of visualizations that can be used to see how each feature pushes up or down the risk of credit default for an individual application. Click on the screenshot below to see an example of an exported explanation report.
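
As a simple illustration of this idea, per-feature contributions can be drawn as a signed bar chart, with positive bars pushing the predicted risk up and negative bars pushing it down. The feature names and values below are made up; the solution's own visualizations live in package/visuals.py.

```python
# Illustrative "push up / push down" chart from made-up feature contributions.
import matplotlib.pyplot as plt

contributions = {
    "credit__amount": 0.12,         # pushes default risk up
    "employment__duration": -0.05,  # pushes default risk down
    "personal__age": -0.02,
}

features = list(contributions)
values = [contributions[f] for f in features]
colors = ["tab:red" if v > 0 else "tab:green" for v in values]

plt.barh(features, values, color=colors)
plt.axvline(0, color="black", linewidth=0.8)
plt.xlabel("Contribution to predicted default risk")
plt.tight_layout()
plt.show()
```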

Getting Started

You will need an AWS account to use this solution. Sign up for an account here.

To run this JumpStart 1P Solution and have the infrastructure deployed to your AWS account, you will need an active SageMaker Studio instance (see Onboard to Amazon SageMaker Studio). When your Studio instance is Ready, use the instructions in SageMaker JumpStart to 1-Click Launch the solution.

The solution artifacts are included in this GitHub repository for reference.

Note: Solutions are available in most AWS Regions, including us-west-2 and us-east-1.

Caution: Cloning this GitHub repository and running the code manually could lead to unexpected issues! Use the AWS CloudFormation template. You'll get an Amazon SageMaker Notebook instance that's been correctly set up and configured to access the other resources in the solution.

Contents

  • cloudformation/
    • explaining-credit-decisions.yaml: Creates AWS CloudFormation Stack for solution.
    • glue.yaml: Used to create AWS Glue components.
    • sagemaker.yaml: Used to create Amazon SageMaker components.
    • solution-assistant.yaml: Used to prepare demonstration datasets and clean up resources.
  • dataset/
  • glue/
    • etl_job.py: Used by the AWS Glue job to transform datasets.
  • lambda/
    • datasets.py: Used to generate synthetic datasets.
    • lambda_function.py: Solution Assistant create and delete logic.
    • requirements.txt: Describes Python package requirements of the AWS Lambda function.
  • sagemaker/
    • requirements.txt: Describes Python package requirements of the Amazon SageMaker Notebook instance.
    • setup.py: Describes Python package used in the solution.
    • containers/
      • dashboard/
      • model/
        • Dockerfile: Describes custom Docker image hosted on Amazon ECR.
        • requirements.txt: Describes Python package requirements of the Docker image.
        • entry_point.py: Used by Amazon SageMaker for training and endpoint hosting.
    • notebooks/
      • notebook.ipynb: Orchestrates the solution.
    • package/
      • config.py: Stores and retrieves project configuration.
      • utils.py: Various utility functions for scripts and/or notebooks.
      • visuals.py: Contains explanation visualizations.
      • data/
        • datasets.py: Contains functions for reading datasets.
        • glue.py: Manages the AWS Glue workflow of crawling datasets and running jobs.
        • schemas.py: Schema creation and data validation.
      • machine_learning/
        • preprocessing.py: Scikit-learn steps to pre-process data for the model.
        • training.py: Scikit-learn steps to train and test the model.
      • sagemaker/
        • containers.py: Manages the Docker workflow of building and pushing images to Amazon ECR.
        • estimator_fns.py: Contains functions used by estimator.
        • explainer_fns.py: Contains functions used by explainer.
        • predictor_fns.py: Contains functions used by predictor.
        • predictors.py: Custom predictor for using JSON endpoint from notebook.

Architecture

As part of the solution, the following services are used:

  • Amazon SageMaker: trains the model, hosts the model endpoint, and runs the solution notebook.
  • AWS Glue: crawls the datasets and runs the ETL job that transforms them.
  • AWS Lambda: backs the solution assistant that generates the synthetic datasets and cleans up resources.
  • Amazon ECR: hosts the custom Docker image used for training and hosting.
  • Amazon S3: stores the datasets and model artifacts.
  • AWS CloudFormation: creates all of the above resources via nested stacks.

Costs

You are responsible for the cost of the AWS services used while running this solution.

As of 6th April 2020 in the US West (Oregon) region, the cost to:

  • prepare the dataset with AWS Glue is ~$0.75.
  • train the model using an Amazon SageMaker training job on ml.c5.xlarge is ~$0.02.
  • host the model using an Amazon SageMaker endpoint on ml.c5.xlarge is $0.119 per hour.
  • run an Amazon SageMaker notebook instance is $0.0582 per hour.
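
As a rough worked example, keeping the endpoint and the notebook instance running for 8 hours would cost about 8 × ($0.119 + $0.0582) ≈ $1.42, on top of the one-off data preparation and training costs of roughly $0.77.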

All prices are subject to change. See the pricing webpage for each AWS service you will be using in this solution.

Cleaning Up

When you've finished with this solution, make sure that you delete all unwanted AWS resources. AWS CloudFormation can be used to automatically delete all standard resources that have been created by the solution and notebook. Go to the AWS CloudFormation Console, and delete the parent stack. Choosing to delete the parent stack will automatically delete the nested stacks.

Caution: You need to manually delete any extra resources that you may have created in this notebook. Some examples include extra Amazon S3 buckets (in addition to the solution's default bucket), extra Amazon SageMaker endpoints (using a custom name), and extra Amazon ECR repositories.
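
As a rough sketch, such leftover resources could be removed with boto3. The resource names below are placeholders; substitute whatever you actually created.

```python
# Hypothetical clean-up of extra resources; all names below are placeholders.
import boto3

# Delete an extra SageMaker endpoint (and its endpoint configuration).
sagemaker = boto3.client("sagemaker")
sagemaker.delete_endpoint(EndpointName="my-extra-endpoint")
sagemaker.delete_endpoint_config(EndpointConfigName="my-extra-endpoint-config")

# Empty and delete an extra S3 bucket.
s3 = boto3.resource("s3")
bucket = s3.Bucket("my-extra-bucket")
bucket.objects.all().delete()
bucket.delete()

# Delete an extra ECR repository, including any images it still contains.
ecr = boto3.client("ecr")
ecr.delete_repository(repositoryName="my-extra-repository", force=True)
```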

Customizing

Our solution is easily customizable. Among other things, you can customize the synthetic datasets, the machine learning model, and the explanation visualizations.

FAQ

What is explainability?

Model explainability is the degree to which humans can understand the cause of decisions made by a machine learning model. Many methods now exist for generating explanations of complex models that are both interpretable and faithful.

Why is explainability useful?

An explanation gives stakeholders a way to understand the relationships and patterns learned by a machine learning model. As an example, an explanation can be used to verify that the model relies on meaningful relationships rather than spurious ones. Such checks give stakeholders more confidence in the reliability and robustness of the model for real-world deployments, which is critical for building trust in the system. When issues are found, explanations often give scientists a strong indication of what needs to be fixed in the dataset or model training procedure, saving significant time and money. Other serious issues, such as social discrimination and bias, can also be clearly flagged by an explanation.

Why is credit default prediction useful? And how does explainability help?

Given a credit application from a bank customer, the aim of the bank is to predict whether or not the customer will pay back the credit in accordance with their repayment plan. When a customer can't pay back their credit, often called a 'default', the bank loses money and the customer's credit score will be impacted. On the other hand, denying trustworthy customers credit also has a set of negative impacts.

Using accurate machine learning models to classify the risk of a credit application can help find a good balance between these two scenarios, but this provides no comfort to those customers who have been denied credit. Using explainability methods, it's possible to determine actionable factors that had a negative impact on the application. Customers can then take action to increase their chance of obtaining credit in subsequent applications.

What is SHAP?

SHAP (Lundberg et al. 2017) stands for SHapley Additive exPlanations. 'Shapley' relates to a game theoretic concept called Shapley values that is used to create the explanations. A Shapley value describes the marginal contribution of each 'player' when considering all possible 'coalitions'. Using this in a machine learning context, a Shapley value describes the marginal contribution of each feature when considering all possible sets of features. 'Additive' relates to the fact that these Shapley values can be summed together to give the final model prediction.

As an example, we might start off with a baseline credit default risk of 10%. Given a set of features, we can calculate the Shapley value for each feature. Summing together all the Shapley values, we might obtain a cumulative value of +30%. Given the same set of features, we therefore expect our model to return a credit default risk of 40% (i.e. 10% + 30%).
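
The snippet below is a minimal, self-contained sketch of computing SHAP values for a tree-based model with the shap library. It uses generated stand-in data rather than the solution's credit dataset, so it illustrates the technique but is not the solution's training code.

```python
# Minimal SHAP sketch on stand-in data (not the solution's credit dataset).
import lightgbm as lgb
import shap
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
model = lgb.LGBMClassifier(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # per-feature contributions for one row

# The expected value is the baseline; adding the per-feature Shapley values to it
# gives the model's output for this row (in the model's raw output space).
print("baseline:", explainer.expected_value)
print("contributions:", shap_values)
```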

Useful Resources

Credits

Our datasets (i.e. credits, people and contacts) were synthetically created from features contained in the German Credit Dataset (UCI Machine Learning Repository). All personal information was generated using Faker.
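
For illustration only, personal fields of this kind can be generated with Faker along the following lines (the solution's actual generation logic lives in lambda/datasets.py).

```python
# Illustrative use of Faker to generate fake personal information.
from faker import Faker

fake = Faker()
person = {
    "name": fake.name(),
    "address": fake.address(),
    "email": fake.email(),
}
print(person)
```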

License

This project is licensed under the Apache-2.0 License.
