InterpretDL Versions

InterpretDL: Interpretation of Deep Learning Models, a model interpretability algorithm library based on PaddlePaddle.

v0.8.0

1 year ago

We release version 0.8.0 of InterpretDL, with the following new features:

  • Add new trustworthiness/faithfulness evaluation metrics, including Infidelity. NLP tasks are now supported as well.
  • TransformerInterpreters now support models with global pooling at the end.
  • Add SmoothGradNLPInterpreter.
  • General compatibility improvements.
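
To illustrate what the new Infidelity metric measures (following Yeh et al., 2019), here is a minimal NumPy sketch: it compares the output change an explanation predicts for a random perturbation against the model's actual output change. The function name and signature are illustrative stand-ins, not InterpretDL's actual API.

```python
import numpy as np

def infidelity(model_fn, x, explanation, n_samples=1000, sigma=0.1, seed=0):
    """Mean squared gap between the explanation's predicted effect of a
    random perturbation I (the dot product I . explanation) and the model's
    actual output change f(x) - f(x - I)."""
    rng = np.random.default_rng(seed)
    gaps = []
    for _ in range(n_samples):
        I = rng.normal(0.0, sigma, size=x.shape)  # random perturbation
        predicted_change = float(I @ explanation)
        actual_change = float(model_fn(x) - model_fn(x - I))
        gaps.append((predicted_change - actual_change) ** 2)
    return float(np.mean(gaps))

# Sanity check: for a linear model f(x) = w . x, the gradient w is a
# perfect explanation, so its infidelity is exactly zero.
w = np.array([0.5, -1.0, 2.0])
model = lambda x: w @ x
x = np.array([1.0, 2.0, 3.0])
print(infidelity(model, x, w))  # 0.0
```

Lower infidelity means the explanation better predicts the model's behavior under perturbation; a mismatched explanation yields a strictly positive score.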

Deprecations:

  • use_cuda is removed; use device instead.
  • _paddle_prepare is removed; use _build_predict_fn instead.
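
A quick migration sketch for the removed use_cuda flag, which is replaced by a device string. The helper below mirrors the mapping; the interpreter call in the comments is illustrative, not a guaranteed signature.

```python
def legacy_to_device(use_cuda: bool) -> str:
    """Map the removed boolean flag to the new device string
    (assumed convention: "gpu:0" for CUDA, "cpu" otherwise)."""
    return "gpu:0" if use_cuda else "cpu"

# Before (removed in v0.8.0):
#   interpreter = SomeInterpreter(model, use_cuda=True)
# After:
#   interpreter = SomeInterpreter(model, device="gpu:0")
print(legacy_to_device(True))   # gpu:0
print(legacy_to_device(False))  # cpu
```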

Two more of our papers were accepted, by AAAI'23 and Artificial Intelligence respectively. See the implementations at G-LIME and TrainingDynamics.

v0.7.0

1 year ago

We release version 0.7.0 of InterpretDL, with the following new features:

  • Examples have been moved into a separate directory, examples/. Tutorials remain in the tutorials directory.
  • A new explanation algorithm, bidirectional_transformer, is implemented.
  • Documentation improvements.
  • Several bug fixes.

We would also like to brag that our paper introducing InterpretDL has been accepted by the Journal of Machine Learning Research (JMLR):

Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Zeyu Chen, and Dejing Dou. “InterpretDL: Explaining Deep Models in PaddlePaddle.” Journal of Machine Learning Research, 2022. https://jmlr.org/papers/v23/21-0738.html.

One survey paper has been accepted by Knowledge and Information Systems (KAIS):

Xuhong Li, Haoyi Xiong, Xingjian Li, Xuanyu Wu, Xiao Zhang, Jiang Bian, and Dejing Dou. “Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond.” Knowledge and Information Systems, 2022, Springer. https://arxiv.org/abs/2103.10689.

In addition, two research works were accepted by ECML'22 and the Machine Learning Journal:

Xuhong Li, Haoyi Xiong, Siyu Huang, Shilei Ji, and Dejing Dou. “Cross-Model Consensus of Explanations and Beyond for Image Classification Models: An Empirical Study.” ECML'22, Machine Learning Journal Track. https://arxiv.org/abs/2109.00707.

Xuhong Li, Haoyi Xiong, Yi Liu, Dingfu Zhou, Zeyu Chen, Yaqing Wang, and Dejing Dou. “Distilling Ensemble of Explanations for Weakly-Supervised Pre-Training of Image Segmentation Models.” Machine Learning (2022): 1–17. https://arxiv.org/abs/2207.03335.

We have also released a dataset containing more than 1.2 million pseudo semantic segmentation labels for ImageNet images. See PaddleSeg:PSSL to download the dataset and the pretrained models.

v0.6.2

1 year ago

We release version 0.6.2 of InterpretDL.

v0.6.1

1 year ago

We release version 0.6.1 of InterpretDL.

v0.6.0

2 years ago

We release version 0.6.0 of InterpretDL. The supported interpretation algorithms are summarized below:

| Methods | Representation | Model Type | Example |
|---|---|---|---|
| LIME | Input Features | Model-Agnostic | link1 \| link2 |
| LIME with Prior | Input Features | Model-Agnostic | link |
| NormLIME/FastNormLIME | Input Features | Model-Agnostic | link1 \| link2 |
| LRP | Input Features | Differentiable | link |
| SmoothGrad | Input Features | Differentiable | link |
| IntGrad | Input Features | Differentiable | link |
| GradSHAP | Input Features | Differentiable | link |
| Occlusion | Input Features | Model-Agnostic | link |
| GradCAM/CAM | Intermediate Features | Specific: CNNs | link |
| ScoreCAM | Intermediate Features | Specific: CNNs | link |
| Rollout | Intermediate Features | Specific: Transformers | link |
| TAM | Intermediate Features | Specific: Transformers | link |
| ForgettingEvents | Dataset-Level | Differentiable | link |
| TIDY (Training Data Analyzer) | Dataset-Level | Differentiable | link |
| Consensus | Features | Cross-Model | link |
| Generic Attention | Input Features | Specific: Bi-Modal Transformers | link (nblink)* |

* For text visualizations, NBViewer gives better and colorful rendering results.
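
As a concrete example of one method in the table, here is a minimal NumPy sketch of Integrated Gradients (IntGrad): average the gradient along the straight path from a baseline to the input, then scale by (input − baseline). The quadratic model and grad_fn below are illustrative stand-ins for a differentiable network.

```python
import numpy as np

def integrated_gradients(grad_fn, x, baseline, steps=50):
    """Approximate the path integral of gradients with a midpoint Riemann sum."""
    alphas = (np.arange(steps) + 0.5) / steps
    avg_grad = np.mean(
        [grad_fn(baseline + a * (x - baseline)) for a in alphas], axis=0
    )
    return (x - baseline) * avg_grad

# Toy differentiable model: f(x) = sum(x**2), so grad f = 2x.
# IntGrad satisfies completeness: attributions sum to f(x) - f(baseline).
f = lambda x: np.sum(x ** 2)
grad_f = lambda x: 2 * x
x = np.array([1.0, -2.0, 3.0])
baseline = np.zeros_like(x)
attr = integrated_gradients(grad_f, x, baseline)
print(attr.sum(), f(x) - f(baseline))  # both 14.0
```

The completeness property checked in the last line is what distinguishes IntGrad from plain gradients, which can miss saturated features.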

v0.5.3

2 years ago

We release version 0.5.3 of InterpretDL, with improvements to code style and documentation.

v0.5.2

2 years ago

We release version 0.5.2 of InterpretDL, with improvements to NormLIME. The NormLIME tutorial has been updated accordingly.

Besides, the use_cuda argument has been removed from tutorials and unit tests; use_cuda will be removed entirely in a future version.

v0.5.1

2 years ago

We release version 0.5.1 of InterpretDL, with small fixes:

  • Update the README, adding a schema of the relations among interpretation, interpretability, and trustworthiness.
  • Fix some import errors.
  • Add one more base Interpreter: IntermediateLayerInterpreter.

Thanks @Wgm-Inspur for correcting the parameter of GradShapNLPInterpreter used in tutorials.

We would also like to mention that the argument use_cuda is deprecated; use device instead.

v0.5.0

2 years ago

We release version 0.5.0 of InterpretDL, with the following new features:

  • Two more evaluation metrics are added for measuring the trustworthiness of interpretation algorithms: perturbation tests and Pointing Game. APIs for Perturbation, PointGame, and PointGameSegmentation are available, with corresponding tutorials 1 and 2.
  • Since Paddle 2.2.1, gradients can be computed in eval mode; InterpretDL now supports this too, making gradient computation easier.
  • Deprecation of use_cuda is on the way; use device instead.
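
The Pointing Game idea is simple enough to sketch in a few lines: an explanation "hits" if its most salient location falls inside the ground-truth region. The function name and signature below are illustrative, not InterpretDL's actual API.

```python
import numpy as np

def pointing_game_hit(saliency: np.ndarray, gt_mask: np.ndarray) -> bool:
    """True if the argmax of the saliency map lies inside the
    ground-truth boolean mask (same shape as the saliency map)."""
    idx = np.unravel_index(np.argmax(saliency), saliency.shape)
    return bool(gt_mask[idx])

# Toy 4x4 example: the peak at (1, 2) falls inside the object mask.
saliency = np.zeros((4, 4))
saliency[1, 2] = 1.0
mask = np.zeros((4, 4), dtype=bool)
mask[1, 2] = True
print(pointing_game_hit(saliency, mask))  # True
```

Averaging hits over a dataset gives the Pointing Game accuracy; higher is better.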

v0.4.0

2 years ago

We release version 0.4.0 of InterpretDL, with the following new features:

  • Add the cross-model Consensus explanation algorithm. See the API and the tutorial for details.
  • Add the Deletion and Insertion evaluation algorithms for measuring the trustworthiness of interpretation algorithms. See the API and the tutorial for details.
  • Support Continuous Integration for code quality; we chose CircleCI for InterpretDL. Code coverage is 93% at this version.
  • We add colorful badges to the README ;)
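
The Deletion test above can be sketched in a few lines of NumPy: remove features in decreasing saliency order and track how fast the model's score drops; a lower area under this curve indicates a more faithful explanation. The toy sum model and the function signature here are illustrative, not InterpretDL's actual API.

```python
import numpy as np

def deletion_auc(model_fn, x, saliency, baseline=0.0):
    """Delete features most-salient-first, recording the model score after
    each deletion, and return the (normalized) trapezoidal area under the
    resulting score curve."""
    order = np.argsort(saliency.ravel())[::-1]  # most salient first
    x_work = x.ravel().copy()
    scores = [float(model_fn(x_work.reshape(x.shape)))]
    for i in order:
        x_work[i] = baseline
        scores.append(float(model_fn(x_work.reshape(x.shape))))
    s = np.asarray(scores)
    return float(np.mean((s[:-1] + s[1:]) / 2.0))  # trapezoid rule, normalized

# Toy model that sums the input: a saliency map aligned with the true
# contributions drives the score down fastest, giving a lower AUC than
# the worst-case (reversed) ordering.
x = np.array([[3.0, 1.0], [2.0, 0.5]])
good = x.copy()  # saliency aligned with each pixel's contribution
bad = -x         # worst-case ordering
print(deletion_auc(np.sum, x, good) < deletion_auc(np.sum, x, bad))  # True
```

The Insertion test is the mirror image: start from the baseline, add features most-salient-first, and a higher AUC is better.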