Algorithms for explaining machine learning models
This is a patch release to correct a regression in `AnchorImage` introduced in v0.6.3, where `AnchorImage` would ignore user `segmentation_kwargs` (#581).

- `Pillow` and `scikit-image` have been bumped to 9.x and 0.19.x respectively.
- `IntegratedGradients` now supports a `target_fn` argument, used to calculate the scalar target dimension from the model output. This bypasses the requirement of passing `target` directly to `explain` when the target of interest may depend on the prediction output. See the example in the docs (#523).
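A common case for a target-from-prediction function is selecting the predicted class. A minimal sketch (the function body here is illustrative, not alibi's code; see the alibi docs for the exact `target_fn` contract):

```python
import numpy as np

# A hypothetical target_fn: reduce the model's output to the scalar
# target dimension by picking the highest-scoring class per instance.
def target_fn(predictions: np.ndarray) -> np.ndarray:
    return np.argmax(predictions, axis=1)

# Dummy batch of class scores for three instances over four classes.
preds = np.array([
    [0.1, 0.7, 0.1, 0.1],
    [0.6, 0.2, 0.1, 0.1],
    [0.2, 0.2, 0.5, 0.1],
])
print(target_fn(preds))  # -> [1 0 2]
```

Passing such a function means the target does not need to be known ahead of calling `explain`; it is derived from whatever the model predicts.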
- Fixed a bug in `AnchorImage` leading to an error (#542).
- Fixed a bug in `CounterfactualRLTabular` (#550).
- `numpy` typing has been updated to be compatible with `numpy` 1.22 (#543). This is a prerequisite for upgrading to `tensorflow` 2.7.
- Optional type-checking with `mypy` has been reinstated (#541).
- The `tensorflow` version has been bumped from 2.6 to 2.7 (#377).
- `AnchorTabular`, `AnchorImage` and `AnchorText` now expose an additional `dtype` keyword argument with a default value of `np.float32`. This ensures that whenever a user `predictor` is called internally with dummy data, a correct data type can be ensured (#506).
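The purpose of the `dtype` argument can be illustrated with a predictor that is strict about input types. The predictor below is a stand-in for illustration, not alibi code:

```python
import numpy as np

# A stand-in for a user predictor that only accepts float32 input,
# e.g. a model exported with a fixed input signature.
def strict_predictor(x: np.ndarray) -> np.ndarray:
    assert x.dtype == np.float32, f"expected float32, got {x.dtype}"
    return np.zeros((x.shape[0], 2), dtype=np.float32)

# Explainers call the predictor internally with dummy data; a dtype
# keyword like the one described above controls the dummy array's type.
dtype = np.float32
dummy = np.zeros((1, 5), dtype=dtype)
out = strict_predictor(dummy)  # succeeds; a float64 dummy would raise
```

Without control over the dummy data's dtype, such a predictor would fail during explainer construction even though real data would be typed correctly.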
- New module `alibi.exceptions` defining the `alibi` exception hierarchy. This introduces two exceptions, `AlibiPredictorCallException` and `AlibiPredictorReturnTypeError`. See #520 for more details.
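A minimal sketch of what such a hierarchy looks like. Only the two leaf exceptions are named in the release notes; the base class name below is an assumption for illustration:

```python
# Hypothetical reconstruction of an exception hierarchy like the one described.
class AlibiException(Exception):
    """Assumed common base class for the library's exceptions."""

class AlibiPredictorCallException(AlibiException):
    """Raised when calling the user-supplied predictor fails."""

class AlibiPredictorReturnTypeError(AlibiException):
    """Raised when the predictor returns an unexpected type."""

# Callers can catch either exception via the shared base class.
try:
    raise AlibiPredictorReturnTypeError("predictor returned a list, expected ndarray")
except AlibiException as e:
    print(type(e).__name__)  # -> AlibiPredictorReturnTypeError
```

The value of a shared base class is that user code can catch all library-specific failures with a single `except` clause.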
- For `AnchorImage`, the `image_shape` argument is now coerced into a tuple, implicitly allowing a list input, which eases the use of configuration files. In the future the typing will be improved to be more explicit about allowed types, with runtime type checking.
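The coercion pattern is simple; a sketch (the function name is hypothetical) of why it helps with configuration files, where JSON/YAML arrays load as Python lists:

```python
# Sketch: coerce image_shape to a tuple so a list loaded from a
# JSON/YAML configuration file is accepted alongside a literal tuple.
def normalize_image_shape(image_shape):
    return tuple(image_shape)

assert normalize_image_shape([299, 299, 3]) == (299, 299, 3)  # list from config
assert normalize_image_shape((299, 299, 3)) == (299, 299, 3)  # tuple unchanged
```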
- Bumped the `shap` version to the latest `0.40.0`, as this fixes an installation issue if `alibi` and `shap` are installed with the same command.
- Fixed an issue with the `transformers` library (#528).
- Fixed a bug in `IntegratedGradients` where `forward_kwargs` was not always being correctly passed (#525).
- Fixed a bug affecting the `TreeShap` predictor (#534).
- Our CI now uses the `readthedocs` Docker image to replicate the doc-building environment exactly. Also enabled the `readthedocs` build-on-PR feature, which allows browsing the built docs on every PR.
- Adopted `myst` (a markdown superset) for more flexible documentation (#482).
- A new counterfactual method exposed via the `alibi.explainers.CounterfactualRL` and `alibi.explainers.CounterfactualRLTabular` classes. The method is model-agnostic and the implementation is written in both PyTorch and TensorFlow. See docs for more information.
- The `CounterFactual` and `CounterFactualProto` class names have been changed to `Counterfactual` and `CounterfactualProto` respectively, for consistency and correctness. The old class names continue working for now but emit a deprecation warning message and will be removed in an upcoming version.
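A common way to implement such a rename (a sketch of the general pattern, not the library's actual code) is an alias subclass that warns on instantiation:

```python
import warnings

class Counterfactual:
    """The correctly-named class."""
    def __init__(self, predictor=None):
        self.predictor = predictor

class CounterFactual(Counterfactual):
    """Deprecated alias kept for backwards compatibility."""
    def __init__(self, *args, **kwargs):
        warnings.warn(
            "CounterFactual is deprecated, use Counterfactual instead.",
            DeprecationWarning,
        )
        super().__init__(*args, **kwargs)

# The old name still works but emits a DeprecationWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    CounterFactual()
print(caught[0].category.__name__)  # -> DeprecationWarning
```

Because the alias subclasses the new class, `isinstance` checks against `Counterfactual` keep working for code that still constructs the old name.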
- `dill` behaviour was changed to not extend the `pickle` protocol, so that standard usage of `pickle` in a session with `alibi` does not change expected `pickle` behaviour. See discussion.
- `AnchorImage` internals were refactored to avoid persistent state between `explain` calls.
- The `pandoc` version for docs building was updated to `1.19.2`, which is what is used on `readthedocs`.
- `AnchorText` now supports sampling according to masked language models via the `transformers` library. See docs and the example for using the new functionality.
- For `AnchorText`, the public API for the constructor has changed. See docs for a full description of the new API.
- `AnchorTabular` now supports one-hot encoded categorical variables in addition to the default ordinal/label-encoded representation of categorical variables.
- `IntegratedGradients` was changed to allow explaining a wider variety of models. In particular, a new `forward_kwargs` argument to `explain` allows passing additional arguments to the model, and an `attribute_to_layer_inputs` flag allows calculating attributions with respect to layer input instead of output if set to `True`. The API and capabilities now track more closely to the captum.ai `PyTorch` implementation.
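The mechanics of a `forward_kwargs`-style argument can be sketched framework-free: extra keyword arguments flow from the explain call through to the model call. The model and function below are illustrative stand-ins, not alibi's API:

```python
# A stand-in model whose forward pass needs an extra argument,
# e.g. an attention mask for a transformer-style model.
def model(x, attention_mask=None):
    mask = attention_mask if attention_mask is not None else [1] * len(x)
    return [xi for xi, m in zip(x, mask) if m]

# Sketch: an explain-like function forwarding extra arguments to the model.
def explain(x, forward_kwargs=None):
    forward_kwargs = forward_kwargs or {}
    return model(x, **forward_kwargs)

print(explain([1, 2, 3], forward_kwargs={"attention_mask": [1, 0, 1]}))  # -> [1, 3]
print(explain([1, 2, 3]))  # no extra kwargs -> [1, 2, 3]
```

Keeping the extra arguments in a dict rather than named parameters means the explainer stays agnostic to each model's particular forward signature.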
- Added an example of using `IntegratedGradients` to explain `transformer` models.
- `IntegratedGradients`: fixed the path definition for attributions calculated with respect to an internal layer. Previously the paths were defined in terms of the inputs and baselines; now they are correctly defined in terms of the corresponding layer input/output.
- Explainers can now be saved and loaded via `dill`. See docs for more details.
- Fixed an issue with `model.layers` for `IntegratedGradients`.
- Compatibility with `numpy` 1.20.
- Using `KernelShap` and `TreeShap` now requires installing the `shap` dependency explicitly after installing `alibi`. This can be achieved by running `pip install alibi && pip install alibi[shap]`. The reason for this is that the build process for the upstream `shap` package is not well configured, resulting in broken installations as detailed in https://github.com/SeldonIO/alibi/pull/376 and https://github.com/slundberg/shap/pull/1802. We expect this to be a temporary change until changes are made upstream.
- New `reset_predictor` method for black-box explainers. The intended use case is deploying an already-configured explainer to work with a remote predictor endpoint instead of the local predictor used in development.
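The deployment workflow this enables can be sketched as follows. The class and predictor names below are illustrative, not alibi's implementation:

```python
# Sketch of the reset_predictor pattern: configure an explainer against a
# local model, then point the same explainer at a remote model endpoint.
class BlackBoxExplainer:
    def __init__(self, predictor):
        self.predictor = predictor

    def reset_predictor(self, predictor):
        # Swap the predictor while keeping all other configuration intact.
        self.predictor = predictor

    def explain(self, x):
        return self.predictor(x)

local_model = lambda x: [v * 2 for v in x]

def remote_predictor(x):
    # Stand-in for a client calling a remote inference endpoint over HTTP.
    return [v * 2 for v in x]

explainer = BlackBoxExplainer(local_model)       # configured in development
explainer.reset_predictor(remote_predictor)      # swapped in at deployment
print(explainer.explain([1, 2]))  # -> [2, 4]
```

Because black-box explainers only ever call the predictor as a function, swapping it for an endpoint client requires no other changes to the explainer's state.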
- New `alibi.datasets.load_cats` function, which loads a small sample of cat images shipped with the library to be used in examples.
- Removed the `alibi.datasets.fetch_imagenet` function, as the Imagenet API is no longer available.
- `IntegratedGradients` now works with subclassed TensorFlow models.
- Removed functionality from `IntegratedGradients`, as this was not working properly and is difficult to do in the general case.
- Fixed `AnchorTabular` tests not being picked up due to a name change of test data fixtures.
- `IntegratedGradients` now supports models with multiple inputs. For each input of the model, attributions are calculated and returned in a list. The method is also extended to allow calculating attributions for multiple internal layers: if a list of layers is passed, a list of attributions is returned. See https://github.com/SeldonIO/alibi/pull/321.
- `ALE` now supports selecting a subset of features to explain. This can be useful to reduce runtime if only some features are of interest, and it also indirectly helps with categorical variables by allowing them to be excluded (as `ALE` does not support categorical variables).
- The `AnchorTabular` coverage calculation was incorrect, caused by incorrectly indexing a list; this is now resolved.
- `ALE` was causing an error when a constant feature was present. This is now handled explicitly, and the user has control over how to handle these features. See https://docs.seldon.io/projects/alibi/en/latest/api/alibi.explainers.ale.html#alibi.explainers.ale.ALE for more details.
- Fixed `AnchorText` functionality, as the way `lexeme_prob` tables are loaded was changed. This is now fixed by explicitly handling the loading depending on the `spacy` version.
- Explanations are returned as an `Explanation` object instead of the old `dict` object.
- Updated the `CounterFactual`, `CounterFactualProto` and `CEM` docs to explain the necessity of clearing the TensorFlow graph if switching to a new model in the same session.
- Added `requirements/dev.txt` and `requirements/docs.txt`.
- Added `.readthedocs.yml` to control how user-facing docs are built directly from the repo.
- Removed functionality from `setup.py`, as the workflow is both unused and outdated.
- Excluded `shap==0.38.1` as a dependency, as it assumes `IPython` is installed and breaks the installation.
- New optional dependency `ray`. To use, install `ray` using `pip install alibi[ray]`.
- `KernelShap`: distributed version using the new distributed backend.
- Added `data['raw']['instances']`, which is a batch-wise version of the existing `data['raw']['instance']`. This is in preparation for the eventual batch support for anchor methods.
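The relationship between the singular and batch-wise fields can be sketched with plain dicts (the surrounding structure is illustrative; only the two field names come from the release notes):

```python
# Sketch of the raw explanation data: the new batch-wise 'instances' field
# holds a list, of which the existing singular 'instance' is the first entry.
instance = [5.1, 3.5, 1.4, 0.2]          # a single explained instance
explanation_data = {
    "raw": {
        "instance": instance,            # existing single-instance field
        "instances": [instance],         # new batch-wise field (batch of one)
    }
}
assert explanation_data["raw"]["instances"][0] == explanation_data["raw"]["instance"]
```

Exposing both fields lets existing consumers keep reading `instance` while new code written for batches reads `instances`.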
- Adopted `pyupgrade` via `nbqa` for formatting example notebooks using Python 3.6+ syntax.
- `data['raw']['prediction']` is now batch-wise, i.e. for `AnchorTabular` and `AnchorImage` it is a 1-dimensional `numpy` array, whilst for `AnchorText` it is a list of strings. This is in preparation for the eventual batch support for anchor methods.
- Removed the dependency on `prettyprinter` and substituted it with a slightly modified standard-library version of `PrettyPrinter`. This is to prepare for a `conda` release, which requires all dependencies to also be published on `conda`.
- New `update_metadata` method for any `Explainer` object, to enable easy bookkeeping of algorithm parameters.
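A sketch of what such a bookkeeping method can look like. The class, the `params` key, and the method signature below are assumptions for illustration, not alibi's actual implementation:

```python
# Sketch: keep algorithm parameters in a metadata dict so an explainer's
# configuration is easy to inspect and serialize.
class Explainer:
    def __init__(self, name):
        self.meta = {"name": name, "params": {}}

    def update_metadata(self, data_dict, params=False):
        # Store parameter-like entries under 'params'; everything else top-level.
        if params:
            self.meta["params"].update(data_dict)
        else:
            self.meta.update(data_dict)

exp = Explainer("KernelShap")
exp.update_metadata({"link": "identity"}, params=True)
print(exp.meta["params"])  # -> {'link': 'identity'}
```

Centralizing parameters in one dict means any downstream tooling (logging, model registries) can record an explainer's configuration without knowing its internals.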
- Updated the `KernelShap` wrapper to work with the newest `shap>=0.36` library.
- `KernelShap` and `TreeShap`