evalify

Evaluate Biometric Authentication Models Literally in Seconds.

Installation

Stable release:

pip install evalify

Bleeding edge:

pip install git+https://github.com/ma7555/evalify.git

Used for

Evaluating any biometric authentication model whose output is a high-level embedding, known as a feature vector for visual or behavioural biometrics or a d-vector for auditory biometrics.

Usage

import numpy as np
from evalify import Experiment

rng = np.random.default_rng()
nphotos = 500
emb_size = 32
nclasses = 10
X = rng.random((nphotos, emb_size))
y = rng.integers(nclasses, size=nphotos)

experiment = Experiment()
experiment.run(X, y)
experiment.get_roc_auc()
print(experiment.roc_auc)
print(experiment.find_threshold_at_fpr(0.01))

How it works

  • When you run an experiment, evalify evaluates all possible pairs between individuals for authentication, based on the X and y parameters, and returns the results including FPR, TPR, FNR, TNR and ROC AUC. X is an array of embeddings and y is an array of the corresponding targets.
  • Evalify can find the optimal threshold at your target FPR for any supported similarity or distance metric.
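As a rough sketch of the pair-generation step described above (the function and variable names here are illustrative, not evalify's actual API), same-identity pairs become genuine attempts and cross-identity pairs become impostor attempts:

```python
import itertools

import numpy as np


def genuine_impostor_pairs(y):
    # Enumerate every unordered pair (i, j) of samples. A pair is
    # "genuine" when both embeddings belong to the same identity,
    # and an "impostor" pair otherwise.
    idx_a, idx_b, genuine = [], [], []
    for i, j in itertools.combinations(range(len(y)), 2):
        idx_a.append(i)
        idx_b.append(j)
        genuine.append(y[i] == y[j])
    return np.array(idx_a), np.array(idx_b), np.array(genuine)
```

The genuine/impostor labels, together with the per-pair similarity scores, are what the FPR/TPR/FNR/TNR and ROC AUC figures are computed from.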

Documentation:

Features

  • Blazing-fast metric calculation through optimized Einstein summation and vectorized operations.
  • Many operations are dispatched to canonical BLAS, cuBLAS, or other specialized routines.
  • Smart sampling options using direct indexing from pre-calculated arrays with total control over sampling strategy and sampling numbers.
  • Supports most evaluation metrics:
    • cosine_similarity
    • pearson_similarity
    • cosine_distance
    • euclidean_distance
    • euclidean_distance_l2
    • minkowski_distance
    • manhattan_distance
    • chebyshev_distance
  • Computing 4 metrics over a 4.2-million-sample experiment takes 24 seconds, versus 51 minutes when looping with scipy.spatial.distance implementations.
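For illustration, cosine similarity over a batch of embedding pairs can be computed in one vectorized pass with np.einsum, which is the style of computation the feature list refers to (this helper is a sketch, not evalify's internal code):

```python
import numpy as np


def cosine_similarity_batch(a, b):
    # Row-wise dot products in a single einsum call, then
    # normalization by the product of the row norms -- no Python
    # loop over the individual pairs.
    dots = np.einsum("ij,ij->i", a, b)
    norms = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
    return dots / norms
```

Because einsum dispatches to optimized BLAS-backed kernels, this scales to millions of pairs far faster than calling a per-pair distance function in a loop.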

TODO

  • Safer memory allocation. No issues were encountered in testing, but if you run out of memory, please set the batch_size argument manually.

Contribution

  • Contributions are welcome, and they are greatly appreciated! Every little bit helps, and credit will always be given.
  • Please check CONTRIBUTING.md for guidelines.

Citation

  • If you use this software, please cite it using the metadata from CITATION.cff.