A kedro plugin to integrate mlflow capabilities into kedro projects (especially machine learning model versioning and packaging).
- `kedro mlflow init`, `kedro mlflow ui` and `kedro mlflow modelify` now work even inside a subdirectory, and not only at the root of the kedro project, to be consistent with `kedro>0.19.4` (#531)
- Use `_is_project` and `_find_kedro_project` to be resilient to changes (#531)
- Added a `km.random_name` resolver which enables the use of auto-generated names for kedro runs, instead of the pipeline name, in the `mlflow.yml` configuration file (#426)
- `KedroPipelineModel` (#516, sebastiandro)
- Support for `kedro==0.19.X` (#)
- `kedro-mlflow` no longer supports the `kedro==0.18.X` series.
- `copy_mode`
to "assign"
in KedroPipelineModel
because this is the most efficient setup (and usually the desired one) when serving a Kedro Pipeline
as a Mlflow model. This is different from Kedro's default which is to deepcopy the dataset (#463).MlflowArtifactDataset.__init__
method data_set
argument is renamed dataset
to match new Kedro conventions (#391).DataSets
with the Dataset
suffix (without capitalized S
) to match new kedro conventions from kedro>=0.19
and onwards (#439, ShubhamZoro):
  - `MlflowArtifactDataSet` -> `MlflowArtifactDataset`
  - `MlflowAbstractModelDataSet` -> `MlflowAbstractModelDataset`
  - `MlflowModelRegistryDataSet` -> `MlflowModelRegistryDataset`
  - `MlflowMetricDataSet` -> `MlflowMetricDataset`
  - `MlflowMetricHistoryDataSet` -> `MlflowMetricHistoryDataset`
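
To illustrate the renames above, here is a sketch of a `catalog.yml` entry combining the new `Dataset` spelling with the renamed `dataset` argument of `MlflowArtifactDataset`. The entry name, the `pandas.CSVDataset` type and the filepath are illustrative placeholders, not taken from this changelog:

```yaml
# Sketch of a catalog.yml entry (entry name, wrapped type and filepath are placeholders)
my_metrics_artifact:
  type: kedro_mlflow.io.artifacts.MlflowArtifactDataset
  dataset:  # formerly the `data_set` argument
    type: pandas.CSVDataset
    filepath: data/08_reporting/my_metrics.csv
```
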
- Renamed the `DataSets` to make their use more explicit, and use the `Dataset` suffix:
- `kedro-mlflow` now works when the project is configured with the `TemplatedConfigLoader` and no `mlflow.yml` configuration file (#452)
- The `kedro-mlflow` hooks log parameters when the project is configured with the `OmegaConfigLoader`, instead of raising an error (#430)
- Dropped support for `python=3.7`, which has reached end-of-life status, to prepare 0.19 (#391)
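
The `copy_mode` change mentioned above boils down to how data is handed from one node to the next at serving time. A minimal, kedro-free sketch of the difference between `"assign"` (pass the same object along) and a deepcopy:

```python
import copy

data = {"features": [1, 2, 3]}

# copy_mode="assign": the object is passed on as-is, with zero copy cost
assigned = data

# deepcopy (Kedro's default): an independent duplicate, safe against
# mutation by downstream nodes but slower and more memory-hungry
deep_copied = copy.deepcopy(data)

assert assigned is data          # same object, no copy
assert deep_copied == data       # equal content...
assert deep_copied is not data   # ...but a separate object
```

This is why `"assign"` is usually the desired mode when serving: an inference pipeline typically only reads its inputs, so the safety of a deepcopy buys nothing.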
which has reached end-of-life status to prepare 0.19 (#391):sparkles: Added support for Mlflow 2.0 (#390)
:sparkles: The modelify
command now accepts a --run-name
to specifiy the run name where the model is logged (#408)
:memo: Update incorrect documentation about model registry with local relative filepath (#400)
:bug: The modelify
command now creates a conda environment based on your environment python and kedro versions instead of hardcoded python=3.7
and kedro=0.16.5
(#405)
:bug: The modelify
command now uses correctly the --pip-requirements
argument instead of raising an error (#405)
:bug: The modelify
command now uses modelify
as a default run name (#408)
- Added `MlflowModelRegistryDataSet` in `kedro_mlflow.io.models` to enable fetching a mlflow model from the mlflow model registry by its name (#260)
- Use `__default__` as the run name if the pipeline is not specified in the `kedro run` command, to avoid empty names (#392)
- `kedro-mlflow` now uses the default configuration (ignoring `mlflow.yml`) if an active run already exists in the process where the pipeline is started, and uses this active run for logging. This enables using `kedro-mlflow` with an orchestrator which starts mlflow itself before running kedro (e.g. airflow, the `mlflow run` command, AzureML...) (#358)
- Added a `server.mlflow_registry_uri` key in `mlflow.yml` to set the mlflow registry uri (#260)
- Added a `server.request_header_provider` entry in `mlflow.yml` (#357)
- `MlflowArtifactDataSet.load()` now correctly loads the artifact when both the `artifact_path` and `run_id` arguments are specified. The previous fix in `0.11.4` did not work: when the file already exists locally, mlflow does not download it again, so the tests were incorrectly passing (#362)
- Removed the `reload_kedro_mlflow` line magic for notebooks, because kedro will deprecate the entrypoint in `0.18.3`. It is still possible to access the mlflow client associated to the configuration in a notebook with `context.mlflow.server._mlflow_client` (#349). This is not considered a breaking change since, according to a discussion with kedro's team, apparently no one uses it.
- `MlflowArtifactDataSet.load()` now correctly loads the artifact when both the `artifact_path` and `run_id` arguments are specified, instead of raising an error (#362)
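
As an illustration of the fixed `load()` behaviour, a sketch of a catalog entry that pins an artifact to a specific run via the two arguments named in the entries above. The entry name, wrapped dataset type, filepath and run id are placeholders, and the `data_set` key reflects the pre-0.12 argument name used by `MlflowArtifactDataSet`:

```yaml
# Sketch of a catalog.yml entry (names, filepath and run id are placeholders)
my_pinned_artifact:
  type: kedro_mlflow.io.artifacts.MlflowArtifactDataSet
  data_set:
    type: pandas.CSVDataSet
    filepath: data/08_reporting/my_metrics.csv
  artifact_path: reporting
  run_id: "<mlflow_run_id>"
```
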