Hassle-free ML Pipelines on Kubernetes
🌟 With Paradigm, your ML code is production-ready from the beginning
Paradigm is a lightweight, fast, and adaptable tool that packages your ML code into robust pipelines for seamless deployment on Kubernetes. There is no need to refactor: Paradigm interprets your Python notebooks and scripts as they are and primes them for scalable production. Paradigm aims to be your ally in ML deployment, combining speed, adaptability, and simplicity in one package.
Official website - paradigmai.net
```bash
$ paradigm launch --steps <your-project-notebooks-or-scripts>
$ paradigm deploy
```
You need a Kubernetes cluster and `kubectl` set up with access to that cluster. To run this locally, we recommend using minikube.
```bash
git clone https://github.com/ParadigmAI/paradigm.git
cd paradigm
chmod +x install.sh
./install.sh
paradigm --help
```
Your folder can contain one or more scripts or Python notebooks that you want to execute as steps in an ML pipeline.
Point your shell at minikube's Docker daemon so that images are built where the cluster can find them:

```bash
eval $(minikube docker-env)
```
From here on, we follow a basic example project to make the commands easier to explain. Please change the parameters according to your own project.
`p1`, `p2`, and `p3` represent the names of the Python scripts or notebooks you have (refer to `examples/basic`). You only have to create a `requirements.<file name>` file for each script or notebook that has additional dependencies; it becomes the `requirements.txt` for that step. We promise this is the only file you add before taking your ML code to production.

- 📁 project_root
  - 📄 p1.py
  - 📄 p2.ipynb
  - 📄 p3.py
  - 📄 requirements.p1
  - 📄 requirements.p3
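To make the layout concrete, here is a purely hypothetical sketch of what a step such as `p1.py` might contain; a step is just an ordinary script. Because this invented example imports `pandas`, its `requirements.p1` would contain the single line `pandas`:

```python
# p1.py - a hypothetical first step: load a dataset and write a cleaned copy.
# Since this step imports pandas, requirements.p1 would list: pandas
import pandas as pd

def main() -> None:
    # Placeholder paths; point these at your own data.
    df = pd.read_csv("data/raw.csv")
    df = df.dropna()  # minimal "cleaning" for the sake of the example
    df.to_csv("data/clean.csv", index=False)
    print(f"p1 finished: wrote {len(df)} rows")

if __name__ == "__main__":
    main()
```

Per the layout above, `p2.ipynb` needs no `requirements.p2` because it has no extra dependencies in this example.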
First, launch the steps:

```bash
paradigm launch --steps p1 p2 p3
```

Then deploy the pipeline:

```bash
paradigm deploy --steps p1 p2 --dependencies "p2:p1,p3:p2|p1" --deployment p3 --deployment_port 8000 --output workflow.yaml --name pipeline1
```
In the above command:
- `--steps` should specify all steps, except any step that should run as a service, e.g., an API endpoint.
- `--dependencies "p2:p1,p3:p2|p1"` defines the graph structure (DAG) in which the steps run. In this example, step `p2` depends on `p1`, and step `p3` depends on both `p2` and `p1` (the grammar is sketched below).
- `--deployment p3` defines a service that runs at the end of the pipeline; hence we don't list it under `--steps`.
- `--deployment_port` is set if the above service is exposed internally via a specific port.
- `--name` can be any name you want to give this particular pipeline.

(OPTIONAL) You can use the Argo UI to observe all pipelines and view logs. First make it accessible in your browser by running the command below.
```bash
kubectl -n paradigm port-forward deployment/argo-server 2746:2746
```

Then open http://localhost:2746 in your browser.
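To make the `--dependencies` grammar concrete, here is a small illustrative parser. This is not Paradigm's internal code, just our reading of the syntax shown above, where `,` separates entries, `:` separates a step from its parents, and `|` separates multiple parents:

```python
# Illustrative only (not Paradigm's internal code): one reading of the
# --dependencies grammar used in the deploy command above.
def parse_dependencies(spec: str) -> dict[str, list[str]]:
    dag: dict[str, list[str]] = {}
    for entry in spec.split(","):          # entries are comma-separated
        step, parents = entry.split(":")   # step before ':', parents after
        dag[step] = parents.split("|")     # '|' separates multiple parents
    return dag

print(parse_dependencies("p2:p1,p3:p2|p1"))
# -> {'p2': ['p1'], 'p3': ['p2', 'p1']}
```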
(OPTIONAL) To access the service deployed in the previous step (for example, an API endpoint), run the following command, since we are working inside minikube.
```bash
minikube service deploy-p3 -n paradigm
```
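Since `p3` is deployed as a service exposed on port 8000, its script presumably runs a long-lived server bound to that port. A minimal, purely hypothetical `p3.py` using only the standard library might look like the sketch below; a real project would more likely serve a model with a framework listed in `requirements.p3`:

```python
# p3.py - hypothetical deployment step: a tiny JSON endpoint on port 8000,
# matching --deployment_port 8000 in the deploy command above.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class PredictHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stub "prediction"; replace with real model inference.
        body = json.dumps({"prediction": 0.5}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the service is reachable inside the cluster.
    HTTPServer(("0.0.0.0", 8000), PredictHandler).serve_forever()
```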
(OPTIONAL) To delete the running service and deployment, use the following commands, where `<deployment_step>` is the name of the file that contains the deployment code.
```bash
kubectl delete deployment deploy-<deployment_step> -n paradigm
kubectl delete service deploy-<deployment_step> -n paradigm
```
You need a Kubernetes cluster and `kubectl` set up with access to that cluster. On AWS, we use Amazon Elastic Kubernetes Service (Amazon EKS) for this. Also make sure Docker is installed and running in your environment.
In a terminal with the above `kubectl` access, follow the steps below.
```bash
git clone https://github.com/ParadigmAI/paradigm.git
cd paradigm
chmod +x install-aws.sh
./install-aws.sh
paradigm --help
```
Your folder can contain one or more scripts/notebooks that you want to execute as steps in an ML pipeline.
From here on, we follow a basic example project to make the commands easier to explain. Please change the parameters according to your own project.
`p1`, `p2`, and `p3` represent the names of the Python scripts or notebooks you have (refer to `examples/basic`). You only have to create a `requirements.<file name>` file for each script or notebook that has additional dependencies; it becomes the `requirements.txt` for that step. We promise this is the only file you add before taking your ML code to production.

- 📁 project_root
  - 📄 p1.py
  - 📄 p2.ipynb
  - 📄 p3.py
  - 📄 requirements.p1
  - 📄 requirements.p3
First, launch the steps:

```bash
paradigm launch --steps p1 p2 p3 --region_name us-east-1
```

Then deploy the pipeline:

```bash
paradigm deploy --steps p1 p2 --dependencies "p2:p1,p3:p2|p1" --deployment p3 --deployment_port 8000 --output workflow.yaml --name pipe1 --region_name us-east-1
```
In the above command:
- `--steps` should specify all steps, except any step that should run as a service, e.g., an API endpoint.
- `--dependencies "p2:p1,p3:p2|p1"` defines the graph structure (DAG) in which the steps run. In this example, step `p2` depends on `p1`, and step `p3` depends on both `p2` and `p1` (the same grammar sketched in the local setup above).
- `--deployment p3` defines a service that runs at the end of the pipeline; hence we don't list it under `--steps`.
- `--deployment_port` is set if the above service is exposed internally via a specific port.
- `--name` can be any name you want to give this particular pipeline.
- `--region_name` is the AWS region that you want to use.

(OPTIONAL) You can use the Argo UI to observe all pipelines and view logs. First make it accessible in your browser by running the command below.
```bash
kubectl -n paradigm port-forward deployment/argo-server 2746:2746
```

Then open http://localhost:2746 in your browser.
(OPTIONAL) To delete the running service and deployment, use the following commands, where `<deployment_step>` is the name of the file that contains the deployment code.
```bash
kubectl delete deployment deploy-<deployment_step> -n paradigm
kubectl delete service deploy-<deployment_step> -n paradigm
```
| Section | Description |
|---|---|
| Documentation | Full documentation and tutorials |
| Basic Tutorial | The simplest example with Paradigm |
Suggestions for additional features and functionality are highly appreciated. General instructions on how to contribute are in CONTRIBUTING.

Please use this repository's issue tracker to report bugs or ask questions.

You can also join the DISCORD.