Deploying ML Models via MLOps UI
Log in to the MLOps web interface and follow these steps to deploy a model:
- From the left menu panel, click the “Deploy” option.
- Fill out the form (as shown in Figure 4.1 below).
- Model Name: Give the model a meaningful name.
- Select Model Category: Choose the type of model you are deploying from the dropdown.
- Purpose (optional): Briefly describe why you are deploying this model.
- Business Description (optional): Describe the business use of the model.
- Author: By default, the logged-in user’s ID is pre-filled. If the model’s author is someone else in the organization, provide that user’s ID instead.
- Select Cluster Id: Select the cluster to which the model should be deployed. The available options vary with your organization’s setup. Selecting “default_cluster” deploys the model on the same machine that hosts MLOps.
- Choose File: Browse to the model file on your machine and select it for upload. The supported file formats are:
- .xml for PMML-compliant models
- .onnx for ONNX-compliant models
- .zip for TensorFlow models (make sure all model artifacts are included and packaged as a zip file).
- Click the “Deploy” button.
- If the deployment succeeds, the model’s metadata is displayed along with a “success” confirmation.
Figure 4.1: Screen showing the model deployment page