Deployment options

H2O Hydrogen Torch offers three options to deploy a built model:

  1. H2O Hydrogen Torch UI
  2. Python Environment
  3. H2O MLOps

Below, each option is explained in turn.

H2O Hydrogen Torch UI

You can score new data with built models (experiments) and download the generated predictions through the H2O Hydrogen Torch UI. To score new data through the H2O Hydrogen Torch UI:

  1. In the H2O Hydrogen Torch navigation menu, click Predict data.
  2. In the Experiment box, select the built experiment you want to use to score new data.
  3. In the Dataset and Environment settings section, specify the settings that are displayed.

    Note

    The problem type of the selected experiment determines the settings that H2O Hydrogen Torch displays. To learn more, see Predict data (dataset and environment settings).

  4. Click Run predictions.

View running or completed prediction (UI)

To view a running or completed prediction through the H2O Hydrogen Torch UI:

  1. In the H2O Hydrogen Torch navigation menu, click View predictions.

  2. In the View predictions table, select the name of the prediction you want to view.

Note

  • To learn how to download a completed prediction, see Download Predictions.

  • To learn about the available tabs when viewing a running or completed prediction, see Prediction Tabs.

Python environment

H2O Hydrogen Torch lets you download a standalone Python Scoring Pipeline that allows you to score new data with a trained model in any external Python environment.

To download the standalone Python Scoring Pipeline of a built model:

  1. In the H2O Hydrogen Torch navigation menu, click View experiments.

  2. In the View experiments table, select the name of the experiment (model) whose standalone Python Scoring Pipeline you want to download.

    Note

    • Before selecting an experiment, make sure its status is finished: a standalone Python Scoring Pipeline is only available for experiments with a finished status.

  3. Click Download scoring.

With the above in mind, the process of using the Python Scoring Pipeline can be summarized as follows:

  1. Select a finished experiment
  2. Download the Python Scoring Pipeline
  3. Install the H2O Hydrogen Torch wheel package in a Python 3.7 environment of your choice (a quick interpreter check is sketched after this list)

    Note

    • The H2O Hydrogen Torch .whl package is shipped with the downloaded Python Scoring Pipeline.

      • To install the .whl package, run pip install *.whl from within the scoring pipeline folder.

    • A fresh environment is highly recommended and can be set up using pyenv or conda. For more information, see pyenv or Managing Conda environments.

    • The H2O Hydrogen Torch scoring pipeline supports Ubuntu 16.04+ with Python 3.7.

      • Ensure that python3.7-dev is installed on Ubuntu versions that support it. To install it, run: sudo apt-get install python3.7-dev

      • Ensure setuptools and pip are up to date; to upgrade them, run pip install --upgrade pip setuptools within the Python environment.

    • Refer to the README.txt file shipped with the downloaded Python Scoring Pipeline for more details.

  4. Use the provided sample code to score new data with your trained model weights

    Note

    The sample code comes with the downloaded Python Scoring Pipeline in a file named scoring_pipeline.py.
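
A quick interpreter check before installing the wheel can save a failed setup later. The snippet below is a minimal sketch based only on the Python 3.7 requirement noted above; consult the shipped README.txt for the authoritative setup steps:

    import sys

    # the scoring pipeline supports Python 3.7 only (see the note above);
    # fail fast if the interpreter does not match
    assert sys.version_info[:2] == (3, 7), (
        f"Python 3.7 required, found {sys.version_info.major}.{sys.version_info.minor}"
    )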

H2O MLOps

H2O Hydrogen Torch offers an MLOps Pipeline that can be used to deploy a trained model directly to H2O MLOps and score new data through the H2O MLOps REST API.

To download the MLOps Pipeline of a built model:

  1. In the H2O Hydrogen Torch navigation menu, click View experiments.

  2. In the View experiments table, select the name of the experiment (model) whose MLOps Pipeline you want to download.

    Note

    • Before selecting an experiment, make sure its status is finished: an MLOps Pipeline is only available for experiments with a finished status.

  3. Click Download MLOps.

With the above in mind, the process of scoring through the H2O MLOps REST API can be summarized as follows:

  1. Select a finished experiment
  2. Download the MLOps Pipeline
  3. Deploy the MLflow model to H2O MLOps

    Note

    The MLflow model (model.mlflow.zip) comes inside the downloaded MLOps Pipeline.

  4. Score new data (e.g., image data) by calling the API endpoint. For example (a batched variant is sketched after this list):

    import base64
    import json

    import cv2
    import requests

    # fill in the endpoint URL from MLOps
    URL = "endpoint_url"

    # to score an image, base64-encode it and send it as a string
    img = cv2.imread("image.jpg")
    input_data = base64.b64encode(cv2.imencode(".png", img)[1]).decode()

    # in the case of text, simply send the string instead
    input_data = "This is a test message!"

    # JSON data to be sent to the API
    data = {"fields": ["input"], "rows": [[input_data]]}

    # for the text span prediction problem type, pass question and context texts
    # input_data = ["Input question", "Input context"]
    # data = {"fields": ["question", "context"], "rows": [input_data]}

    # POST request
    r = requests.post(url=URL, json=data)
    r.raise_for_status()

    # extract the response data in JSON format
    ret = r.json()

    # read the output; the output is a dictionary
    ret = json.loads(ret["score"][0][0])
    

    Note

    The above code comes with the downloaded Python Scoring Pipeline in a file named api_pipeline.py.

  5. The JSON response received from an H2O MLOps REST API call follows the same format as the .pkl files discussed on the Download Predictions page.

  6. Monitor requests and predictions on H2O MLOps
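
Because the request payload accepts multiple rows, several inputs can be scored in a single call. The snippet below is a minimal sketch of a batched text request; "endpoint_url" is again a placeholder, and the response handling assumes (as the indexing in the sample above suggests) one "score" entry per input row:

    import json

    import requests

    # fill in the endpoint URL from MLOps (placeholder)
    URL = "endpoint_url"

    # several text inputs scored in one request; each row is one record
    texts = ["First test message.", "Second test message."]
    data = {"fields": ["input"], "rows": [[t] for t in texts]}

    # POST request
    r = requests.post(url=URL, json=data)
    r.raise_for_status()

    # assumption: one "score" entry per input row, each holding a JSON string,
    # mirroring ret["score"][0][0] in the single-input sample above
    outputs = [json.loads(row[0]) for row in r.json()["score"]]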

