<!--
# Copyright 2023, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above copyright
#    notice, this list of conditions and the following disclaimer in the
#    documentation and/or other materials provided with the distribution.
#  * Neither the name of NVIDIA CORPORATION nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
# EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
# PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
# CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
# EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
# PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
# PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
# OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
-->

# Deploying a JAX Model

This README showcases how to deploy a simple ResNet model on Triton Inference Server. While Triton doesn't yet have a dedicated JAX backend, JAX/Flax models can be deployed using the [Python Backend](https://github.com/triton-inference-server/python_backend). If you are new to Triton, it is recommended to watch this [getting started video](https://www.youtube.com/watch?v=NQDtfSi5QF4) and review [Part 1](https://github.com/triton-inference-server/tutorials/tree/main/Conceptual_Guide/Part_1-model_deployment) of the conceptual guide before proceeding. For demonstration purposes, we use a pre-trained ResNet-50 model provided by [flaxmodels](https://github.com/matthias-wright/flaxmodels).

## Step 1: Set Up Triton Inference Server

To use Triton, we need to build a model repository. The structure of the repository is as follows:
```
model_repository
|
+-- resnet50
    |
    +-- config.pbtxt
    +-- 1
        |
        +-- model.py
```
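Before moving on, it helps to see what goes inside these two files. The `config.pbtxt` tells Triton how to serve the model through the Python backend. Below is a minimal sketch: the tensor names `image` and `fc_out` match the client code in Step 2, but the shapes and batching settings are illustrative assumptions, not necessarily the exact shipped configuration.

```
name: "resnet50"
backend: "python"

# No server-side batching; the client sends a fully-shaped (1, 224, 224, 3) tensor.
max_batch_size: 0
input [
  {
    name: "image"
    data_type: TYPE_FP32
    dims: [ 1, 224, 224, 3 ]
  }
]
output [
  {
    name: "fc_out"
    data_type: TYPE_FP32
    dims: [ 1, 1000 ]
  }
]
```

The `model.py` implements the Python backend's `TritonPythonModel` interface. The following is a minimal sketch of what it could look like using the flaxmodels API; the actual file in the pre-built repository may differ in its details.

```
import jax
import jax.numpy as jnp
import numpy as np
import flaxmodels as fm
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # Build the pre-trained ResNet-50 and initialize its parameters once,
        # at model-load time, so each request only pays for a forward pass.
        self.model = fm.ResNet50(output="logits", pretrained="imagenet")
        self.params = self.model.init(
            jax.random.PRNGKey(0), jnp.zeros((1, 224, 224, 3), jnp.float32)
        )

    def execute(self, requests):
        responses = []
        for request in requests:
            # "image" and "fc_out" match the names declared in config.pbtxt.
            image = pb_utils.get_input_tensor_by_name(request, "image").as_numpy()
            logits = self.model.apply(self.params, jnp.asarray(image), train=False)
            out_tensor = pb_utils.Tensor("fc_out", np.asarray(logits))
            responses.append(pb_utils.InferenceResponse(output_tensors=[out_tensor]))
        return responses
```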
For this example, we have pre-built the model repository. Next, we install the required dependencies and launch the Triton Inference Server.

```
# Replace the yy.mm in the image name with the release year and month
# of the Triton version needed, e.g., 22.12
docker run --gpus=all -it --shm-size=256m --rm -p8000:8000 -p8001:8001 -p8002:8002 -v$(pwd):/workspace/ -v$(pwd)/model_repository:/models nvcr.io/nvidia/tritonserver:<yy.mm>-py3 bash

pip install --upgrade pip
pip install --upgrade "jax[cuda]" -f https://storage.googleapis.com/jax-releases/jax_cuda_releases.html
pip install --upgrade git+https://github.com/matthias-wright/flaxmodels.git

# Launch the server, pointing it at the mounted model repository
tritonserver --model-repository=/models
```
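Optionally, before writing the client, you can verify from another terminal that the server is up: Triton's HTTP health endpoint returns `200 OK` once the server and model are ready.

```
curl -v localhost:8000/v2/health/ready
```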

## Step 2: Using a Triton Client to Query the Server

Let's break down the client application. First, we set up a connection with the Triton Inference Server.
```
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")
```
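The rest of the client assumes `image` is already a preprocessed FP32 NumPy array in NHWC layout. A hypothetical preprocessing step is sketched below; the filename is illustrative, and flaxmodels expects pixel values scaled to [0, 1].

```
import numpy as np
from PIL import Image

# Resize to the model's 224x224 input resolution, scale pixels to [0, 1],
# and add a batch dimension -> shape (1, 224, 224, 3), dtype float32.
image = Image.open("img1.jpg").resize((224, 224))
image = np.expand_dims(np.asarray(image, dtype=np.float32) / 255.0, axis=0)
```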
Then we set up the input and output tensors.
```
# Set inputs
input_tensors = [
    httpclient.InferInput("image", image.shape, datatype="FP32")
]
input_tensors[0].set_data_from_numpy(image)

# Set outputs
outputs = [
    httpclient.InferRequestedOutput("fc_out")
]
```
Lastly, we send an inference request to the Triton Inference Server and read back the output.

```
# Query
query_response = client.infer(model_name="resnet50",
                              inputs=input_tensors,
                              outputs=outputs)

# Output
out = query_response.as_numpy("fc_out")
```
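Since the flaxmodels head is pre-trained on ImageNet, `out` holds one logit per ImageNet class, so the predicted class index is simply the argmax (mapping the index to a human-readable label would require an ImageNet label file, which this example does not cover):

```
# Highest-scoring class index for each image in the batch.
print("Predicted class:", np.argmax(out, axis=-1))
```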