environment_variables: {}
model_name: My First Truss
requirements:
  - torch==2.0.1
  - transformers==4.30.0
resources:
  accelerator: null
  cpu: "1"
  memory: 2Gi
  use_gpu: false
secrets: {}
system_packages: []

For this tutorial, we want to package a basic text classification model with this Truss to deploy it on our model server. The model comes from the transformers package and thus requires two Python packages to run: Transformers and PyTorch.
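As a hedged sketch of the kind of model being packaged, the snippet below loads a transformers text classification pipeline. The task name and function name are illustrative assumptions, not something this step of the tutorial defines:

```python
def load_classifier():
    # Imported lazily so this sketch only needs transformers and torch
    # at call time -- the two packages we are adding to config.yaml.
    from transformers import pipeline
    # "text-classification" uses the pipeline's default model; calling
    # this downloads model weights on first use.
    return pipeline("text-classification")
```

Calling `load_classifier()` is what later needs Transformers and PyTorch installed on the model server, which is why they go into config.yaml now.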

We’ll use config.yaml to add the Python packages that the text classification model requires to the model server.

On the right side, you’ll see what your Truss should look like after each step!

Open config.yaml in a text editor

One of the two essential files in a Truss is config.yaml, which configures the model serving environment. For a complete list of the config options, see the config reference.
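To make the defaults above concrete, here is the resources section from the snippet on the right, annotated with what each value means (the comments are explanatory additions, not part of the generated file):

```yaml
resources:
  accelerator: null   # no GPU type requested
  cpu: "1"            # one CPU core for the model server
  memory: 2Gi         # 2 GiB of RAM
  use_gpu: false      # serve on CPU only
```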

Note that the model_name parameter is already set to My First Truss or whatever name you picked when you ran truss init. Don’t change this name as it must remain consistent with the remote host.

Add Python requirements

Python requirements are listed exactly as they would appear in a requirements.txt file. ML moves fast; always pin your requirement versions to make sure you’re getting compatible packages.

Update config.yaml with the required packages:

Check the “Diff” tab to see exactly which lines of code change.

config.yaml
requirements:
  - torch==2.0.1
  - transformers==4.30.0

Check for a patch

After you save your changes to config.yaml, you should see two things happen:

  1. The terminal tab running truss watch should show that a patch was applied to your model server.
  2. The model server logs on your Baseten account should show log entries from the packages being installed in your model server.

Now you’re ready to add the text classification model.