This is the first of three steps dedicated to learning the developer loop that Truss enables. We hope that once you try it, you’ll agree with us that it’s the most productive way to deploy ML models.

If this is your first time running truss push, you’ll need to configure a remote host for your model server.

Truss is maintained by Baseten, which provides infrastructure for running ML models in production. We’ll use Baseten as the remote host for your model server.

To set up the Baseten remote, you’ll need a Baseten API key.

If you don’t have a Baseten account, no worries: just sign up and you’ll be issued plenty of free credits to get you started.

Push your Truss

To spin up a model server from your Truss, run:

truss push

Paste your Baseten API key if prompted.

Open up your model dashboard on Baseten to monitor your deployment and view model server logs.
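Once the deployment is live, you can invoke the model over HTTP. Here is a minimal sketch of building that request; the model ID and API key are placeholders, and the URL pattern is an assumption, so copy the exact invocation snippet from your model page on Baseten.

```python
import json
from urllib import request

MODEL_ID = "abc123xy"    # hypothetical: find your model's ID on the dashboard
API_KEY = "YOUR_API_KEY"  # your Baseten API key

# Assumed endpoint pattern -- verify against your model page on Baseten.
req = request.Request(
    f"https://model-{MODEL_ID}.api.baseten.co/production/predict",
    data=json.dumps({"prompt": "hello"}).encode(),
    headers={
        "Authorization": f"Api-Key {API_KEY}",
        "Content-Type": "application/json",
    },
)

# Send the request once MODEL_ID and API_KEY are real:
# print(json.load(request.urlopen(req)))
```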

class Model:
    def __init__(self, **kwargs):
        self._model = None

    def load(self):
        # Load model weights here; this runs once when the server starts.
        pass

    def predict(self, model_input):
        # Echo the input back -- replace with your model's inference logic.
        return model_input
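To see the lifecycle these three methods follow, you can exercise the class locally. A minimal sketch: the server constructs the model once, calls `load()` at startup, then calls `predict()` for each request.

```python
class Model:
    def __init__(self, **kwargs):
        self._model = None

    def load(self):
        # One-time setup: load weights here.
        pass

    def predict(self, model_input):
        # Echo model: returns its input unchanged.
        return model_input


# Mirror the server lifecycle: construct, load, then handle a request.
model = Model()
model.load()
print(model.predict({"prompt": "hello"}))  # prints {'prompt': 'hello'}
```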