Install Backend.AI Storage Proxy

Refer to Prepare required Python versions and virtual environments to set up Python and a virtual environment for the service.

Install the latest version of Backend.AI Storage Proxy for the current Python version:

$ cd "${HOME}/storage-proxy"
$ # Activate a virtual environment if needed.
$ pip install -U backend.ai-storage-proxy

If you want to install a specific version:

$ pip install -U backend.ai-storage-proxy==${BACKEND_PKG_VERSION}

Local configuration

Backend.AI Storage Proxy uses a TOML file (storage-proxy.toml) for its local service configuration. Refer to the storage-proxy.toml sample file for a detailed description of each section and item. A configuration example would be:

[etcd]
namespace = "local"
addr = { host = "bai-m1", port = 8120 }
user = ""
password = ""

[storage-proxy]
node-id = "i-bai-m1"
num-proc = 2
pid-file = "/home/bai/storage-proxy/storage-proxy.pid"
event-loop = "uvloop"
scandir-limit = 1000
max-upload-size = "100g"

# Used to generate JWT tokens for download/upload sessions
secret = "secure-token-for-users-download-upload-sessions"
# The download/upload session tokens are valid for:
session-expire = "1d"

user = 1100
group = 1100

[api.client]
# Client-facing API
service-addr = { host = "", port = 6021 }
ssl-enabled = false

[api.manager]
# Manager-facing API
service-addr = { host = "", port = 6022 }
ssl-enabled = false

# Used to authenticate managers
secret = "secure-token-to-authenticate-manager-request"

[debug]
enabled = false
asyncio = false
enhanced-aiomonitor-task-info = true

[logging]
# Set the global logging level.
level = "INFO"

# Multi-choice of: "console", "logstash", "file"
# For each choice, there must be a "logging.<driver>" section
# in this config file as exemplified below.
drivers = ["console", "file"]

[logging.pkg-ns]
"" = "WARNING"
"aiotools" = "INFO"
"aiohttp" = "INFO"
"ai.backend" = "INFO"

[logging.console]
# If set true, use ANSI colors if the console is a terminal.
# If set false, always disable the colored output in console logs.
colored = true

# One of: "simple", "verbose"
format = "simple"

[logging.file]
path = "./logs"
filename = "storage-proxy.log"
backup-count = 10
rotation-size = "10M"


[volume.local]
backend = "vfs"
path = "/vfroot/local"

# If there are NFS volumes
# [volume.nfs]
# backend = "vfs"
# path = "/vfroot/nfs"

Save the contents to ${HOME}/.config/backend.ai/storage-proxy.toml; Backend.AI will automatically recognize the location. Adjust each field to conform to your system.
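Since the service parses this file at startup, a TOML syntax error will prevent it from launching. With Python 3.11 or later, the stdlib tomllib module offers a quick sanity check before starting the service. The fragment below is illustrative; its section and key names follow the sample layout above:

```python
import tomllib  # stdlib TOML parser, Python 3.11+

# An illustrative fragment shaped like storage-proxy.toml; section and key
# names mirror the sample configuration above.
sample = """
[etcd]
namespace = "local"
addr = { host = "bai-m1", port = 8120 }

[storage-proxy]
node-id = "i-bai-m1"
max-upload-size = "100g"

[volume.local]
backend = "vfs"
path = "/vfroot/local"
"""

cfg = tomllib.loads(sample)
assert cfg["etcd"]["addr"]["port"] == 8120
assert cfg["volume"]["local"]["backend"] == "vfs"

# To validate the real file before starting the service:
# with open("storage-proxy.toml", "rb") as f:
#     tomllib.load(f)
```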
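The secret and session-expire settings govern the signed tokens issued for download/upload sessions: any party holding the secret can mint tokens, and tokens stop validating after the expiry window. The sketch below illustrates that mechanism generically with an HMAC-signed, expiring token; it is not Backend.AI's actual token format, and the helper names are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

# Hypothetical helpers -- NOT Backend.AI's actual token implementation.
def sign_token(payload, secret, expire_secs):
    """Attach an expiry timestamp and an HMAC-SHA256 signature to a payload."""
    body = dict(payload, exp=int(time.time()) + expire_secs)
    data = base64.urlsafe_b64encode(json.dumps(body).encode()).decode()
    sig = hmac.new(secret.encode(), data.encode(), hashlib.sha256).hexdigest()
    return f"{data}.{sig}"

def verify_token(token, secret):
    """Return the payload if the signature is valid and unexpired, else None."""
    data, _, sig = token.rpartition(".")
    expected = hmac.new(secret.encode(), data.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None  # tampered, or signed with a different secret
    body = json.loads(base64.urlsafe_b64decode(data))
    if body["exp"] < time.time():
        return None  # past the configured session-expire window
    return body

# A session-expire of "1d" corresponds to 86400 seconds.
token = sign_token({"op": "download", "file": "data.csv"}, "secure-token", 86400)
assert verify_token(token, "secure-token")["op"] == "download"
assert verify_token(token, "wrong-secret") is None
```

This is why the secret must stay private and differ from the manager-authentication secret: anyone who learns it can forge valid session tokens until they expire.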

Run Backend.AI Storage Proxy service

You can run the service:

$ cd "${HOME}/storage-proxy"
$ python -m ai.backend.storage.server

Press Ctrl-C to stop the service.

Register systemd service

The service can be registered as a systemd daemon. This is entirely optional, but recommended so that the service starts automatically after the host machine reboots.

First, create a runner script at ${HOME}/bin/run-storage-proxy.sh:

#!/bin/bash
set -e

if [ -z "$HOME" ]; then
   export HOME="/home/bai"
fi

# -- If you have installed using pyenv --
if [ -z "$PYENV_ROOT" ]; then
   export PYENV_ROOT="$HOME/.pyenv"
   export PATH="$PYENV_ROOT/bin:$PATH"
fi
eval "$(pyenv init --path)"
eval "$(pyenv virtualenv-init -)"

if [ "$#" -eq 0 ]; then
   exec python -m ai.backend.storage.server
else
   exec "$@"
fi

Make the script executable:

$ chmod +x "${HOME}/bin/run-storage-proxy.sh"

Then, create a systemd service file at /etc/systemd/system/backendai-storage-proxy.service:

[Unit]
Description=Backend.AI Storage Proxy
Requires=network.target
After=network.target

[Service]
Type=simple
User=bai
Group=bai
ExecStart=/home/bai/bin/run-storage-proxy.sh
PIDFile=/home/bai/storage-proxy/storage-proxy.pid
WorkingDirectory=/home/bai/storage-proxy
TimeoutStopSec=5
KillMode=process
KillSignal=SIGTERM
PrivateTmp=false
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

Adjust the paths and the user/group to match your installation.
Finally, enable and start the service:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now backendai-storage-proxy

$ # To check the service status
$ sudo systemctl status backendai-storage-proxy
$ # To restart the service
$ sudo systemctl restart backendai-storage-proxy
$ # To stop the service
$ sudo systemctl stop backendai-storage-proxy
$ # To check the service log and follow
$ sudo journalctl --output cat -u backendai-storage-proxy -f