Integration guide

NautilusTrader

NautilusTrader is an open-source, high-performance algorithmic trading platform. You write a strategy once, backtest it on historical data, and deploy it live with no code changes.

Every trading strategy has hyperparameters — numbers you choose before running it, like a lookback window, an ATR multiplier, or a position-size limit. Different values can mean the difference between a profitable strategy and a losing one. Hyperparameter optimization (HPO) is the process of systematically searching for the best combination of these values instead of guessing or manually tweaking them one at a time.

HyperOptimizer automates that search for you. You build a Docker image of your Nautilus backtest, tell us which parameters to tune and their ranges in our web dashboard, and we do the rest: we run your container hundreds of times with different parameter values (chosen by algorithms like Bayesian optimization), collect the metrics each run produces, and surface the best results in the dashboard. You don't run the trials on your machine and you don't manage any infrastructure.

How it works

  • You build a Docker image that runs your Nautilus backtest script (e.g. uv run backtest.py). In our dashboard you configure which parameters to optimize and their ranges.
  • We run your container for each trial and append command-line arguments (e.g. --hpo-bar-size=1 --hpo-dummy-float=0.5).
  • Your script parses these arguments, runs the backtest, and prints metrics to stdout in our required format.
  • Our system parses every log line; any line that matches the metrics format is recorded for that trial and shown in the dashboard.
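Concretely, the flags a trial receives are rendered from the parameter names and values you configure. The helper below is an illustration of the argument format only, not our actual launcher code:

```python
def hpo_args(params):
    """Render a trial's parameter dict as --hpo-<name>=<value> flags."""
    return [f"--hpo-{name}={value}" for name, value in params.items()]

# These flags are appended to your container's command, e.g. uv run backtest.py
print(hpo_args({"bar-size": 1, "dummy-float": 0.5}))
# → ['--hpo-bar-size=1', '--hpo-dummy-float=0.5']
```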

1. Receive HPO parameters from the command line

Use argparse (or any CLI parser) to read the hyperparameters we inject. You choose the parameter names when you configure the experiment in the dashboard; for each one we pass --hpo-<name>=<value>, so declare matching arguments in your script.

import argparse

def _parse_args():
    p = argparse.ArgumentParser(description="HPO: Nautilus backtest with CLI hyperparameters.")
    # argparse exposes --hpo-bar-size as args.hpo_bar_size (dashes become underscores)
    p.add_argument("--hpo-bar-size", type=str, default="1")
    p.add_argument("--hpo-dummy-float", type=float, default=0.5)
    return p.parse_args()
 
# In main():
args = _parse_args()
hpo_params = {
    "hpo_bar_size": args.hpo_bar_size,
    "hpo_dummy_float": args.hpo_dummy_float,
}
# Pass hpo_params into your strategy or config

Then use hpo_params inside your Nautilus strategy or engine config (e.g. bar size, risk limits, indicator periods).
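For example, you might map the parsed values onto your strategy's configuration. The config class below is a hypothetical stand-in, not part of the NautilusTrader API; in your project you would populate your own strategy config the same way:

```python
from dataclasses import dataclass

# Hypothetical stand-in for your Nautilus strategy config.
@dataclass
class DemoStrategyConfig:
    bar_size: str
    risk_multiplier: float

# hpo_params as built in the snippet above
hpo_params = {"hpo_bar_size": "5", "hpo_dummy_float": 1.5}

config = DemoStrategyConfig(
    bar_size=hpo_params["hpo_bar_size"],
    risk_multiplier=hpo_params["hpo_dummy_float"],
)
print(config)  # DemoStrategyConfig(bar_size='5', risk_multiplier=1.5)
```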

2. In your code: one backtest per trial

When we run a trial, we start your container with one set of --hpo-* arguments. Your script should run a single backtest as usual: add venue, instruments, data, and strategy; call engine.run(); then get the result with engine.get_result() and turn it into a metrics dictionary (e.g. total PnL, Sharpe, drawdown, number of trades).

3. Emit metrics to stdout so we can collect them

Our system collects trial results by scanning container stdout for lines that look like:

hpo.metrics.<key>=<json.dumps(value)>

Each metric must be on its own line, with the exact prefix hpo.metrics. so our collector can parse it. Values must be JSON-serializable (numbers, strings, lists, dicts). Use json.dumps(value, default=str) for non-JSON types (e.g. decimals or timestamps).
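For instance, a float serializes as a bare number, while a Decimal (not natively JSON-serializable) falls through to default=str and comes out as a quoted string:

```python
import json
from decimal import Decimal

def metric_line(key, value):
    # default=str converts non-JSON types (Decimal, datetime, ...) to strings
    return f"hpo.metrics.{key}={json.dumps(value, default=str)}"

print(metric_line("total_pnl", 1234.5))              # hpo.metrics.total_pnl=1234.5
print(metric_line("pnl_exact", Decimal("1234.50")))  # hpo.metrics.pnl_exact="1234.50"
```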

import json
 
backtest_result = engine.get_result()
metrics = {
    "total_pnl": float(backtest_result.stats_pnls.get("PnL", 0)),
    "total_orders": backtest_result.total_orders,
    "elapsed_time": backtest_result.elapsed_time,
    # add any other keys you want to optimize on
}
 
for key, value in metrics.items():
    print(f"hpo.metrics.{key}={json.dumps(value, default=str)}")

That's it. Once your script prints lines in this form, we'll associate them with the trial and surface them in the dashboard.
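If you want to sanity-check your output locally before pushing an image, you can mimic the matching rule in a few lines. This is a sketch of the line format described above, not our actual collector:

```python
import json
import re

# Matches lines of the form hpo.metrics.<key>=<json-value>
METRIC_RE = re.compile(r"^hpo\.metrics\.([\w.-]+)=(.*)$")

def parse_metrics(stdout_lines):
    """Return every matching line as a key -> decoded-value dict."""
    found = {}
    for line in stdout_lines:
        m = METRIC_RE.match(line)
        if m:
            found[m.group(1)] = json.loads(m.group(2))
    return found

print(parse_metrics([
    "starting backtest...",
    "hpo.metrics.total_pnl=1234.5",
    "hpo.metrics.total_orders=42",
]))
# → {'total_pnl': 1234.5, 'total_orders': 42}
```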

Summary

  1. Build a Docker image that runs your Nautilus backtest. In our dashboard: create an experiment and configure which parameters to optimize (names and ranges); we run the trials.
  2. In your code: parse HPO hyperparameters from CLI (e.g. --hpo-*) and use them in your Nautilus strategy or config; run the backtest and compute a metrics dict from engine.get_result().
  3. Print each metric as hpo.metrics.<key>={json.dumps(value)} to stdout so we can record the trial.

For a minimal runnable example, see the NautilusTrader demo in our repo. Need another framework or language? We'll add more guides soon—or get in touch and we can document your stack.