Linux & Docker Installation

This guide describes, step by step, how to install and configure an NVIDIA Jetson Orin Nano for use with the AI-Detector (CowCatcherAI). Every step is written out in full so that nothing is skipped.

1. Hardware Installation & Flashing the Jetson

Option 1: Booting from SD card or SSD (Basic Installation)

  1. Download the correct JetPack image via the NVIDIA website or AI-lab.
  2. Download and install BalenaEtcher on your PC.
  3. Insert the SD card into your computer.
  4. Flash the downloaded image to the SD card using BalenaEtcher.
  5. Insert the flashed SD card into the Jetson Orin Nano.
  6. Start the Jetson and complete the initial configuration.

IMPORTANT: Make a note of the username and password you choose during setup; you will need them to log in to the Jetson later.

Watch the explainer video

Option 2: Transferring boot from SD to SSD

  1. Connect a compatible SSD to the Jetson.
  2. Follow the steps in the video to make the Jetson boot from the SSD instead of the SD card; afterwards the SD card can be removed.

Watch the explainer video
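
After switching over, you can confirm that the Jetson really boots from the SSD by checking which device the root filesystem is mounted from (a quick sanity check; device names such as nvme0n1p1 depend on your hardware):

```shell
# Show the device backing the root filesystem (/).
# After a successful SSD boot this should be an NVMe/SSD partition
# (e.g. /dev/nvme0n1p1), not the SD card (/dev/mmcblk*).
findmnt -n -o SOURCE /
```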

Option 3: Installation via NVIDIA SDK Manager

  1. Install NVIDIA SDK Manager on a Linux host PC.
  2. Connect the Jetson via USB in recovery mode.
  3. Select Jetson Orin Nano and the desired JetPack version.
  4. Follow the installation procedure completely via SDK Manager.
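
Whichever option you used, you can verify afterwards which JetPack/L4T release actually got installed (a sanity check; the `nvidia-jetpack` meta-package is present on standard JetPack installs, output may vary per release):

```shell
# Print the L4T (Linux for Tegra) release the board is running
cat /etc/nv_tegra_release

# Show the installed JetPack meta-package version (if installed via apt)
apt-cache show nvidia-jetpack | grep -E '^(Package|Version)'
```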

2. Installation of Docker & NVIDIA Container Toolkit

Open a terminal on the Jetson and execute the following commands:

sudo apt update
sudo apt install -y docker.io
sudo systemctl enable --now docker
sudo apt install -y nvidia-container-runtime nvidia-container-toolkit
sudo nvidia-ctk runtime configure --runtime=docker
sudo systemctl restart docker
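
Before pulling the container it is worth checking that Docker is running and that the NVIDIA runtime was actually registered (the exact output formatting can differ per Docker version):

```shell
# The Docker service should report "active"
sudo systemctl is-active docker

# The runtime list should include "nvidia" after nvidia-ctk configured it
sudo docker info --format '{{json .Runtimes}}'
```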

3. Download AI-Detector Container

Download the correct container for JetPack 6:

sudo docker pull ghcr.io/eschouten/ai-detector:main-jetpack6
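
You can confirm that the image landed locally before continuing (this only lists the image, nothing is started yet):

```shell
# List the downloaded AI-Detector image with its tag and size
sudo docker images ghcr.io/eschouten/ai-detector
```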

4. Creating Configuration Files

In a working directory of your choice (you will start the container from here), create the two empty configuration files:

touch compose.yml
touch config.json

File 1: compose.yml

Open the file with `nano compose.yml`. Paste the following content:

services:
  aidetector:
    image: "ghcr.io/eschouten/ai-detector:main-jetpack6"
    runtime: nvidia
    ipc: host
    ulimits:
      memlock: -1
      stack: 67108864
    volumes:
      - ./config.json:/app/config.json
      - ./sprong24.mp4:/app/sprong24.mp4
      - ./detections/:/app/detections
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
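
The compose file bind-mounts `./detections` and `./sprong24.mp4` from the host. Preparing those paths up front avoids a common pitfall: Docker auto-creates missing bind-mount sources as root-owned directories, so a missing video file would silently become an empty directory inside the container. A small prep sketch:

```shell
# Create the detections output folder up front, so Docker does not
# auto-create it as a root-owned directory on first start
mkdir -p detections

# Warn if the test video mounted by compose.yml is missing
[ -f sprong24.mp4 ] || echo "warning: sprong24.mp4 not found in this directory"
```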

File 2: config.json

Open the file with `nano config.json`. Paste the following content and fill in your own RTSP details and Telegram bot token/chat ID. This is the minimal config file; more settings can be added if you like:

{
  "detectors": [
    {
      "detection": {
        "source": [
          "rtsp://admin:YourPassword123@192.168.100.22:554/h264Preview_01_sub"
        ]
      },
      "yolo": {
        "model": "https://github.com/CowCatcherAI/CowCatcherAI/releases/download/model-V16/cowcatcherV15.pt",
        "confidence": 0.84,
        "frames_min": 4,
        "timeout": 6,
        "time_max": 50
      },
      "exporters": {
        "telegram": [
          {
            "token": "<your_bot_token>",
            "chat": "<your_chat_id>",
            "alert_every": 5,
            "confidence": 0.87
          }
        ],
        "disk": {
          "directory": "mounts"
        }
      }
    }
  ]
}
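
A missing comma or quote in config.json will prevent the detector from starting, so it pays to lint the file first. Python 3 ships with JetPack, so its built-in `json.tool` module works as a quick JSON checker (a convenience check, not part of the AI-Detector itself):

```shell
# Prints a confirmation if the file parses cleanly; a syntax error
# reports the offending line and column instead
python3 -m json.tool config.json > /dev/null \
  && echo "config.json is valid JSON" \
  || echo "config.json has a syntax error (or is missing)"
```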

5. Starting and Managing the Container

Start the container:

sudo docker compose up

Stop the container:

sudo docker compose down

After changes to config.json or compose.yml:

  1. sudo docker compose down
  2. Save your modifications
  3. sudo docker compose up

The detection results are saved in the folder: ./detections
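
For day-to-day use you will usually want the container running in the background with a way to watch its output. Docker Compose supports this with standard flags (not specific to this project):

```shell
# Start the container in the background (detached)
sudo docker compose up -d

# Follow the live log output; Ctrl-C stops following, not the container
sudo docker compose logs -f

# Show the container's current status
sudo docker compose ps
```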