Frigate – Manage IP Camera with a Raspberry Pi

As a long-time user of high-quality video surveillance systems like Synology and Ubiquiti, I’ve grown accustomed to deploying and relying on their robust, feature-rich ecosystems. This time, however, I was looking for something more affordable, focused solely on video management, without the overhead of NAS capabilities or other advanced features. I needed a lightweight solution that could run on a Raspberry Pi, and on Guillaume’s recommendation I turned to Frigate. This open-source tool offers live video stream management, recording, and optional AI-based video analysis. It looks promising and well built.

This post is, as usual, a log of my journey testing the setup in real time. It’s also an excuse to finally experiment with a Raspberry Pi 5, which I’ve paired with an NVMe drive for video storage to avoid the SD card’s limited endurance under heavy I/O workloads. I’ll admit it’s slightly ironic to need this much power for tasks I used to run smoothly on Synology boxes over a decade ago. Even funnier is that Frigate may require a neural accelerator for its AI features, which seems excessive when lightweight variants of modern models like YOLO run on microcontrollers with far less processing power. That said, I don’t plan to use AI in this setup (at least not yet), but I’ve still opted for a dual PCIe HAT to keep the door open for testing a Coral accelerator in the future.

Prerequisites / Bill of Materials

To get started, the first thing you obviously need is a Raspberry Pi 5. I opted for the 8GB model. Frigate will be running in Docker on the Pi, and while I initially considered Ubuntu as the OS, I quickly ran into the usual frustrations of headless setup—getting a Raspberry Pi to boot properly without a keyboard or display is still unnecessarily complex, and the procedure varies with each release. After wasting some time, I switched to Raspberry Pi Imager, which let me deploy Raspbian (Raspberry Pi OS) and configure my user account and SSH access in just a few clicks.

I housed the Pi in a fanless enclosure, since the system is meant to be sealed in a box and forgotten for a long time—I didn’t want a noisy, wear-prone fan running continuously. On top of that, I installed a PCIe HAT to add NVMe storage. One of the great new features of the Pi 5 is the native PCIe interface via a ribbon cable, making it straightforward to set up external storage. I was pleasantly surprised (and a bit lucky) to find that the PCIe HAT I picked came with a GPIO extender, which lifts the HAT high enough to clear the heatsink case—allowing me to mount it securely despite the bulky passive cooler.

My only concern is thermal management: I hope the system won’t overheat, especially the NVMe drive. Worst case, I have a backup plan—an extension ribbon cable that would let me place the NVMe off to the side. That said, the cable is quite short, so there’s not much flexibility. We’ll see how it holds up in real-world use.
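Once the system is up, it’s easy to keep an eye on both temperatures from the command line. A minimal check, assuming the nvme-cli package is installed (vcgencmd ships with Raspberry Pi OS):

root$ vcgencmd measure_temp                      # SoC temperature
root$ apt-get install nvme-cli                   # if not already present
root$ nvme smart-log /dev/nvme0 | grep -i temp   # NVMe drive temperature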

The first step, once connected to Raspbian, is to install Docker.

The Docker website only provides instructions for 32-bit Raspbian, which isn’t ideal if you’re running the 64-bit version—as you probably should on a Pi 5. After some trial and error, here’s the method that worked reliably for installing Docker on 64-bit Raspbian:

root$ apt-get install ca-certificates curl
root$ install -m 0755 -d /etc/apt/keyrings

root$ curl -fsSL https://download.docker.com/linux/debian/gpg -o /etc/apt/keyrings/docker.asc

root$ echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/debian \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null

root$ apt-get update
root$ apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

root$ docker --version
Docker version 28.1.1, build 4eba377
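As a quick sanity check that the daemon can actually pull and run containers:

root$ docker run --rm hello-world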

Now it’s just a matter of configuring the NVMe drive—by default, mine shows up as /dev/nvme0n1.

root$ mkfs.ext4 /dev/nvme0n1
root$ uuid=$(blkid -s UUID -o value /dev/nvme0n1)   # bare UUID, without the UUID="..." wrapper
root$ echo "UUID=$uuid /frigate ext4 defaults 0 2" >> /etc/fstab
root$ mkdir /frigate
root$ systemctl daemon-reload   # let systemd pick up the fstab change
root$ mount /frigate
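A quick verification that the drive is formatted, declared, and mounted as expected:

root$ lsblk -f /dev/nvme0n1   # filesystem type and UUID
root$ findmnt /frigate        # mount point is active
root$ df -h /frigate          # available space on the new volume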

Let’s deploy Frigate

Frigate comes with a docker-compose file, so it’s easy to get started; you just need to adjust the configuration a bit. Let’s create the docker-compose.yml file and the configuration directories in /frigate (on the NVMe drive).
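Something like this, with directory names matching the volume mappings in the compose file below:

root$ mkdir -p /frigate/config /frigate/storage
root$ cd /frigate
root$ nano docker-compose.yml   # or your editor of choice; paste the file below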

I modified the docker-compose file slightly to increase the shared memory to 256 MB. The right size depends on the number of cameras and their resolution; count roughly 64 MB per 5MP camera. With my 2 cameras, 128 MB would suffice, so 256 MB leaves comfortable headroom.

version: "3.9"
services:
  frigate:
    container_name: frigate
    restart: unless-stopped
    stop_grace_period: 30s
    image: ghcr.io/blakeblackshear/frigate:stable
    volumes:
      - ./config:/config
      - ./storage:/media/frigate
      - type: tmpfs # Optional: 1GB of memory, reduces SSD/SD Card wear
        target: /tmp/cache
        tmpfs:
          size: 1000000000
      - type: tmpfs # Shared memory, about 64M per camera, 256 MB here
        target: /dev/shm
        tmpfs:
          size: 256000000
    ports:
      - "443:8971"
      - "8554:8554" # RTSP feeds

Then we can run docker compose up -d to start the service. The web UI is accessible on port 443; at this point the system is fully open. The configuration can be edited online in the Settings menu, and that is also where you enable authentication.
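Concretely, from the /frigate directory:

root$ cd /frigate
root$ docker compose up -d
root$ docker logs -f frigate   # watch the startup logs; Ctrl+C to detach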

Before anything else, set an admin password: in the UI, go to Settings / Settings / Users tab and click Update Password on the admin row. Then you can move on to the rest of the configuration.

To enable authentication, edit the config/config.yaml file (via the UI or the file system) and add the following lines:

mqtt:
  enabled: false
...
auth:
  enabled: true
  reset_admin_password: false
...
cameras:

In case you lose your password, you can force a reset from the file: set reset_admin_password to true, restart the container, and a new password will be generated. The next step is to configure the video streams. There is no graphical camera configuration, so we need to set things up manually.

Configure the Cameras

I chose Reolink cameras, as they seemed to be the best supported according to the forums, provided you stick to the 5MP models. They also offer decent quality at a reasonable cost.

Once powered on, the camera is online and will need to be initialized, for example, with the Reolink mobile app. It’s quite straightforward. For the setup with Frigate, I opted for a static IP configuration, which requires opening the HTTPS port on the camera. To do this, in the mobile app, you need to go to the Network Information menu >> Advanced >> HTTPS to open the port.

Then you can connect to the camera’s web UI and configure the static IP.

Next, you need to enable the RTSP and ONVIF streams, which are located in the Settings menu, then Network, then Advanced, and finally Server Settings.

Now you need to configure Frigate to fetch the camera streams. Frigate relies on the go2rtc bridge: there is a go2rtc entry in the configuration file where you pass the login and password as well as the stream path, and then a camera entry that links the restream to Frigate. Each camera exposes two streams, an HD stream and a low-definition one. In the configuration below, the low-definition stream is sent to detection, while the HD stream is recorded.

So, in my configuration file (below), I have declared 4 go2rtc streams from 2 cameras, pointing to the static IPs I assigned, with the login/password chosen for the cameras. These are, of course, not my real passwords; and be careful, as I ran into string-parsing issues when the passwords contained special characters.
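If a camera password does contain special characters, the usual workaround is to percent-encode them in the RTSP URL (standard URL escaping, nothing Frigate-specific). A sketch with a hypothetical password p@ss#word:

go2rtc:
  streams:
    cam1_RLC5810:
      # "@" becomes %40 and "#" becomes %23
      - rtsp://admin:p%40ss%23word@192.168.88.12:554/h264Preview_01_main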

Next in the file comes the declaration of the 2 cameras with their 2 streams, which are used differently: the HD stream is recorded, and the LD stream is used for detection (I haven’t added a TPU). For now, detection is inactive and nothing is really recorded yet, but with this setup I can already view the streams, which is a step forward.

mqtt:
  enabled: false

auth:
  enabled: true
  reset_admin_password: false

detectors:
  cpu1:
    type: cpu
    num_threads: 2

go2rtc:
  streams:
    cam1_RLC5810:
      - rtsp://admin:password@192.168.88.12:554/h264Preview_01_main
    cam1_RLC5810_sub:
      - rtsp://admin:password@192.168.88.12:554/h264Preview_01_sub
    cam2_RLC5820:
      - rtsp://admin:password@192.168.88.13:554/h264Preview_01_main
    cam2_RLC5820_sub:
      - rtsp://admin:password@192.168.88.13:554/h264Preview_01_sub

cameras:
  cam1_RLC5810:
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/cam1_RLC5810?video=copy&audio=aac
          input_args: preset-rtsp-restream
          roles:
            - record
        - path: rtsp://127.0.0.1:8554/cam1_RLC5810_sub?video=copy
          input_args: preset-rtsp-restream
          roles:
            - detect
    detect:
      enabled: false # <---- disable detection until ...
      width: 1280
      height: 720

  cam2_RLC5820:
    enabled: true
    ffmpeg:
      inputs:
        - path: rtsp://127.0.0.1:8554/cam2_RLC5820?video=copy&audio=aac
          input_args: preset-rtsp-restream
          roles:
            - record
        - path: rtsp://127.0.0.1:8554/cam2_RLC5820_sub?video=copy
          input_args: preset-rtsp-restream
          roles:
            - detect
    detect:
      enabled: false # <---- disable detection until ...
      width: 1280
      height: 720


version: 0.15-1

To enable video recording, you can set per-camera retention rules for the continuous stream, for alerts, and for detections. One of the strengths, I think, is the ability to configure these independently. In theory, a cleanup process takes care of freeing space if the disk starts to get full. Note that to view a camera’s history, you select the camera and click History, rather than going through the Review icon as one might intuitively think.

cameras:
  cam1_RLC5810:
    enabled: true
    ffmpeg:
      ...
    detect:
      ...
    record:
      enabled: true
      retain:
        days: 5
        mode: all        # here we record everything 
      alerts:
        retain:
          days: 60
          mode: motion   # for alerts it's only motions
      detections:
        retain:
          days: 60
          mode: motion

Now we can look at triggering specific recordings on events, starting with motion detection and then object detection. Motion detection actually reacts to changes in the image, and its sensitivity can be tuned in several ways; I refer you to the documentation for the details. Tuning is not always straightforward, because many lighting-related factors need to be taken into account to avoid triggering an alert every time a cloud passes by. This is the advantage of AI-based object detection models: they analyze the content of the image rather than simple pixel changes.

detectors:
...
motion:
  threshold: 30             # 0 to 255 - lower is more sensitive
  contour_area: 30          # 0 to 50 - lower is more sensitive
  lightning_threshold: 0.7  # 0 to 1 - lower is more sensitive

go2rtc:
  streams:

...

cameras:
  cam1_RLC5810:
    ...
    detect:
      enabled: true
      width: 640
      height: 480

    record:
      enabled: true
      ...

Finally, it is possible to activate object detection, such as person or car recognition, to trigger alerts.

motion:
...

objects:
  track:
    - person
    - car

review:
  alerts:
    labels:
      - car
      - person
...

Under these conditions, on my RPi 5 without a TPU, I see a CPU load of around 15-20%, which seems quite acceptable.
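That figure comes from watching the container itself; Frigate’s System metrics page in the UI gives a similar per-process view:

root$ docker stats frigate --no-stream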

Opinion

After this setup, which I must admit was somewhat tedious (the documentation is dense but not very user-friendly), I believe this solution is promising. What it lacks is a robust configuration interface, with presets for common market cameras and for alert-trigger settings; that is something you will find in commercial solutions, and it really simplifies the process. If I count the time spent on the entire setup, I could actually have purchased a UniFi solution with a better ROI, but of course there is the pleasure of discovering other solutions that will, hopefully, evolve over time toward a more end-user-friendly approach. Now the real test will be to put this into production in the field. I still need to find the best way to send alerts (via email, for example) and to make them accessible from outside the local network.

In the end, the simplest way for the end user to access alerts and live streams from a mobile app is to rely on the Reolink app. In my case, Frigate NVR will be used for long-term storage and post-event review. Depending on how the software evolves, it could become the main solution in the future.
