Full Jitsi Meet recording pipeline: from Jibri to Notion and HLS via Caddy

Published on 2025-12-25

Jitsi Meet works very well out of the box for video conferencing. The Jitsi Meet + Jibri bundle lets you record meetings, and many installations stop there.

But as soon as Jitsi is used not just occasionally but as part of a regular workflow, questions come up quickly:

  • Where should recordings be stored centrally?
  • How can links be automatically published for the team?
  • How can we get rid of heavy MP4 files and move to streaming playback?
  • How can recordings be served over HTTPS without exposing the directory structure?
  • How can all of this be done automatically, without manual administrator involvement?

Below is a full production pipeline with code: from Jibri finalizing a recording to publishing in Notion and asynchronous MP4→HLS transcoding with delivery via Caddy.


Baseline architecture

Components

  • Jitsi Meet — conferences.
  • Jibri — recording (audio/video capture and saving to disk).
  • Recordings FS — filesystem with recordings.
  • Notion DB — catalog of meetings and links.
  • ffmpeg worker — transcoding to HLS.
  • Caddy — HTTPS static delivery (MP4/HLS), BasicAuth, no listing.

Basic file structure

recordings/
└── <room-id>/
    └── meeting.mp4

Key idea of the pipeline

Jibri only produces MP4 recordings. Everything else is external automation: finalize → publish → async HLS → update Notion → cleanup.

This simplifies maintenance and provides idempotency: any step can be safely repeated.


Notion as a meeting catalog

Database schema (minimum)

Create a Notion database with the following properties:

  • Name (title)
  • Date (date)
  • Recording URL (url)
  • (optional) Status (select: recorded/processing/published/error)
  • (optional) Room (rich text)
  • (optional) Provider (select: mp4/hls)

We will write records there via the Notion API.
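
Before automating anything, it is worth checking that the integration token can actually see the database (remember to share the database with the integration in Notion). A quick sanity check, using the NOTION_TOKEN and NOTION_DATABASE_ID values we define in the env file below:

source /etc/jitsi/recording-pipeline.env

curl -sS "https://api.notion.com/v1/databases/${NOTION_DATABASE_ID}" \
  -H "Authorization: Bearer ${NOTION_TOKEN}" \
  -H "Notion-Version: 2022-06-28" | jq '.object'
# expect: "database" (an "error" object means the token or sharing is wrong)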


Environment variables and secrets

To avoid hardcoding anything in scripts, we use an env file.

/etc/jitsi/recording-pipeline.env:

# Notion
NOTION_TOKEN="secret_xxx"                 # internal integration token
NOTION_DATABASE_ID="xxxxxxxxxxxxxxxxxxxx" # database id

# Public base URL where Caddy serves recordings
PUBLIC_BASE_URL="https://rec.example.com"

# Where recordings are stored on disk
RECORDINGS_ROOT="/recordings"

# Optional: set a static tag/prefix
NOTION_NAME_PREFIX="[Jitsi]"

# Logging
LOG_DIR="/var/log/jitsi-recording-pipeline"

# BasicAuth is handled by Caddy. If you still want to embed user:pass in URL (not recommended),
# you can do it by setting:
# PUBLIC_URL_AUTH="user:pass@"
PUBLIC_URL_AUTH=""

# Concurrency / locking
LOCK_DIR="/var/lock/jitsi-recording-pipeline"

Create directories:

sudo mkdir -p /var/log/jitsi-recording-pipeline /var/lock/jitsi-recording-pipeline
sudo chmod 750 /var/log/jitsi-recording-pipeline /var/lock/jitsi-recording-pipeline
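
The env file contains the Notion token, so restrict its permissions as well (a suggested hardening step; if Jibri runs the finalize script as a non-root user such as jibri, adjust ownership/group so that user can still read the file):

sudo chown root:root /etc/jitsi/recording-pipeline.env
sudo chmod 600 /etc/jitsi/recording-pipeline.env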

1) finalize.sh: publish MP4 to Notion right after recording

What finalize does

  • Finds the MP4 in the recording directory.
  • Builds a public link to the MP4 (via Caddy).
  • Creates a page/row in the Notion DB.
  • Writes the link (MP4), date, room-id there.
  • (optional) sets status to recorded.

finalize.sh code

File: /usr/local/bin/jitsi-finalize.sh

#!/usr/bin/env bash
set -euo pipefail

# Jibri usually passes the path to the directory with the recording.
# We make the script as tolerant as possible: accept either a directory or a file.
INPUT_PATH="${1:-}"

if [[ -z "${INPUT_PATH}" ]]; then
  echo "Usage: $0 <recording_dir_or_file>" >&2
  exit 1
fi

# Load env
ENV_FILE="/etc/jitsi/recording-pipeline.env"
if [[ -f "$ENV_FILE" ]]; then
  # shellcheck disable=SC1090
  source "$ENV_FILE"
else
  echo "Env file not found: $ENV_FILE" >&2
  exit 1
fi

mkdir -p "$LOG_DIR"
LOG_FILE="$LOG_DIR/finalize.log"

log() { printf '%s %s\n' "$(date -Is)" "$*" | tee -a "$LOG_FILE" >&2; }

# Resolve directory
REC_DIR="$INPUT_PATH"
if [[ -f "$INPUT_PATH" ]]; then
  REC_DIR="$(dirname "$INPUT_PATH")"
fi

if [[ ! -d "$REC_DIR" ]]; then
  log "ERROR: recording dir not found: $REC_DIR"
  exit 1
fi

# Determine room id from path (last component)
ROOM_ID="$(basename "$REC_DIR")"

# Find mp4 (take the largest as the "main" one, in case there are several)
MP4_FILE="$(find "$REC_DIR" -maxdepth 1 -type f -name '*.mp4' -printf '%s\t%p\n' 2>/dev/null | sort -nr | head -n1 | cut -f2- || true)"

if [[ -z "$MP4_FILE" ]]; then
  log "ERROR: mp4 not found in $REC_DIR"
  exit 1
fi

MP4_BASENAME="$(basename "$MP4_FILE")"

# Build public URL
# If PUBLIC_URL_AUTH is empty — just https://host/...
PUBLIC_URL="${PUBLIC_BASE_URL}/${ROOM_ID}/${MP4_BASENAME}"
if [[ -n "${PUBLIC_URL_AUTH}" ]]; then
  # insert user:pass@ after https://
  PUBLIC_URL="$(echo "$PUBLIC_URL" | sed -E "s#^https://#https://${PUBLIC_URL_AUTH}#")"
fi

# Meeting title: can be improved if you extract the name from Jitsi metadata/JSON
MEETING_TITLE="${NOTION_NAME_PREFIX} ${ROOM_ID}"

# Date: use mp4 mtime as meeting date (a practical default)
MEETING_DATE="$(date -u -r "$MP4_FILE" +"%Y-%m-%dT%H:%M:%SZ")"

log "Finalize room=$ROOM_ID file=$MP4_BASENAME url=$PUBLIC_URL date=$MEETING_DATE"

# Create page in Notion DB
# Requirements: curl + jq
for bin in curl jq; do
  if ! command -v "$bin" >/dev/null 2>&1; then
    log "ERROR: $bin not installed"
    exit 1
  fi
done

PAYLOAD="$(jq -n \
  --arg db "$NOTION_DATABASE_ID" \
  --arg title "$MEETING_TITLE" \
  --arg date "$MEETING_DATE" \
  --arg url "$PUBLIC_URL" \
  --arg room "$ROOM_ID" \
  '{
    "parent": { "database_id": $db },
    "properties": {
      "Name": { "title": [ { "text": { "content": $title } } ] },
      "Date": { "date": { "start": $date } },
      "Recording URL": { "url": $url },
      "Room": { "rich_text": [ { "text": { "content": $room } } ] },
      "Status": { "select": { "name": "recorded" } },
      "Provider": { "select": { "name": "mp4" } }
    }
  }'
)"

RESP="$(curl -sS -X POST "https://api.notion.com/v1/pages" \
  -H "Authorization: Bearer ${NOTION_TOKEN}" \
  -H "Content-Type: application/json" \
  -H "Notion-Version: 2022-06-28" \
  --data "$PAYLOAD"
)"

PAGE_ID="$(echo "$RESP" | jq -r '.id // empty')"
if [[ -z "$PAGE_ID" ]]; then
  log "ERROR: Notion create page failed: $(echo "$RESP" | jq -c '.')"
  exit 1
fi

# Store page id near the recording to allow later update without search
echo "$PAGE_ID" > "${REC_DIR}/.notion_id"
log "OK: notion page created id=$PAGE_ID stored at ${REC_DIR}/.notion_id"

Permissions:

sudo chmod +x /usr/local/bin/jitsi-finalize.sh
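
Before wiring the script into Jibri, you can dry-run it against an existing recording directory (the room id below is a placeholder) and check the log:

sudo /usr/local/bin/jitsi-finalize.sh "/recordings/<room-id>"
tail -n 20 /var/log/jitsi-recording-pipeline/finalize.log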

Connecting finalize to Jibri (general idea)

Depending on the Jitsi/Jibri package or distribution, integration points differ, but the idea is the same:

  • Jibri calls your script after finishing a recording and passes the directory path.

If you already have a Jitsi-provided finalize.sh, a typical pattern is:

  • keep the standard finalization (if needed),
  • add your hook.

Conceptual example:

# somewhere in jibri finalize pipeline
/usr/local/bin/jitsi-finalize.sh "/recordings/<room-id>"
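
On installations that use the HOCON-based /etc/jitsi/jibri/jibri.conf, the recording block typically looks like the fragment below. Key names can differ between Jibri versions, so treat this as a sketch and check the reference config shipped with your package:

# /etc/jitsi/jibri/jibri.conf (fragment)
jibri {
  recording {
    recordings-directory = "/recordings"
    finalize-script = "/usr/local/bin/jitsi-finalize.sh"
  }
}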

2) Serving files via Caddy: HTTPS, BasicAuth, no listing

Requirements

  • Direct links must work:

    • https://rec.example.com/<room-id>/meeting.mp4
    • https://rec.example.com/<room-id>/v0/master.m3u8
  • Directory listing must be forbidden:

    • https://rec.example.com/ must not show the tree.
  • Access is protected by BasicAuth.

Caddyfile (example)

/etc/caddy/Caddyfile:

rec.example.com {

  # Root with recordings (mounted/available as /recordings)
  root * /recordings

  encode zstd gzip

  # Important: do not show directory listing.
  # file_server only generates an index when `browse` is enabled, so we simply omit it.
  file_server

  # BasicAuth (caddy hash-password --algorithm bcrypt)
  basicauth /* {
    admin $2a$12$REPLACE_WITH_BCRYPT_HASH
  }

  # Correct MIME types for HLS playlists and segments
  @m3u8 path *.m3u8
  header @m3u8 Content-Type application/vnd.apple.mpegurl

  @ts path *.ts
  header @ts Content-Type video/mp2t

  # Secure headers (minimum)
  header {
    X-Content-Type-Options "nosniff"
    Referrer-Policy "no-referrer"
  }

  # Limit methods (optional, but nice)
  @notGet {
    not method GET HEAD
  }
  respond @notGet 405
}

Generate bcrypt hash:

caddy hash-password --algorithm bcrypt --plaintext 'S3curePassw0rd'
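
A quick way to verify that BasicAuth works and the listing stays hidden (assuming the admin user above and at least one existing recording):

# without credentials: expect 401
curl -I https://rec.example.com/
# with credentials, requesting the root: expect 404, not an index page
curl -I -u admin:'S3curePassw0rd' https://rec.example.com/
# direct file link: expect 200
curl -I -u admin:'S3curePassw0rd' "https://rec.example.com/<room-id>/meeting.mp4"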

3) HLS worker: ffmpeg → HLS → update Notion → delete MP4

Why a separate worker

Transcoding is CPU-heavy and may take a long time. Therefore:

  • finalize publishes MP4 immediately.
  • the worker periodically “catches up” and converts to HLS.
  • after successfully updating Notion, MP4 can be deleted.

3.1) Helper utility: updating a Notion page

We define a small notion_update function inside the worker script so the curl call to the Notion API is not duplicated; it is shown as part of the worker code below.


3.2) jitsi-hls-worker.sh code

File: /usr/local/bin/jitsi-hls-worker.sh

#!/usr/bin/env bash
set -euo pipefail

ENV_FILE="/etc/jitsi/recording-pipeline.env"
if [[ -f "$ENV_FILE" ]]; then
  # shellcheck disable=SC1090
  source "$ENV_FILE"
else
  echo "Env file not found: $ENV_FILE" >&2
  exit 1
fi

mkdir -p "$LOG_DIR" "$LOCK_DIR"
LOG_FILE="$LOG_DIR/hls-worker.log"

log() { printf '%s %s\n' "$(date -Is)" "$*" | tee -a "$LOG_FILE" >&2; }

need_bin() {
  command -v "$1" >/dev/null 2>&1 || { log "ERROR: missing binary: $1"; exit 1; }
}
need_bin find
need_bin jq
need_bin curl
need_bin ffmpeg
need_bin flock

# prevent parallel runs
LOCK_FILE="$LOCK_DIR/hls-worker.lock"
exec 9>"$LOCK_FILE"
if ! flock -n 9; then
  log "Another worker is running, exit."
  exit 0
fi

notion_update() {
  local page_id="$1"
  local url="$2"
  local provider="$3"
  local status="$4"

  local payload
  payload="$(jq -n \
    --arg url "$url" \
    --arg provider "$provider" \
    --arg status "$status" \
    '{
      "properties": {
        "Recording URL": { "url": $url },
        "Provider": { "select": { "name": $provider } },
        "Status": { "select": { "name": $status } }
      }
    }'
  )"

  local resp
  resp="$(curl -sS -X PATCH "https://api.notion.com/v1/pages/${page_id}" \
    -H "Authorization: Bearer ${NOTION_TOKEN}" \
    -H "Content-Type: application/json" \
    -H "Notion-Version: 2022-06-28" \
    --data "$payload"
  )"

  # Notion usually returns a page object. If there is an error — "object":"error"
  local obj
  obj="$(echo "$resp" | jq -r '.object // empty')"
  if [[ "$obj" == "error" ]]; then
    log "ERROR: Notion update failed: $(echo "$resp" | jq -c '.')"
    return 1
  fi

  return 0
}

make_hls() {
  local mp4="$1"
  local outdir="$2"

  mkdir -p "$outdir"

  # Single profile (example: 480p). Can be extended to ABR below.
  # Important: use TS segments (wider support).
  # make_hls is called from an `if` condition, where `set -e` is suppressed,
  # so propagate an ffmpeg failure explicitly with `|| return 1`.
  ffmpeg -hide_banner -y \
    -i "$mp4" \
    -vf "scale=-2:480" \
    -c:v h264 -profile:v main -preset veryfast -crf 23 \
    -c:a aac -b:a 128k -ac 2 \
    -f hls \
    -hls_time 6 \
    -hls_list_size 0 \
    -hls_segment_filename "${outdir}/seg_%06d.ts" \
    "${outdir}/stream.m3u8" || return 1

  # master.m3u8 as entry point (even if only one profile)
  cat > "${outdir}/master.m3u8" <<'EOF'
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:BANDWIDTH=1200000,RESOLUTION=854x480
stream.m3u8
EOF
}

# Iterate over directories recordings/<room-id>
# Room = first-level directory
while IFS= read -r -d '' rec_dir; do
  room_id="$(basename "$rec_dir")"

  mp4_file="$(find "$rec_dir" -maxdepth 1 -type f -name '*.mp4' -printf '%s\t%p\n' 2>/dev/null | sort -nr | head -n1 | cut -f2- || true)"
  notion_id_file="${rec_dir}/.notion_id"

  # If no notion_id — skip: finalize has not run yet or recording is "not ours"
  if [[ ! -f "$notion_id_file" ]]; then
    [[ -n "$mp4_file" ]] && log "Skip room=$room_id: missing .notion_id"
    continue
  fi

  page_id="$(tr -d '\n\r' < "$notion_id_file" || true)"
  if [[ -z "$page_id" ]]; then
    log "Skip room=$room_id: empty .notion_id"
    continue
  fi

  hls_dir="${rec_dir}/v0"
  master="${hls_dir}/master.m3u8"

  # If HLS already exists — just ensure the link and remove mp4 (if still present)
  if [[ -f "$master" ]]; then
    hls_url="${PUBLIC_BASE_URL}/${room_id}/v0/master.m3u8"
    if [[ -n "${PUBLIC_URL_AUTH}" ]]; then
      hls_url="$(echo "$hls_url" | sed -E "s#^https://#https://${PUBLIC_URL_AUTH}#")"
    fi

    if notion_update "$page_id" "$hls_url" "hls" "published"; then
      if [[ -n "$mp4_file" ]]; then
        log "HLS exists. Notion updated. Deleting mp4 room=$room_id file=$(basename "$mp4_file")"
        rm -f -- "$mp4_file"
      else
        log "HLS exists. Notion ok. No mp4 to delete room=$room_id"
      fi
    else
      log "HLS exists but Notion update failed. Keep mp4 room=$room_id"
    fi
    continue
  fi

  # If no mp4 — do nothing
  if [[ -z "$mp4_file" ]]; then
    continue
  fi

  log "Process room=$room_id mp4=$(basename "$mp4_file")"

  # Set status to processing (optional)
  notion_update "$page_id" "${PUBLIC_BASE_URL}/${room_id}/$(basename "$mp4_file")" "mp4" "processing" || true

  # Transcoding
  if make_hls "$mp4_file" "$hls_dir"; then
    hls_url="${PUBLIC_BASE_URL}/${room_id}/v0/master.m3u8"
    if [[ -n "${PUBLIC_URL_AUTH}" ]]; then
      hls_url="$(echo "$hls_url" | sed -E "s#^https://#https://${PUBLIC_URL_AUTH}#")"
    fi

    # Update Notion. Only after success — delete MP4.
    if notion_update "$page_id" "$hls_url" "hls" "published"; then
      log "Notion updated to HLS. Deleting mp4 room=$room_id"
      rm -f -- "$mp4_file"
    else
      log "ERROR: HLS created but Notion update failed. Keep mp4 room=$room_id"
      # Keep HLS: next run will just update Notion and delete mp4 later
    fi
  else
    log "ERROR: ffmpeg failed room=$room_id"
    notion_update "$page_id" "${PUBLIC_BASE_URL}/${room_id}/$(basename "$mp4_file")" "mp4" "error" || true
  fi

done < <(find "$RECORDINGS_ROOT" -mindepth 1 -maxdepth 1 -type d -print0)

log "Worker run complete."

Permissions:

sudo chmod +x /usr/local/bin/jitsi-hls-worker.sh
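
Before putting the worker on a schedule, run it once by hand and watch its log:

sudo /usr/local/bin/jitsi-hls-worker.sh
tail -n 50 /var/log/jitsi-recording-pipeline/hls-worker.log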

4) Cron: run every 20 minutes, but not from 10:00 to 18:00 Moscow time

The scenario: the server runs in UTC (for example, Thu Dec 25 03:41:02 UTC 2025), while the 10:00-18:00 exclusion window must be defined in Moscow time.

If your cron implementation supports CRON_TZ (cronie on RHEL-family systems does; Debian's default cron may not, so check yours), the schedule can be evaluated in the Moscow time zone regardless of the server's timezone.

/etc/cron.d/jitsi-hls-worker:

SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

# Schedule is calculated in Moscow time
CRON_TZ=Europe/Moscow

# Every 20 minutes, but only outside 10:00-17:59 (i.e., allowed 18:00-09:59)
*/20 0-9,18-23 * * * root /usr/local/bin/jitsi-hls-worker.sh >> /var/log/jitsi-recording-pipeline/hls-cron.log 2>&1

Option B: via systemd timer (if you want it “properly”)

If you prefer stricter operations, you can move the worker to a systemd timer and read its logs via journalctl. Since the rest of this article builds on cron, cron remains the main option here.
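
For reference, a rough sketch of the equivalent systemd units. The OnCalendar expression is an assumption on my side; verify it on your systemd version with systemd-analyze calendar before relying on it:

# /etc/systemd/system/jitsi-hls-worker.service
[Unit]
Description=Jitsi HLS worker

[Service]
Type=oneshot
ExecStart=/usr/local/bin/jitsi-hls-worker.sh

# /etc/systemd/system/jitsi-hls-worker.timer
[Unit]
Description=Run the HLS worker every 20 minutes outside 10:00-18:00 MSK

[Timer]
# check: systemd-analyze calendar '*-*-* 00..09,18..23:00/20:00 Europe/Moscow'
OnCalendar=*-*-* 00..09,18..23:00/20:00 Europe/Moscow

[Install]
WantedBy=timers.target

Enable it with systemctl enable --now jitsi-hls-worker.timer and follow its logs with journalctl -u jitsi-hls-worker.service.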


5) Cron logs and diagnostics

We already redirect output to files:

  • /var/log/jitsi-recording-pipeline/hls-cron.log
  • /var/log/jitsi-recording-pipeline/hls-worker.log
  • /var/log/jitsi-recording-pipeline/finalize.log

View “live” logs:

tail -f /var/log/jitsi-recording-pipeline/hls-cron.log

If you need to check whether cron triggers at all:

  • Debian/Ubuntu often log to:

    • /var/log/syslog (by CRON)
  • RHEL/CentOS — to:

    • /var/log/cron

Examples:

grep CRON /var/log/syslog | tail -n 50
# or
tail -n 50 /var/log/cron

6) Improvements on top of the base scheme (with code)

6.1) Protection from “partially created HLS”

If ffmpeg crashes in the middle, segments may remain in v0/. A good practice:

  • write to a temporary folder v0.tmp,
  • after success, atomically rename to v0.

Example (in make_hls):

tmp="${outdir}.tmp"
rm -rf "$tmp"
mkdir -p "$tmp"

# generate into tmp
# ...
# after success:
rm -rf "$outdir"
mv "$tmp" "$outdir"

6.2) Adaptive HLS (ABR) — multiple profiles

If you want full adaptive bitrate streaming (360p/480p/720p), ffmpeg can produce several renditions in a single run. A conceptual, simplified example:

ffmpeg -i input.mp4 \
  -filter_complex \
  "[0:v]split=3[v1][v2][v3]; \
   [v1]scale=-2:360[v1out]; \
   [v2]scale=-2:480[v2out]; \
   [v3]scale=-2:720[v3out]" \
  -map "[v1out]" -map 0:a -c:v:0 h264 -b:v:0 800k  -c:a:0 aac -b:a:0 96k \
  -map "[v2out]" -map 0:a -c:v:1 h264 -b:v:1 1200k -c:a:1 aac -b:a:1 128k \
  -map "[v3out]" -map 0:a -c:v:2 h264 -b:v:2 2500k -c:a:2 aac -b:a:2 128k \
  -f hls \
  -hls_time 6 \
  -hls_playlist_type vod \
  -hls_flags independent_segments \
  -master_pl_name master.m3u8 \
  -var_stream_map "v:0,a:0 v:1,a:1 v:2,a:2" \
  -hls_segment_filename "v%v/seg_%06d.ts" \
  "v%v/stream.m3u8"

Notion still stores a single link: to master.m3u8.

6.3) Separating “archive” and “publication”

After publishing HLS, you can move MP4 to S3/MinIO (cheap archive), and keep only HLS on disk. This is done by a separate job and does not break the current scheme.
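
A minimal sketch of such an archive job, assuming the AWS CLI is configured; the bucket name and the decision to delete the local MP4 afterwards are assumptions on my side (if you adopt this, drop the rm in the HLS worker and let this job own the MP4 lifecycle):

#!/usr/bin/env bash
# Hypothetical archive job: move MP4s that already have a published HLS rendition
# to S3-compatible storage (bucket name below is an example).
set -euo pipefail

RECORDINGS_ROOT="/recordings"
ARCHIVE_BUCKET="s3://jitsi-recordings-archive"

while IFS= read -r -d '' mp4; do
  rec_dir="$(dirname "$mp4")"
  room_id="$(basename "$rec_dir")"

  # archive only when the HLS rendition is already in place
  [[ -f "${rec_dir}/v0/master.m3u8" ]] || continue

  aws s3 cp "$mp4" "${ARCHIVE_BUCKET}/${room_id}/$(basename "$mp4")" \
    && rm -f -- "$mp4"
done < <(find "$RECORDINGS_ROOT" -mindepth 2 -maxdepth 2 -type f -name '*.mp4' -print0)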


Final architecture

[Jitsi Meet]
     |
     v
[Jibri] --(writes MP4)--> recordings/<room-id>/meeting.mp4
     |
     +--> jitsi-finalize.sh (on finalize)
            |
            +--> create Notion row (MP4 URL)
            +--> save .notion_id

cron (CRON_TZ=Europe/Moscow, */20 outside 10-18)
     |
     v
jitsi-hls-worker.sh
     |
     +--> ffmpeg MP4 → HLS (v0/master.m3u8 + segments)
     +--> update Notion URL to HLS
     +--> delete MP4 only after Notion update success

[Caddy]
     |
     +--> HTTPS + BasicAuth
     +--> static files from /recordings
     +--> no directory listing

Conclusion

The point of this pipeline is that you turn “recording a meeting on a server” into a productized, operationally stable process:

  • MP4 appears immediately and is accessible by link (minimal delay for the team),
  • heavy transcoding runs in the background,
  • Notion becomes a catalog and “dashboard”,
  • Caddy provides secure delivery without extra services,
  • idempotency ensures that Notion/ffmpeg failures do not lead to data loss.
