Operator Manual

Day-to-day operating procedures for the Varta anti-drone detection platform. This manual covers system startup, alert handling, deployment, and operational checklists applicable across all product tiers.

1. System Startup

Power-On Sequence

  1. Apply power to the sensor unit. Wait for the status LED to show solid amber (boot in progress).
  2. After 30-60 seconds the LED transitions to solid green, indicating the OS has loaded and Varta services are initializing.
  3. On units with an OLED display, the boot summary screen appears once all services report healthy.

Service Verification

Confirm all core services are running before beginning operations:

sudo systemctl status varta-sensor
sudo systemctl status varta-mqtt
sudo systemctl status varta-watchdog

All three services must report active (running). If any service shows failed or inactive, consult the Troubleshooting Guide.

Sensor Health Check

Run the built-in self-test to verify hardware connectivity and baseline calibration:

varta-cli self-test --verbose
  • SDR connection: Confirms USB device enumeration and sample rate negotiation.
  • Noise floor baseline: Measures ambient RF level across monitored bands. Flags if noise floor exceeds expected thresholds.
  • GPS lock (if equipped): Reports fix quality, satellite count, and positional accuracy.
  • MQTT broker reachability: Tests TLS handshake and authentication against the configured broker endpoint.

2. Operational Modes

Standalone Mode

The sensor operates independently with no network dependency. Detections are logged locally and alerts are delivered via GPIO (buzzer, LED) and OLED display.

varta-cli start --mode standalone
  • Ideal for rapid deployment or degraded-comms environments.
  • Detection logs stored at /var/log/varta/detections.jsonl.
  • Logs can be exported later via USB or SSH retrieval.
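
Exported standalone logs can be triaged offline. A minimal sketch that filters the JSONL detection log by threat level; the field name "threat_level" is an assumption here, so check it against your unit's actual log schema:

```python
import json

def load_detections(path="/var/log/varta/detections.jsonl", min_level=3):
    """Read a JSONL detection log and keep entries at or above a
    threat level. The "threat_level" field name is assumed, not
    taken from the shipped schema."""
    hits = []
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rec = json.loads(line)
            if rec.get("threat_level", 0) >= min_level:
                hits.append(rec)
    return hits
```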

Networked / MQTT Mode

Detections are published to the MQTT broker in real time. The Varta Max command node fuses data from all connected sensors into a unified tactical picture.

varta-cli start --mode networked \
  --broker mqtt.varta.local:8883 \
  --sensor-id alpha-01
  • Requires valid TLS certificates and fleet pairing token.
  • Heartbeat interval: 10 seconds (configurable).
  • Automatic reconnect with exponential backoff on broker loss.
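
The reconnect behavior can be pictured as a doubling delay schedule with jitter. This is an illustrative sketch only; the base delay, cap, and jitter fraction are assumptions, not the shipped defaults:

```python
import random

def backoff_delays(base=1.0, cap=60.0, attempts=6):
    """Illustrative exponential-backoff schedule: the delay doubles
    per attempt, is capped, and gets up to 10% jitter so a fleet of
    sensors does not reconnect in lockstep. All parameters are
    assumed values for the sketch."""
    delays = []
    for n in range(attempts):
        delay = min(cap, base * (2 ** n))
        delays.append(delay + random.uniform(0, delay * 0.1))
    return delays
```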

Sensor Node Mode

The unit acts as a subordinate node in a multi-sensor array managed by Varta Max. Node placement, scan schedules, and detection parameters are pushed from the command node.

varta-cli start --mode sensor-node \
  --command-host max.varta.local
  • Configuration is centrally managed; local overrides are disabled.
  • Supports DF/TDOA triangulation when three or more nodes are active.
  • Status and health telemetry reported on the fleet/health MQTT topic.

3. Alert Handling

Threat Levels

Varta uses a five-level threat classification. Each level maps to a specific response posture:

  • Level 1 (Informational): Transient RF anomaly, low confidence; likely benign interference. Typical response: log only, no operator action required.
  • Level 2 (Low): Repeating signal pattern consistent with known drone protocols, below the confirmation threshold. Typical response: monitor; flag for review at shift end.
  • Level 3 (Medium): Confirmed drone protocol signature with moderate confidence (60-80%); single-sensor corroboration. Typical response: active monitoring; verify with a secondary sensor or visual check.
  • Level 4 (High): High-confidence detection (>80%) corroborated by multiple sensors or the classification model. Typical response: alert supervisor, initiate response protocol, log operator assessment.
  • Level 5 (Critical): Multi-sensor confirmed, high confidence, drone within the protected perimeter or exhibiting a hostile behavior pattern. Typical response: immediate escalation; activate countermeasures per ROE; notify chain of command.
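
For dashboard tooling or shift scripts, the level-to-posture mapping above can be encoded as a simple lookup. The response strings below paraphrase the table and are illustrative, not a policy API:

```python
# Paraphrased from the five-level classification table; illustrative only.
RESPONSE_POSTURE = {
    1: "Log only; no operator action required.",
    2: "Monitor; flag for review at shift end.",
    3: "Active monitoring; verify with secondary sensor or visual check.",
    4: "Alert supervisor; initiate response protocol.",
    5: "Immediate escalation; activate countermeasures per ROE.",
}

def posture_for(level):
    """Return the typical response posture for a threat level (1-5)."""
    if level not in RESPONSE_POSTURE:
        raise ValueError(f"unknown threat level: {level}")
    return RESPONSE_POSTURE[level]
```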

Alert Response Procedures

  1. Acknowledge the alert within 30 seconds via the dashboard or OLED confirm button.
  2. Assess the detection: review confidence score, signal provenance, frequency band, and sensor source.
  3. Verify using an independent method (second sensor, visual/optical, RemoteID lookup) when threat level is 3 or above.
  4. Classify the alert as confirmed threat, false positive, or inconclusive. Log the classification with operator notes.
  5. Escalate if the threat level warrants it per the escalation matrix below.

Escalation Matrix

  • Level 1-2: Operator handles autonomously. Review at shift debrief.
  • Level 3: Notify shift supervisor within 5 minutes. Supervisor decides escalation.
  • Level 4: Immediate supervisor notification. Activate site response team standby.
  • Level 5: Immediate notification to site commander and response team. Countermeasure authorization requested per standing rules of engagement.

4. Dashboard Usage

Tactical Map View

The primary operational display. Shows sensor positions, detection bearings, and threat tracks overlaid on the site map.

  • Blue icons: Active sensors reporting healthy.
  • Amber icons: Sensors in degraded state (partial failure or stale heartbeat).
  • Red arcs: Active detection bearings with threat level color coding.
  • Track lines: Fused track history when DF/TDOA triangulation is active.

Use the scroll wheel to zoom. Click any sensor or track to open its detail panel.

Detection Feed

Chronological stream of all detection events across the sensor fleet. Each entry shows:

  • Timestamp (UTC) and sensor ID
  • Detected frequency band and protocol classification
  • Confidence score (0-100) and threat level (1-5)
  • Provenance chain: which detection stages contributed to the classification

Use the filter bar to narrow by sensor, threat level, time range, or classification status.

Sensor Status Panel

Real-time health overview of every sensor in the fleet.

  • Uptime: Time since last restart.
  • Last heartbeat: Seconds since last MQTT heartbeat. Stale threshold: 30s.
  • CPU / Memory / Temp: Resource utilization gauges. Amber at 80%, red at 95%.
  • SDR status: Lock state, gain setting, noise floor reading.
  • GPS quality: Fix type, satellite count, HDOP value.
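
The stale-heartbeat rule the panel applies can be expressed in a few lines. A sketch assuming epoch-seconds timestamps; the 30-second threshold matches the panel above:

```python
import time

STALE_AFTER_S = 30  # matches the panel's stale threshold

def is_stale(last_heartbeat_epoch, now=None):
    """True when a sensor's last MQTT heartbeat is older than the
    30-second stale threshold shown on the Sensor Status Panel."""
    now = time.time() if now is None else now
    return (now - last_heartbeat_epoch) > STALE_AFTER_S
```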

5. Detection Verification

Confirming Detections

Every detection above threat level 2 should be verified before escalation. Verification methods in order of preference:

  1. Multi-sensor corroboration: Check whether two or more sensors detected the same event within a 5-second window. Cross-sensor hits significantly increase confidence.
  2. DF/TDOA fix: If three or more sensors have bearings, the fused track provides a geolocation. Verify the position makes physical sense (altitude, speed, trajectory).
  3. Visual or optical confirmation: If the site has cameras or line-of-sight, attempt visual acquisition on the reported bearing.
  4. RemoteID lookup: Query the RemoteID feed for matching UAS serial or session ID.
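
The multi-sensor corroboration check (step 1) amounts to looking for two distinct sensors inside one 5-second window. A sketch with assumed field names ("ts" in epoch seconds, "sensor"):

```python
def corroborated(detections, window_s=5.0):
    """Return True when two or more distinct sensors report within
    the same 5-second window. Detections are dicts with assumed
    "ts" (epoch seconds) and "sensor" fields."""
    events = sorted(detections, key=lambda d: d["ts"])
    for i, first in enumerate(events):
        sensors = {first["sensor"]}
        for other in events[i + 1:]:
            if other["ts"] - first["ts"] > window_s:
                break  # sorted, so no later event fits this window
            sensors.add(other["sensor"])
        if len(sensors) >= 2:
            return True
    return False
```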

Interpreting Confidence Scores

  • 90-100: High confidence. Strong protocol match with clean I/Q classification. Treat as confirmed unless contradicted by visual evidence.
  • 70-89: Moderate confidence. Good signal match but some ambiguity (partial protocol decode, moderate SNR). Verification recommended.
  • 50-69: Low-moderate confidence. Pattern is suggestive but not conclusive. May be interference or non-drone emitter. Require independent verification before escalation.
  • Below 50: Low confidence. Logged for analysis but not actionable without corroboration. Review during shift debrief.
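
The bands above map directly onto a small helper, useful when scripting feed filters. A sketch of the guidance, not a product API:

```python
def confidence_band(score):
    """Map a 0-100 confidence score onto the guidance bands:
    90-100 high, 70-89 moderate, 50-69 low-moderate, below 50 low."""
    if not 0 <= score <= 100:
        raise ValueError("score must be 0-100")
    if score >= 90:
        return "high"
    if score >= 70:
        return "moderate"
    if score >= 50:
        return "low-moderate"
    return "low"
```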

Reviewing Provenance

Each detection carries a provenance chain showing which processing stages contributed to the final classification:

{
  "provenance": [
    {"stage": "sweep",    "result": "hit",  "band": "2.4GHz", "power_dbm": -42},
    {"stage": "iq_capture","result": "pass", "snr_db": 18.3,  "duration_ms": 200},
    {"stage": "ml_classify","result": "DJI_Mavic3", "confidence": 0.87},
    {"stage": "protocol_decode","result": "partial", "fields_decoded": 6}
  ]
}

A detection that passes all four stages is significantly more reliable than one that skipped I/Q capture or had a partial protocol decode. Use the provenance to gauge how much weight to give the detection.
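
A quick way to apply that weighting in a review script is to check the chain for the four stages shown in the example. The stage names below come from that example; the real pipeline may emit additional or differently named stages:

```python
# Stage names taken from the provenance example above; illustrative.
EXPECTED_STAGES = ["sweep", "iq_capture", "ml_classify", "protocol_decode"]

def provenance_complete(detection):
    """True when a detection's provenance chain includes all four
    stages from the example. A review-script sketch, not part of
    the shipped tooling."""
    seen = [entry["stage"] for entry in detection.get("provenance", [])]
    return all(stage in seen for stage in EXPECTED_STAGES)
```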

6. Deployment Procedures

Site Survey

Before deploying sensors, conduct a site survey to assess RF environment and physical layout:

  1. RF background scan: Run a 15-minute spectrum sweep at each candidate sensor position using varta-cli survey --duration 900. Record the noise floor per band.
  2. Line of sight: Map obstructions (buildings, terrain, vegetation) that could shadow RF detection or block DF bearings.
  3. Infrastructure check: Confirm power availability, network connectivity (wired or wireless), and physical mounting options at each position.
  4. Threat axis assessment: Identify the most likely approach corridors for UAS threats and orient sensor coverage accordingly.

Sensor Placement

  • Spacing: For DF/TDOA triangulation, sensors should be spaced 200-500 meters apart with overlapping coverage.
  • Height: Mount sensors at 3-10 meters elevation with clear horizon. Higher is generally better for detection range but may increase wind exposure.
  • Orientation: Ensure antenna boresight covers the primary threat axis. For omnidirectional antennas, placement symmetry is more important than orientation.
  • Avoid co-location with high-power emitters (cellular towers, radar installations, industrial RF sources) which will degrade sensitivity.
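
When planning a small DF/TDOA array, the 200-500 m spacing guidance can be checked against candidate positions on paper. A planning-aid sketch assuming flat (x, y) site coordinates in meters and a pairwise check, which suits small arrays only:

```python
import itertools
import math

def spacing_ok(positions, lo=200.0, hi=500.0):
    """Verify every sensor pair in a planned small array sits within
    the 200-500 m spacing guidance. Positions are assumed (x, y)
    site coordinates in meters; a planning aid, not a survey tool."""
    for a, b in itertools.combinations(positions, 2):
        if not lo <= math.dist(a, b) <= hi:
            return False
    return True
```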

Baseline Calibration

After physical installation, run the baseline calibration procedure:

varta-cli calibrate --baseline --duration 1800
# 30-minute observation window, no known drones in area
# Captures ambient RF profile for false-positive suppression
  • Calibration must be performed during normal site activity (not during quiet hours) to capture representative interference.
  • Repeat calibration if the RF environment changes significantly (new equipment installed, seasonal foliage changes, construction activity).
  • Calibration data is stored at /var/lib/varta/baseline/ and versioned by timestamp.

7. Daily Operations Checklist

Pre-Shift Tasks

  1. Review the previous shift's detection log and any open incidents.
  2. Verify all sensors show green status on the Sensor Status Panel.
  3. Confirm MQTT broker connectivity from the dashboard health indicator.
  4. Check sensor CPU temperatures are within normal range (<75 °C).
  5. Run a quick self-test on any sensor that was serviced or restarted:
    varta-cli self-test --sensor alpha-01
  6. Verify GPS lock on all sensors with location-dependent features.
  7. Confirm shift handover notes from outgoing operator are acknowledged.

Operational Tasks

  1. Monitor the detection feed continuously. Acknowledge alerts within 30 seconds.
  2. Log all operator assessments for threat level 3+ detections.
  3. Perform a fleet health check every 60 minutes. Note any amber or red status sensors.
  4. If a sensor goes stale (no heartbeat >60s), attempt remote restart:
    varta-cli restart --sensor alpha-01 --remote
  5. Record any environmental changes that could affect detection (weather shifts, new RF sources, site activity changes).

Post-Shift Tasks

  1. Export the shift detection log:
    varta-cli export --shift --format json \
      --output /var/log/varta/shift_$(date +%Y%m%d_%H%M).json
  2. Review false positive rate for the shift. Flag anomalies for calibration review.
  3. Document any sensor issues, workarounds, or pending maintenance in the shift log.
  4. Brief the incoming operator on active incidents, sensor status, and any standing instructions.
  5. Verify all detection classifications are complete (no unresolved alerts).
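
The false positive rate in step 2 can be computed from the shift's alert classifications. The labels below are the three outcomes from the alert-response procedure; excluding inconclusive entries from the denominator is one reasonable choice, not doctrine:

```python
def false_positive_rate(classified):
    """Shift FP rate from a list of classification labels:
    "confirmed", "false_positive", or "inconclusive". Inconclusive
    entries are excluded from the denominator (a choice made for
    this sketch)."""
    resolved = [c for c in classified if c != "inconclusive"]
    if not resolved:
        return 0.0
    return resolved.count("false_positive") / len(resolved)
```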

8. Emergency Procedures

Sensor Failure

If a sensor stops reporting or enters a fault state:

  1. Check the Sensor Status Panel for diagnostic information (last heartbeat, error code).
  2. Attempt a remote restart via the command node:
    varta-cli restart --sensor <sensor-id> --remote
  3. If remote restart fails, dispatch field technician for physical inspection. Common causes: SD card corruption, power supply fault, SDR USB disconnect.
  4. Assess coverage gap. If the failed sensor covers a critical threat axis, consider repositioning an adjacent sensor or deploying a spare unit.
  5. Log the failure event, cause (if determined), and resolution in the maintenance record.

Communications Loss

If the MQTT broker becomes unreachable or network connectivity is lost:

  1. Sensors automatically fall back to standalone mode. Local detection and alerting continue via GPIO and OLED.
  2. Check network infrastructure: switches, routers, access points, cable integrity.
  3. Verify broker status on the command node:
    sudo systemctl status mosquitto
    mosquitto_sub -h localhost -t '$SYS/broker/uptime' -C 1
  4. If broker is down, restart:
    sudo systemctl restart mosquitto
    # Wait 15 seconds, then verify sensors reconnect
    varta-cli fleet-status
  5. During comms loss, operators at remote sensor locations must follow standalone alert procedures and record detections manually.

High-Threat Response

When a threat level 5 detection is confirmed:

  1. Immediate actions (0-60 seconds):
    • Acknowledge the alert on the dashboard.
    • Verbally notify the site commander or duty officer.
    • Activate the site alarm if authorized per standing orders.
  2. Assessment (1-3 minutes):
    • Confirm multi-sensor corroboration. Check DF/TDOA track for position, heading, speed.
    • Attempt visual acquisition. Task cameras if available.
    • Query RemoteID for UAS identification.
  3. Response (per ROE):
    • Request countermeasure authorization through the chain of command.
    • Maintain continuous tracking and update the tactical picture.
    • Log all actions, decisions, and communications with timestamps.
  4. Post-incident:
    • Preserve all detection data, logs, and recordings.
    • Complete incident report within 4 hours.
    • Conduct after-action review within 24 hours.