Result Quality Checks
This guide covers how to interpret detection confidence scores, verify result provenance, investigate false positives and false negatives, and generate quality reports across the Varta anti-drone platform.
1. Understanding Confidence Scores
Every detection result carries a composite confidence score between 0.0 and 1.0. The score is computed from multiple weighted factors extracted during signal analysis.
Base Factor Weights
| Factor | Weight | Description |
|---|---|---|
| Frequency match | 0.35 | Center frequency proximity to known drone control/video bands |
| Modulation type | 0.25 | Detected modulation matches expected scheme (OFDM, FHSS, DSSS) |
| Bandwidth | 0.20 | Occupied bandwidth consistency with known drone profiles |
| Hop rate | 0.15 | Frequency hopping cadence match for FHSS protocols |
| Symbol rate | 0.05 | Symbol rate alignment with known datalink specifications |
Bonus Modifiers
- +0.15 Cyclostationary detection — Positive cyclostationary feature extraction confirms periodic signal structure consistent with a digital datalink.
- +0.25 Multi-band correlation — Signal detected simultaneously on control and video downlink bands from the same emitter.
SNR Gate
When the measured signal-to-noise ratio is below 10 dB, the composite score is hard-capped at 0.35 regardless of factor weights or bonuses. This prevents low-SNR signals from producing high-confidence detections.
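The weighting, bonus, and SNR-gate logic above can be sketched as a small scoring function. This is a minimal illustration of the rules stated in this section, not the analyzer's actual implementation; the function name and input shapes are assumptions.

```python
# Illustrative sketch of the composite confidence score described above.
# Inputs are already-weighted factor contributions and bonus modifiers.
def composite_confidence(factors, bonuses, snr_db):
    """Sum weighted factors and bonuses, clamp to 1.0, apply the SNR gate."""
    score = sum(factors.values()) + sum(bonuses.values())
    score = min(score, 1.0)
    # SNR gate: below 10 dB the composite score is hard-capped at 0.35.
    if snr_db < 10.0:
        score = min(score, 0.35)
    return round(score, 2)

# Values taken from the example breakdown payload in this section:
factors = {"frequency_match": 0.35, "modulation_match": 0.22,
           "bandwidth_match": 0.10, "hop_rate_match": 0.0,
           "symbol_rate_match": 0.0}
bonuses = {"cyclostationary_bonus": 0.15, "multiband_bonus": 0.0}

print(composite_confidence(factors, bonuses, snr_db=18.4))  # 0.82
print(composite_confidence(factors, bonuses, snr_db=8.0))   # 0.35 (gate applied)
```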
If a detection's confidence looks capped, check the `snr_db` field in the detection payload to confirm.

# Example: confidence breakdown in a detection result
{
"confidence": 0.82,
"confidence_breakdown": {
"frequency_match": 0.35,
"modulation_match": 0.22,
"bandwidth_match": 0.10,
"hop_rate_match": 0.0,
"symbol_rate_match": 0.0,
"cyclostationary_bonus": 0.15,
"multiband_bonus": 0.0
},
"snr_db": 18.4,
"snr_gate_applied": false
}
2. Detection Provenance
Every detection includes provenance metadata that identifies exactly which software versions, signature databases, and configurations produced the result. This is critical for reproducibility and audit trails.
Provenance Fields
| Field | Description |
|---|---|
| `signature_db_version` | Version hash of the drone signature database used for matching. Changes when new signatures are added or existing ones are updated. |
| `band_config_version` | Version of the frequency band configuration that defines which bands are scanned and their parameters. |
| `analyzer_version` | Semantic version of the signal analyzer module (e.g., 2.4.1). Determines which detection algorithms are active. |
| `sensor_firmware_hash` | SHA-256 hash of the sensor firmware. Used to verify the sensor is running the expected firmware build. |
Verifying Provenance via API
Query the detection endpoint and inspect the provenance object in each result:
curl http://localhost:8080/api/detections/latest | python -m json.tool
Expected provenance block:
{
"provenance": {
"signature_db_version": "sig-v3.12.0-a4f8c2e",
"band_config_version": "bands-v2.1.0",
"analyzer_version": "2.4.1",
"sensor_firmware_hash": "e3b0c44298fc1c14..."
}
}
To verify a specific sensor's firmware against the expected hash:
curl http://localhost:8080/api/sensors/{sensor_id}/firmware-check
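To spot provenance drift programmatically, a small helper can compare provenance fields across the latest detections from each sensor. The helper name, input shape, and sample version strings below are illustrative, not part of the platform's API.

```python
# Hedged sketch: flag provenance fields whose values differ across sensors.
# Each entry is assumed to be a detection payload containing the provenance
# block shown above (e.g. fetched per sensor from the detections API).
def provenance_mismatches(detections, fields=("signature_db_version", "analyzer_version")):
    """Return the set of provenance fields that are not identical fleet-wide."""
    mismatched = set()
    for field in fields:
        values = {d["provenance"][field] for d in detections}
        if len(values) > 1:
            mismatched.add(field)
    return mismatched

# Sample data (version strings are made up for illustration):
dets = [
    {"sensor_id": "pro-01",
     "provenance": {"signature_db_version": "sig-v3.12.0-a4f8c2e", "analyzer_version": "2.4.1"}},
    {"sensor_id": "pro-02",
     "provenance": {"signature_db_version": "sig-v3.11.0-91bc0d1", "analyzer_version": "2.4.1"}},
]
print(provenance_mismatches(dets))  # {'signature_db_version'}
```

Any non-empty result means the fleet should be brought to a common release before cross-sensor comparisons.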
If `signature_db_version` or `analyzer_version` differ between sensors in a multi-sensor deployment, fusion results may be inconsistent. Ensure all sensors are updated to the same release before comparing cross-sensor detections.

3. False Positive Review
False positives (FP) are detections that trigger on non-drone RF sources. Understanding common FP sources and the benign rejection pipeline is essential for operational trust.
Common FP Sources
- WiFi access points — 2.4 GHz and 5.8 GHz WiFi can trigger frequency and bandwidth matches. Usually rejected by modulation mismatch but high-power APs near the sensor may pass initial screening.
- Bluetooth / BLE devices — FHSS pattern in the 2.4 GHz ISM band can produce hop rate matches. Typically rejected by bandwidth check (1 MHz vs expected 10+ MHz).
- Microwave ovens — Wideband noise bursts at 2.45 GHz. These produce short-duration detections that are filtered by the persistence gate (minimum 200 ms continuous signal).
- Surveillance cameras — Analog video transmitters on 5.8 GHz can mimic drone video downlinks. Distinguished by constant-on pattern vs burst transmission.
- Amateur radio — Narrowband signals near 433 MHz or 900 MHz may match some legacy drone control frequencies.
Benign Rejection Pipeline
Before a detection reaches the output, it passes through the benign rejection pipeline:
- Known-source filter — Signals matching registered benign source profiles (site-specific WiFi MACs, known fixed transmitters) are suppressed.
- Persistence gate — Signals shorter than 200 ms are dropped.
- Bandwidth sanity check — Signals with bandwidth outside the expected range for any known drone protocol are flagged.
- Modulation classifier — CNN-based modulation classification rejects non-drone modulation types.
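The four stages above can be sketched as a sequence of checks. This is a simplified illustration of the pipeline's order and reject reasons; the thresholds follow this guide, while the data shape, profile set, and function name are assumptions (the real modulation classifier is a CNN, stubbed here as a set lookup).

```python
# Illustrative sketch of the benign rejection pipeline: returns the first
# rejection reason that applies, or None if the detection passes all stages.
BENIGN_PROFILES = {("wifi-ap-roof", 2.437e9)}  # hypothetical registered sources

def reject_reason(det):
    # 1. Known-source filter: suppress registered benign emitters.
    if (det.get("source_profile"), det["freq_hz"]) in BENIGN_PROFILES:
        return "known_benign_source"
    # 2. Persistence gate: drop signals shorter than 200 ms.
    if det["duration_ms"] < 200:
        return "below_persistence_gate"
    # 3. Bandwidth sanity check (range is an assumed placeholder).
    if not (0.5e6 <= det["bandwidth_hz"] <= 40e6):
        return "bandwidth_out_of_range"
    # 4. Modulation check (stand-in for the CNN classifier).
    if det["modulation"] not in {"OFDM", "FHSS", "DSSS"}:
        return "non_drone_modulation"
    return None

# A short microwave-like burst fails the persistence gate:
print(reject_reason({"freq_hz": 2.45e9, "duration_ms": 80,
                     "bandwidth_hz": 2e6, "modulation": "OFDM"}))  # below_persistence_gate
```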
Reviewing Suppressed Detections
Suppressed (rejected) detections are logged for audit purposes. To review them:
# View suppressed detections from the last hour
curl "http://localhost:8080/api/detections?status=suppressed&since=1h"
# View rejection reason for a specific detection
curl http://localhost:8080/api/detections/{detection_id}/rejection-reason
To register a recurring benign source so it is suppressed automatically, use `POST /api/config/benign-sources` with the source's frequency, bandwidth, and location.

Filing a False Positive Report
When you confirm a detection is a false positive:
- Note the `detection_id` and `timestamp` from the detection payload.
- Record the suspected real source (WiFi AP model, Bluetooth device, etc.).
- Submit via the API or the Max UI:
curl -X POST http://localhost:8080/api/quality/fp-report \
-H "Content-Type: application/json" \
-d '{
"detection_id": "det-2026-03-01-00142",
"actual_source": "Ubiquiti UAP-AC-PRO WiFi AP",
"notes": "Mounted 3m from sensor antenna"
}'
4. False Negative Investigation
A false negative occurs when a drone is present but no detection is produced. Systematic investigation narrows the root cause.
Common Causes
- SNR too low — The drone signal is below the noise floor at the sensor location. Check `snr_db` in raw scan data. Below 6 dB, detection probability drops sharply.
- Frequency not in band config — The drone operates on a frequency that is not included in the active band configuration. Verify with `GET /api/config/bands`.
- Signature not in database — New or rare drone model without a matching signature entry. The CNN classifier may still catch it, but heuristic scoring will be low.
- DJI Mini series — DJI Mini 2/3/4 use OcuSync with very low power and narrow bandwidth that can evade cyclostationary detection. Known limitation documented in release notes.
- Sensor saturation — Strong nearby transmitters can saturate the ADC, masking weaker drone signals. Check `adc_clip_count` in sensor health.
Diagnostic Steps
- Pull the raw scan data for the time window of the missed detection:
curl "http://localhost:8080/api/scans?from=2026-03-01T10:00:00Z&to=2026-03-01T10:30:00Z" \
--output scan_dump.json
- Check if the expected frequency band was being scanned:
curl http://localhost:8080/api/config/bands | python -m json.tool
- Verify the signature database includes the drone model:
curl "http://localhost:8080/api/signatures?search=dji+mini+4"
- Check sensor health during the time window for ADC clipping or calibration warnings:
curl "http://localhost:8080/api/sensors/{sensor_id}/health?from=2026-03-01T10:00:00Z"
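For step 1, a short helper can scan the downloaded dump for energy near the drone's expected frequency. The record layout (`freq_hz`, `snr_db` per scan record) is an assumption about the scan export format; adjust to the actual schema.

```python
import json

# Hedged sketch: find scan records near a target frequency with usable SNR.
# The 6 dB default mirrors the detection-probability threshold noted above.
def peaks_near(scan_path, target_hz, tol_hz=5e6, min_snr_db=6.0):
    """Return records within tol_hz of target_hz whose SNR clears min_snr_db."""
    with open(scan_path) as f:
        records = json.load(f)
    return [r for r in records
            if abs(r["freq_hz"] - target_hz) <= tol_hz and r["snr_db"] >= min_snr_db]
```

If this returns nothing for a confirmed drone overflight, the cause is upstream of scoring (band config, signature coverage, or sensor saturation), which steps 2–4 isolate.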
5. CNN Classifier Quality
The signal-type CNN classifier provides automated modulation recognition across 14 signal classes. It operates on spectrogram images extracted from I/Q captures.
Performance Summary
- Holdout accuracy: 90.3% across 14 classes
- Noise false positive rate: reduced from 89.7% to 9.6% after dedicated noise class training
- Classes: DJI-OcuSync, DJI-Lightbridge, FrSky-ACCST, Futaba-FASST, Spektrum-DSMX, ExpressLRS, Crossfire, analog-video-5.8G, WiFi-drone, custom-FHSS, custom-DSSS, LTE-relay, noise, unknown
Checking Deployed Model Version
The CNN model can be deployed as ONNX (CPU inference) or TensorRT (GPU-accelerated on Jetson). To check which is active:
curl http://localhost:8080/api/classifiers/cnn/status
# Example response
{
"model_id": "varta-cnn-v3.2.1",
"runtime": "tensorrt",
"input_shape": [1, 1, 128, 128],
"classes": 14,
"inference_ms_p95": 4.2,
"onnx_path": "/opt/varta/models/cnn_v3.2.1.onnx",
"tensorrt_path": "/opt/varta/models/cnn_v3.2.1.engine",
"loaded_at": "2026-03-01T06:00:12Z"
}
ONNX vs TensorRT Paths
- ONNX (CPU): Used on Mini v2 (Pi 5) and any deployment without a supported GPU. Inference latency is typically 35–80 ms depending on hardware.
- TensorRT (GPU): Used on Pro (Jetson Orin) deployments. Inference latency is 3–5 ms. Requires TensorRT engine rebuild when the model version changes.
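A quick sanity check is to verify that the `inference_ms_p95` reported by the status endpoint is plausible for the active runtime, using the latency windows stated above. The function name and response handling are illustrative.

```python
# Expected p95 latency windows per runtime, as documented in this section.
EXPECTED_P95_MS = {"tensorrt": (3.0, 5.0), "onnx": (35.0, 80.0)}

def latency_plausible(status):
    """Check the reported p95 latency against the runtime's expected window."""
    lo, hi = EXPECTED_P95_MS[status["runtime"]]
    return lo <= status["inference_ms_p95"] <= hi

# Using the example status response above:
print(latency_plausible({"runtime": "tensorrt", "inference_ms_p95": 4.2}))  # True
```

A TensorRT deployment reporting ONNX-like latencies often indicates the engine failed to load and inference silently fell back to CPU.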
After a model update, rebuild the TensorRT engine by running `varta-tools rebuild-engine --model cnn_v3.2.1.onnx` on each Jetson sensor. ONNX deployments require no additional step.

6. Acoustic Detection Quality
Acoustic detection supplements RF detection for scenarios where RF signatures are weak, jammed, or absent (e.g., autonomous waypoint flights with no active control link).
Performance Summary
- Overall accuracy: 88.9%
- Shahed-series recall: 92.3% — Distinctive turbine signature provides strong acoustic fingerprint.
- Quadcopter recall: 90.9% — Multi-rotor blade-pass frequency is reliable in low-wind conditions.
- Fixed-wing recall: 81.2% — Propeller noise is more easily masked by ambient wind.
Microphone Placement Effects
- Mount microphone arrays at least 2m above ground to reduce ground-reflection interference.
- Avoid placement near HVAC units, generators, or road traffic — continuous broadband noise degrades SNR.
- Wind screens are mandatory for outdoor deployment. Without them, wind noise above 15 km/h saturates the low-frequency bins.
- For multi-mic arrays (bearing estimation), maintain calibrated inter-element spacing within 1mm tolerance.
Background Noise Impact
Acoustic detection performance degrades with ambient noise level:
| Ambient Level | Detection Range | Accuracy Impact |
|---|---|---|
| < 45 dBA (rural) | 300–500m | Nominal — full accuracy |
| 45–60 dBA (suburban) | 150–300m | ~5% accuracy reduction |
| 60–75 dBA (urban) | 50–150m | ~15% accuracy reduction |
| > 75 dBA (industrial) | < 50m | Acoustic detection unreliable; RF-only recommended |
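When planning microphone placement, the table above can be encoded as a simple lookup from a measured ambient level to the expected operating envelope. The band edges and ranges come directly from the table; the function name is illustrative.

```python
# Lookup sketch for the ambient-noise table above.
def acoustic_expectation(ambient_dba):
    """Map a measured ambient level (dBA) to (detection range, accuracy impact)."""
    if ambient_dba < 45:
        return ("300-500m", "nominal")
    if ambient_dba < 60:
        return ("150-300m", "~5% accuracy reduction")
    if ambient_dba < 75:
        return ("50-150m", "~15% accuracy reduction")
    return ("<50m", "unreliable; RF-only recommended")

print(acoustic_expectation(52))  # ('150-300m', '~5% accuracy reduction')
```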
7. Calibration Verification
Each sensor maintains a per-sensor calibration profile that accounts for antenna gain patterns, cable losses, and local RF environment baselines. Uncalibrated sensors produce degraded confidence scores.
Calibration Profiles
- Calibration profiles are stored on-sensor at `/opt/varta/config/calibration.json` and synced to Max on registration.
- Profiles include: antenna gain table, cable loss compensation, noise floor baseline per band, and DF array phase offsets (if applicable).
- Calibration is performed during initial deployment and should be re-run after any hardware change (antenna replacement, cable rerouting, firmware update).
Uncalibrated Penalty
When a sensor's calibration profile is missing or expired (older than 90 days), a 50% penalty is applied to all confidence scores from that sensor. This is a deliberate safety mechanism to prevent uncalibrated sensors from producing high-confidence false results.
# Calibration status in detection output
{
"sensor_id": "pro-04",
"calibration_status": "expired",
"calibration_age_days": 112,
"confidence_penalty": 0.50,
"original_confidence": 0.78,
"penalized_confidence": 0.39
}
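The penalty arithmetic in the example above can be sketched in a few lines. The 50% figure and 90-day threshold come from this section; the function name and return shape are illustrative.

```python
# Illustrative sketch of the uncalibrated penalty: a flat 50% reduction when
# the calibration profile is missing or older than 90 days.
def apply_calibration_penalty(confidence, calibration_age_days):
    """Return (penalized confidence, penalty applied)."""
    expired = calibration_age_days is None or calibration_age_days > 90
    penalty = 0.50 if expired else 0.0
    return round(confidence * (1 - penalty), 2), penalty

# Matches the pro-04 example above:
print(apply_calibration_penalty(0.78, 112))  # (0.39, 0.5)
```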
Running a Calibration Check
To verify calibration status across your sensor fleet:
# Check calibration status for all sensors
curl http://localhost:8080/api/sensors/calibration-status
# Example response
{
"sensors": [
{"id": "pro-01", "calibrated": true, "age_days": 23, "status": "valid"},
{"id": "pro-02", "calibrated": true, "age_days": 87, "status": "expiring_soon"},
{"id": "pro-03", "calibrated": false, "age_days": null, "status": "uncalibrated"},
{"id": "pro-04", "calibrated": true, "age_days": 112, "status": "expired"}
]
}
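Fleet responses like the one above can be triaged into action buckets with a short script. The status strings match the example response; the function name and bucket policy are assumptions.

```python
# Sketch: split a calibration-status response into "recalibrate now" and
# "schedule for next maintenance window" buckets.
def triage(sensors):
    needs_now = [s["id"] for s in sensors if s["status"] in ("expired", "uncalibrated")]
    next_window = [s["id"] for s in sensors if s["status"] == "expiring_soon"]
    return needs_now, next_window

# Using the example fleet above:
fleet = [
    {"id": "pro-01", "status": "valid"},
    {"id": "pro-02", "status": "expiring_soon"},
    {"id": "pro-03", "status": "uncalibrated"},
    {"id": "pro-04", "status": "expired"},
]
print(triage(fleet))  # (['pro-03', 'pro-04'], ['pro-02'])
```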
Sensors with status `expiring_soon` (75–90 days) should be scheduled for recalibration during the next maintenance window. The penalty is not applied until the 90-day threshold is crossed.

Triggering Recalibration
Initiate a calibration run on a specific sensor:
curl -X POST http://localhost:8080/api/sensors/{sensor_id}/calibrate \
-H "Content-Type: application/json" \
-d '{"mode": "full", "duration_minutes": 15}'
The sensor will enter calibration mode, scan all configured bands, establish noise floor baselines, and update its calibration profile. The sensor is temporarily offline during calibration.
8. Generating Quality Reports
Quality reports aggregate detection data over time to identify trends in confidence scores, false positive rates, and sensor health.
Detection History API
Retrieve paginated detection history for analysis:
# Fetch last 100 detections
curl "http://localhost:8080/api/detections?limit=100"
# Fetch detections in a time range with confidence filter
curl "http://localhost:8080/api/detections?from=2026-03-01T00:00:00Z&to=2026-03-02T00:00:00Z&min_confidence=0.5"
# Fetch detections grouped by sensor
curl "http://localhost:8080/api/detections/by-sensor?from=2026-03-01T00:00:00Z"
CSV Export from Max
The Max UI provides a one-click CSV export from the detection history view. The export includes all detection fields, provenance metadata, and confidence breakdowns.
For programmatic CSV export:
# Export detections as CSV
curl "http://localhost:8080/api/detections/export?format=csv&from=2026-03-01&to=2026-03-07" \
--output detections_week.csv
# Export with specific columns
curl "http://localhost:8080/api/detections/export?format=csv&fields=timestamp,confidence,class,sensor_id,snr_db" \
--output detections_summary.csv
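Once exported, the CSV can be post-processed with the standard library alone. This sketch assumes the column set from the `fields=` example above; the function name is illustrative.

```python
import csv
import io

# Count detections below a confidence threshold in an exported CSV.
def low_confidence_count(csv_text, threshold=0.5):
    reader = csv.DictReader(io.StringIO(csv_text))
    return sum(1 for row in reader if float(row["confidence"]) < threshold)

# Sample rows in the exported column layout (values are made up):
sample = """timestamp,confidence,class,sensor_id,snr_db
2026-03-01T10:02:11Z,0.82,DJI-OcuSync,pro-01,18.4
2026-03-01T10:05:40Z,0.31,unknown,pro-02,7.9
"""
print(low_confidence_count(sample))  # 1
```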
Confidence Distribution Analysis
Use the statistics endpoint to analyze confidence score distributions over a given period:
curl "http://localhost:8080/api/quality/confidence-stats?from=2026-03-01&to=2026-03-07"
# Example response
{
"period": "2026-03-01 to 2026-03-07",
"total_detections": 847,
"confidence_distribution": {
"0.0-0.2": 12,
"0.2-0.4": 38,
"0.4-0.6": 156,
"0.6-0.8": 412,
"0.8-1.0": 229
},
"mean_confidence": 0.67,
"median_confidence": 0.71,
"false_positive_reports": 14,
"fp_rate_percent": 1.65,
"snr_gate_triggered_count": 38
}
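As a cross-check during review, the reported FP rate can be recomputed from the same response. The two-decimal rounding is an assumption matching the example output.

```python
# Recompute fp_rate_percent from the stats payload above.
def fp_rate_percent(stats):
    return round(100 * stats["false_positive_reports"] / stats["total_detections"], 2)

print(fp_rate_percent({"false_positive_reports": 14, "total_detections": 847}))  # 1.65
```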
Periodic Quality Review
Recommended quality review cadence:
- Daily: Check `/api/quality/confidence-stats` for anomalies in FP rate or mean confidence shifts.
- Weekly: Export CSV and review low-confidence detections. Cross-reference with any known drone activity or FP reports.
- Monthly: Full calibration status audit, model version consistency check across fleet, and signature database update verification.