Support Guide
Troubleshooting
Diagnostic procedures and resolution guides for common issues across all Varta product tiers. Work through the relevant section step-by-step before contacting support.
1. SDR Connection Issues
Covers PlutoSDR, HAMGEEK, and Fish Ball SDR devices failing to enumerate or connect.
Symptoms
- SDR device not detected after plugging in USB
- Sensor status shows NO_SDR or DISCONNECTED
- USB enumeration errors in system logs
Diagnostic Steps
Step 1 — Verify USB enumeration
lsusb | grep -iE "analog|pluto|hamgeek"
dmesg | grep -i pluto
If the device does not appear in lsusb output, try a different USB port or cable. USB 3.0 ports are required for full-bandwidth operation.
Step 2 — Check IIO subsystem
iio_info -s
This should list all available IIO contexts. For network-attached PlutoSDR units:
iio_info -u ip:192.168.2.1
Step 3 — Verify network link (network-mode SDRs)
ip link show
ping -c 3 192.168.2.1
Step 4 — Driver check
modinfo industrialio
dmesg | tail -30
Note: PlutoSDR firmware v0.38+ is required for Varta compatibility. Check firmware with iio_attr -d ad9361-phy fw_version.
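The enumeration check in Step 1 can be wrapped in a small helper. This is a minimal sketch: the grep pattern simply mirrors the device names above, and the function inspects whatever text you pass it, so it can be pointed at live `lsusb` output.

```shell
# Return success when a supported SDR appears in the given lsusb output.
# The pattern list mirrors the devices covered by this section.
sdr_present() {
  printf '%s\n' "$1" | grep -qiE 'analog|pluto|hamgeek'
}

# Live usage (requires attached hardware):
#   if sdr_present "$(lsusb)"; then echo "SDR detected"; else echo "NO SDR"; fi
```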
2. MQTT Connectivity
Issues connecting to the Varta MQTT broker, including TLS and mTLS authentication failures.
Symptoms
- Sensor nodes fail to publish or subscribe
- TLS handshake errors in broker logs
- Certificate expiry or CN mismatch warnings
Diagnostic Steps
Step 1 — Test broker reachability
mosquitto_sub -h broker.local -p 8883 -t '#' --cafile /etc/varta/certs/ca.pem -v
If connection is refused, verify the broker process is running:
systemctl status mosquitto
journalctl -u mosquitto --since "10 min ago"
Step 2 — Check certificate expiry
openssl x509 -in /etc/varta/certs/client.pem -noout -dates
openssl x509 -in /etc/varta/certs/ca.pem -noout -dates
Step 3 — Verify mTLS handshake
openssl s_client -connect broker.local:8883 \
  -CAfile /etc/varta/certs/ca.pem \
  -cert /etc/varta/certs/client.pem \
  -key /etc/varta/certs/client.key
Step 4 — Inspect broker ACLs
Ensure the sensor's client ID has publish/subscribe permissions on the required topics (varta/sensors/+/detections, varta/health/+).
Warning: Never use -t '#' subscriptions in production. This is for diagnostics only and can overwhelm the client with traffic on busy deployments.
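The expiry check in Step 2 can be made non-interactive with `openssl x509 -checkend`, which exits non-zero when a certificate expires within the given number of seconds. A sketch, assuming the certificate paths used above:

```shell
# Warn when a certificate expires within the next N days (default 30).
# openssl's -checkend takes seconds and exits non-zero on pending expiry.
warn_if_expiring() {
  local cert="$1" days="${2:-30}"
  if openssl x509 -in "$cert" -noout -checkend $(( days * 86400 )) >/dev/null; then
    echo "OK: $cert valid for at least $days more days"
  else
    echo "WARNING: $cert expires within $days days"
  fi
}

# Example:
#   warn_if_expiring /etc/varta/certs/client.pem 30
#   warn_if_expiring /etc/varta/certs/ca.pem 30
```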
3. Detection Quality
Addressing high false positive rates, missed detections, and low confidence scores.
Symptoms
- Confidence scores consistently below 0.6
- Known drone flights not generating alerts
- Excessive false positives from Wi-Fi routers, Bluetooth, or other RF sources
Diagnostic Steps
Step 1 — Check baseline calibration
curl -s http://localhost:8080/api/v1/sensor/baseline | python3 -m json.tool
If baseline_age_seconds exceeds 86400 (24 hours), recalibrate:
curl -X POST http://localhost:8080/api/v1/sensor/recalibrate
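This check-and-recalibrate decision can be scripted. The threshold comparison below is the testable core; the commented usage is a sketch that assumes `jq` is installed for extracting the `baseline_age_seconds` field named above.

```shell
# Return success when the baseline is older than 24 h (86400 s).
baseline_stale() {
  [ "$1" -gt 86400 ]
}

# Live usage (assumes jq; field name as shown in the API response above):
#   age=$(curl -s http://localhost:8080/api/v1/sensor/baseline | jq .baseline_age_seconds)
#   baseline_stale "$age" && curl -X POST http://localhost:8080/api/v1/sensor/recalibrate
```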
Step 2 — Verify CNN model version
curl -s http://localhost:8080/api/v1/model/info
Compare the model_version field against the latest release. Outdated models miss newer drone protocols.
Step 3 — Check signature database
ls -la /opt/varta/data/signatures/
cat /opt/varta/data/signatures/version.json
Signature database updates are distributed via the Varta update channel. Ensure the node has pulled the latest definitions.
Step 4 — Review detection thresholds
In /etc/varta/detection.yaml, verify that confidence_threshold and snr_minimum are appropriate for the deployment environment. Urban environments typically require higher thresholds (0.75+) to suppress noise.
Note: After recalibration, allow 5–10 minutes for the baseline to stabilize before evaluating detection quality.
4. Sensor Health
Resolving DEGRADED and FAULT sensor states, stale baselines, and calibration failures.
Symptoms
- Sensor dashboard shows DEGRADED or FAULT state
- Health heartbeats stop arriving at the coordinator
- Calibration routines fail or time out
Diagnostic Steps
Step 1 — Query the health API
curl -s http://localhost:8080/api/v1/health | python3 -m json.tool
Key fields to inspect: state, uptime_seconds, last_calibration, error_count.
Step 2 — Check for stale baseline
A baseline older than 24 hours in a changing RF environment will cause drift. The health API reports baseline_stale: true when this condition is detected.
Step 3 — Restart the sensor service
sudo systemctl restart varta-sensor
journalctl -u varta-sensor -f
Watch for initialization errors during startup. A clean start should report Sensor initialized, state=NOMINAL within 30 seconds.
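The post-restart check can be automated by polling the health API until the sensor reports NOMINAL. The JSON test below is a grep-based sketch (use `jq` if available); the endpoint and state string match those used elsewhere in this guide.

```shell
# Return success when a health JSON document reports state NOMINAL.
health_is_nominal() {
  printf '%s' "$1" | grep -q '"state"[[:space:]]*:[[:space:]]*"NOMINAL"'
}

# Poll the live endpoint for up to $1 seconds (default 30).
wait_for_nominal() {
  local timeout="${1:-30}" elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if health_is_nominal "$(curl -s http://localhost:8080/api/v1/health)"; then
      echo "sensor NOMINAL after ${elapsed}s"
      return 0
    fi
    sleep 1
    elapsed=$(( elapsed + 1 ))
  done
  echo "sensor not NOMINAL within ${timeout}s" >&2
  return 1
}
```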
Step 4 — Force recalibration
curl -X POST http://localhost:8080/api/v1/sensor/recalibrate \
-H "Content-Type: application/json" \
-d '{"force": true, "duration_seconds": 120}'
Warning: Forced recalibration temporarily disables detection. Coordinate with the operations team before running this on a live deployment.
5. Acoustic Subsystem
Troubleshooting microphone detection, audio pipeline errors, and acoustic classification accuracy.
Symptoms
- Acoustic sensor reports NO_AUDIO_DEVICE
- Audio pipeline crashes or produces silence
- Classification confidence is consistently low despite audible drone activity
Diagnostic Steps
Step 1 — List audio capture devices
arecord -l
Verify the expected microphone array appears in the device list. If missing, check USB connections and ALSA configuration.
Step 2 — Test audio capture
arecord -D hw:0,0 -f S16_LE -r 48000 -c 1 -d 5 /tmp/test_capture.wav
aplay /tmp/test_capture.wav
Verify the recording contains actual audio, not silence or noise artifacts.
Step 3 — Verify sample rate configuration
The acoustic model expects 48 kHz input. Check /etc/varta/acoustic.yaml:
grep sample_rate /etc/varta/acoustic.yaml
Mismatched sample rates cause the classifier to produce garbage outputs.
Step 4 — Check model files
ls -la /opt/varta/models/acoustic/
md5sum /opt/varta/models/acoustic/classifier_v*.onnx
Compare checksums against the release manifest to rule out corrupted model files.
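If the release manifest is distributed in `md5sum` format (one `<checksum>  <filename>` line per file; an assumption here, so check your release notes), the comparison can be done in one step with `md5sum -c`:

```shell
# Verify every file listed in a manifest against its recorded checksum.
# $1 = directory holding the model files, $2 = manifest in md5sum format.
verify_models() {
  ( cd "$1" && md5sum -c "$2" )
}

# Example (manifest path is hypothetical):
#   verify_models /opt/varta/models/acoustic /opt/varta/models/acoustic/manifest.md5
```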
Step 5 — Review pipeline logs
journalctl -u varta-acoustic --since "30 min ago" --no-pager
Note: Outdoor deployments should use weatherproof microphone enclosures. Wind noise degrades classification accuracy significantly without proper windscreens.
6. Network & Time Sync
NTP, PTP, and GPS time synchronization failures that affect TDOA eligibility and event correlation.
Symptoms
- TDOA geolocation returns wildly inaccurate positions
- Sensor flagged as TDOA_INELIGIBLE due to clock drift
- Event timestamps misaligned across sensors
Diagnostic Steps
Step 1 — Check chrony status
chronyc sources -v
chronyc tracking
The System time offset should be under 1 ms for TDOA eligibility. If offset exceeds 10 ms, investigate the time source.
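The offset comparison can be scripted by parsing the `System time` line of `chronyc tracking` output (format: `System time : <offset> seconds fast/slow of NTP time`):

```shell
# Pull the offset (in seconds) out of `chronyc tracking` output.
offset_seconds() {
  printf '%s\n' "$1" | awk '/^System time/ {print $4}'
}

# Success when |offset| is under the 1 ms TDOA eligibility bound.
tdoa_eligible() {
  awk -v o="$1" 'BEGIN { if (o < 0) o = -o; exit !(o < 0.001) }'
}

# Live usage:
#   tdoa_eligible "$(offset_seconds "$(chronyc tracking)")" && echo "TDOA eligible"
```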
Step 2 — Verify GPS daemon (if GPS-disciplined)
systemctl status gpsd
gpspipe -w -n 5
Confirm the GPS receiver has a 3D fix. Indoor installations may need an external antenna.
Step 3 — Check PTP status (if using PTP)
pmc -u -b 0 'GET CURRENT_DATA_SET'
journalctl -u ptp4l --since "10 min ago"
Step 4 — Query the time sync API
curl -s http://localhost:8080/api/v1/time/status | python3 -m json.tool
Fields: sync_source, offset_us, tdoa_eligible, last_sync.
Step 5 — Force NTP resync
sudo chronyc makestep
chronyc sources
Warning: Stepping the clock on a running system can cause log discontinuities and transient detection anomalies. When the offset is small, prefer letting chronyd slew the clock gradually rather than forcing a step with makestep.
7. Performance Issues
Diagnosing high CPU/GPU usage, inference latency spikes, and memory pressure.
Symptoms
- Detection latency exceeds SLA thresholds (>500 ms)
- System load average consistently above CPU core count
- OOM kills in system journal
Diagnostic Steps
Step 1 — Check system resource usage
htop
free -h
df -h /opt/varta
Step 2 — Jetson-specific GPU monitoring
tegrastats --interval 1000
Monitor GPU utilization, memory bandwidth, and thermal throttling. If the GPU temperature exceeds 85°C, check cooling and airflow.
Step 3 — Check inference latency
curl -s http://localhost:8080/api/v1/metrics | grep inference_latency
The p99_latency_ms value should remain below 200 ms for Varta Pro and 400 ms for Varta Mini.
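Those per-tier bounds can be encoded in a small check. The metric name and line format returned by /api/v1/metrics are assumptions here; adapt the parsing to the actual output.

```shell
# Success when a p99 latency sample (ms) is within the tier's SLA bound:
# 200 ms for Varta Pro, 400 ms for Varta Mini.
latency_ok() {
  local p99="$1" tier="$2" limit=200
  if [ "$tier" = "mini" ]; then limit=400; fi
  awk -v l="$p99" -v lim="$limit" 'BEGIN { exit !(l < lim) }'
}

# Hypothetical live usage, assuming a "p99_latency_ms <value>" metric line:
#   p99=$(curl -s http://localhost:8080/api/v1/metrics | awk '/p99_latency_ms/ {print $2}')
#   latency_ok "$p99" pro || echo "latency SLA breach"
```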
Step 4 — Identify memory pressure
journalctl -k | grep -i "oom\|out of memory"
head -5 /proc/meminfo
Step 5 — Model optimization check
Ensure TensorRT-optimized models are deployed on Jetson platforms. Non-optimized ONNX models run 3–5x slower:
ls -la /opt/varta/models/*.trt
grep engine_type /opt/varta/config/inference.yaml
Note: On Varta Mini (Raspberry Pi), inference is CPU-only. Limit concurrent detection channels to 2 for acceptable latency.
8. Log Collection
How to gather diagnostic logs before contacting Varta support. A complete log bundle accelerates resolution.
What to Collect
System journal (last 2 hours)
journalctl --since "2 hours ago" --no-pager > /tmp/varta-journal.log
Application logs
mkdir -p /tmp/varta-app-logs
cp /var/log/varta/*.log /tmp/varta-app-logs/
tar czf /tmp/varta-app-logs.tar.gz /tmp/varta-app-logs/
Sensor health snapshot
curl -s http://localhost:8080/api/v1/health > /tmp/varta-health.json
curl -s http://localhost:8080/api/v1/metrics > /tmp/varta-metrics.json
I/Q capture export (if RF issue)
curl -X POST http://localhost:8080/api/v1/capture/export \
-H "Content-Type: application/json" \
-d '{"duration_seconds": 10, "output": "/tmp/iq_capture.bin"}'
System info
uname -a > /tmp/varta-sysinfo.txt
cat /etc/varta/version >> /tmp/varta-sysinfo.txt
lsusb >> /tmp/varta-sysinfo.txt
ip addr >> /tmp/varta-sysinfo.txt
Bundle everything
tar czf /tmp/varta-diagnostics-$(hostname)-$(date +%Y%m%d).tar.gz \
  /tmp/varta-journal.log \
  /tmp/varta-app-logs.tar.gz \
  /tmp/varta-health.json \
  /tmp/varta-metrics.json \
  /tmp/varta-sysinfo.txt
Note: I/Q captures can be large (50+ MB per second of data). Only include them when investigating RF detection issues. Upload the diagnostics bundle to the support portal or email it to support@varta-systems.com.
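The collection steps above can be combined into a single script. This is a sketch using the same file locations and endpoints as the individual steps; run it on the affected node and upload the resulting archive.

```shell
# Build the bundle filename (separated out so it is easy to check).
bundle_name() {
  printf 'varta-diagnostics-%s-%s.tar.gz' "$1" "$2"
}

# Gather journal, app logs, API snapshots, and system info, then archive.
collect_diagnostics() {
  local out="/tmp/$(bundle_name "$(hostname)" "$(date +%Y%m%d)")"
  mkdir -p /tmp/varta-app-logs
  journalctl --since "2 hours ago" --no-pager > /tmp/varta-journal.log
  cp /var/log/varta/*.log /tmp/varta-app-logs/ 2>/dev/null || true
  curl -s http://localhost:8080/api/v1/health  > /tmp/varta-health.json
  curl -s http://localhost:8080/api/v1/metrics > /tmp/varta-metrics.json
  { uname -a; cat /etc/varta/version; lsusb; ip addr; } > /tmp/varta-sysinfo.txt
  tar czf "$out" -C /tmp varta-journal.log varta-app-logs \
      varta-health.json varta-metrics.json varta-sysinfo.txt
  echo "wrote $out"
}
```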