Surveillance Infrastructure Mapping
Effective counter-surveillance begins with knowing what you're up against. This section covers camera identification, coverage analysis, blind-spot detection, PTZ timing analysis, and network-based discovery — all within lawful, authorized assessment contexts.
Authorization Required
Camera Type Identification
| Type | Form Factor | Typical FoV | IR Range | Best Identification Range | Key Weakness |
|---|---|---|---|---|---|
| Fixed Bullet | Cylindrical, wall-mount | 30-80° | 30-50m | 5-25m | Narrow FoV, easily avoided laterally |
| Dome | Hemisphere, ceiling/wall | 60-110° | 20-30m | 3-15m | Uncertain aim — hard to tell exact orientation |
| PTZ | Motorized dome, larger housing | 2-60° (variable) | 50-200m | 10-100m (zoom) | Only covers one direction at a time |
| Fisheye | Flat dome, ceiling-mount | 180-360° | 10-15m | 2-5m | Heavy distortion at edges, low resolution per area |
| Multi-Sensor | Large dome, 4-8 lenses | 180-360° | 30-50m | 5-25m | Expensive — usually only at high-value locations |
| Thermal/FLIR | Ruggedized housing | 20-50° | N/A (passive) | 50-300m (detection) | No face recognition — detection/tracking only |
Field Mapping Methodology
Phase 1: Passive OSINT
- Search Shodan/Censys for exposed devices in the target IP range
- Review EFF Atlas of Surveillance for known deployments
- Check city transparency reports and procurement records
- Use Google Street View to identify visible camera housings
- Review building permit records for security system installations
Phase 2: Physical Observation
- Walk target perimeter noting camera positions, heights, and orientations
- Log camera type (bullet, dome, PTZ) and estimate FoV from lens size
- Check for IR illuminator rings around lens housing
- Note cable runs — PoE, coax, or wireless backhaul
- Identify non-visual sensors: BLE beacons, Wi-Fi APs, access control readers
Phase 3: Coverage Modeling
- Estimate horizontal FoV from focal length and sensor size
- Calculate effective identification range (pixels per face)
- Map coverage polygons with tool assistance
- Overlay multiple cameras to find overlap zones and blind spots
- Differentiate detection zones (movement) from identification zones (face/plate)
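The FoV and range arithmetic behind these steps can be sketched in a few lines of Python (a rough model: the 80 px/face threshold and 0.16 m face width are common rules of thumb, not vendor specifications):

```python
import math

def hfov_deg(focal_mm: float, sensor_w_mm: float) -> float:
    """Horizontal FoV from the pinhole-camera relation."""
    return 2 * math.degrees(math.atan(sensor_w_mm / (2 * focal_mm)))

def id_range_m(pixels_wide: int, focal_mm: float, sensor_w_mm: float,
               face_w_m: float = 0.16, px_per_face: int = 80) -> float:
    """Max distance at which a face still spans px_per_face pixels."""
    half_fov = math.radians(hfov_deg(focal_mm, sensor_w_mm) / 2)
    return (pixels_wide * face_w_m) / (px_per_face * 2 * math.tan(half_fov))

# 4 mm lens on a ~5 mm wide (1/2.8") sensor at 2MP (1920 px):
print(round(hfov_deg(4.0, 5.0), 1))       # ~64.0 degrees
print(round(id_range_m(1920, 4.0, 5.0), 1))  # ~3.1 m
```

The short identification range is the point of this exercise: a wide-angle 2MP camera detects movement far beyond the distance at which it can identify a face.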
Phase 4: Temporal Analysis
- Log PTZ sweep patterns over 3+ complete cycles
- Identify dwell intervals at each patrol position
- Note lighting transitions (daylight → IR mode) and changed behavior
- Check for time-based behavioral changes (rush hour, night hours)
- Document maintenance patterns (lens cleaning, repositioning)
Open-Source Discovery Queries
Use public search engines to identify exposed surveillance devices before field assessment.
# Prerequisites (CLI use only): pip install shodan censys
# Note: the filters below use web UI syntax; they can also be passed to the
# "shodan search" / "censys search" CLI tools once installed
# Shodan / Censys queries for exposed surveillance infrastructure
# Use for DEFENSIVE discovery only — authorized environments
# --- Shodan Filters ---
# Exposed DVRs (web interface)
http.title:"DVR" port:80 country:"US"
# RTSP streams without authentication
"RTSP/1.0" "200 OK" port:554
# Hikvision cameras with default web UI
product:"Hikvision IP Camera" port:80
# Dahua devices
"DahuaDVR" port:80
# Axis cameras
http.title:"AXIS" port:80
# Generic network camera search
"Server: Network Camera" port:80
# --- Censys Filters ---
# Open RTSP services
services.port:554 AND services.banner:"RTSP"
# Camera web panels
services.http.response.html_title:"DVR" AND services.port:80
# For GeoIP-bounded searches (authorized assessments)
# Combine with location filters:
# geo:40.7,-74.0,5 (NYC area, 5km radius)
Camera Coverage Analysis Tool
Python framework for mapping camera positions, calculating FoV, estimating identification range, and finding blind spots.
#!/usr/bin/env python3
# Uses only the Python standard library (no third-party packages required)
"""Camera infrastructure mapping and coverage analysis tool.
Maps sensor positions, estimates coverage zones, and identifies blind spots."""
import math
from dataclasses import dataclass
from typing import List, Tuple
@dataclass
class Camera:
id: str
lat: float
lon: float
type: str # 'fixed', 'ptz', 'fisheye', 'dome', 'bullet'
height_m: float # mounting height
focal_mm: float # lens focal length
sensor_w_mm: float # sensor width (default 1/2.8" = 5.0mm) — most common in IP surveillance cameras
orientation_deg: float # compass heading (0=North)
tilt_deg: float # downward tilt from horizontal
has_ir: bool
ir_range_m: float
resolution: str # e.g., '2MP', '4MP', '8MP'
notes: str = ""
@property
def hfov_deg(self) -> float:
"""Horizontal field of view in degrees."""
return 2 * math.degrees(math.atan(self.sensor_w_mm / (2 * self.focal_mm)))
@property
def coverage_distance_m(self) -> float:
        """Effective identification distance, assuming ~80 pixels across the face."""
        # Rule of thumb: ~80 pixels across the face for reliable recognition at
        # identification quality (a common industry heuristic, not a formal standard).
        # At 2MP (1920 px wide), the distance scales with resolution and FoV
pixels_wide = {"2MP": 1920, "4MP": 2560, "8MP": 3840}.get(self.resolution, 1920)
        face_width_m = 0.16  # approximate average adult face width in meters
required_ppf = 80 # pixels per face
max_dist = (pixels_wide * face_width_m) / (required_ppf * 2 * math.tan(math.radians(self.hfov_deg / 2)))
return round(max_dist, 1)
def estimate_coverage_polygon(cam: Camera, num_points: int = 20) -> List[Tuple[float, float]]:
"""Generate approximate coverage polygon for a single camera."""
half_fov = math.radians(cam.hfov_deg / 2)
max_range = cam.coverage_distance_m
center_rad = math.radians(cam.orientation_deg)
points = [(cam.lat, cam.lon)] # camera position as vertex
for i in range(num_points + 1):
angle = center_rad - half_fov + (2 * half_fov * i / num_points)
        dx = max_range * math.sin(angle) / (111320 * math.cos(math.radians(cam.lat)))  # degrees of longitude shrink by cos(latitude)
        dy = max_range * math.cos(angle) / 111320  # ~111320 m per degree of latitude
points.append((cam.lat + dy, cam.lon + dx))
points.append((cam.lat, cam.lon))
return points
def find_blind_spots(cameras: List[Camera], grid_size: float = 5.0,
bounds: Tuple[float, float, float, float] = None):
"""Identify areas not covered by any camera."""
if not bounds:
lats = [c.lat for c in cameras]
lons = [c.lon for c in cameras]
bounds = (min(lats)-0.001, min(lons)-0.001, max(lats)+0.001, max(lons)+0.001)
blind_spots = []
lat_steps = int((bounds[2] - bounds[0]) * 111320 / grid_size)
lon_steps = int((bounds[3] - bounds[1]) * 111320 / grid_size)
for i in range(lat_steps):
for j in range(lon_steps):
lat = bounds[0] + i * grid_size / 111320
lon = bounds[1] + j * grid_size / 111320
covered = False
for cam in cameras:
                dist = math.hypot((lat - cam.lat) * 111320,
                                  (lon - cam.lon) * 111320 * math.cos(math.radians(cam.lat)))
                if dist <= cam.coverage_distance_m:
                    # Check if point is within FOV cone (bearing from camera to point)
                    bearing = math.degrees(math.atan2(
                        (lon - cam.lon) * math.cos(math.radians(cam.lat)),
                        lat - cam.lat)) % 360
angle_diff = abs(bearing - cam.orientation_deg) % 360
if angle_diff > 180:
angle_diff = 360 - angle_diff
if angle_diff <= cam.hfov_deg / 2:
covered = True
break
if not covered:
blind_spots.append((lat, lon))
total_points = lat_steps * lon_steps
coverage = 1 - len(blind_spots) / total_points if total_points > 0 else 0
print(f"Coverage analysis: {coverage:.1%} of area covered")
print(f"Blind spots: {len(blind_spots)} grid points uncovered")
return blind_spots
# Example: Map a small facility
cameras = [
Camera("CAM-01", 40.7128, -74.0060, "bullet", 3.5, 4.0, 5.0,
45, 15, True, 30, "4MP", "Main entrance"),
Camera("CAM-02", 40.7130, -74.0058, "ptz", 5.0, 8.0, 5.0,
180, 10, True, 50, "2MP", "Parking lot"),
Camera("CAM-03", 40.7129, -74.0062, "fisheye", 3.0, 1.2, 5.0,
0, 90, False, 0, "8MP", "Lobby ceiling"),
]
for cam in cameras:
print(f"{cam.id}: HFoV={cam.hfov_deg:.1f}°, Range={cam.coverage_distance_m}m, "
f"IR={'Yes' if cam.has_ir else 'No'} ({cam.ir_range_m}m)")
blind_spots = find_blind_spots(cameras)
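Coverage polygons from estimate_coverage_polygon can be rendered in QGIS or geojson.io once wrapped as GeoJSON. A minimal sketch (note that GeoJSON uses lon, lat vertex order; the sample points are illustrative):

```python
import json

def polygon_to_geojson(points, cam_id):
    """Wrap a list of (lat, lon) vertices as a GeoJSON Feature."""
    ring = [[lon, lat] for lat, lon in points]  # GeoJSON is (lon, lat)
    if ring[0] != ring[-1]:
        ring.append(ring[0])  # GeoJSON linear rings must be closed
    return {"type": "Feature",
            "properties": {"camera": cam_id},
            "geometry": {"type": "Polygon", "coordinates": [ring]}}

# e.g. a (truncated) coverage polygon for one camera
pts = [(40.7128, -74.0060), (40.7130, -74.0058), (40.7129, -74.0062)]
feature = polygon_to_geojson(pts, "CAM-01")
print(json.dumps(feature)[:72])
```

Collect one Feature per camera into a FeatureCollection to overlay the whole inventory on a basemap.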
PTZ Timing Analysis
Structured approach to logging PTZ camera patrol patterns and identifying temporal blind windows.
#!/bin/bash
# PTZ (Pan-Tilt-Zoom) camera timing analysis
# Note: CSV data below is a TEMPLATE example — replace with your actual observation data
# Observe and log dwell/sweep patterns to identify temporal blind spots
echo "============================================"
echo " PTZ Camera Pattern Timing Analysis"
echo "============================================"
# Manual observation log (record actual timings)
# CSV columns: timestamp, camera_id, pan_deg, tilt_deg, dwell_seconds, sweep_speed, notes
cat << 'EOF' > ptz_log_template.csv
timestamp,camera_id,pan_position_deg,tilt_position_deg,dwell_seconds,sweep_speed,notes
09:00:00,PTZ-01,0,15,30,slow,facing north - main entrance
09:00:30,PTZ-01,45,15,20,moderate,sweeping NE - parking row A
09:00:50,PTZ-01,90,10,25,slow,facing east - service door
09:01:15,PTZ-01,135,15,15,fast,sweeping SE - brief pause
09:01:30,PTZ-01,180,20,30,slow,facing south - loading dock
09:02:00,PTZ-01,0,15,30,slow,returned to north - cycle restart
EOF
echo "Template created: ptz_log_template.csv"
echo ""
# Analysis calculations
echo "--- PTZ Cycle Analysis ---"
echo "Full cycle time: ~120 seconds"
echo "North dwell: 30s (25% of cycle)"
echo "East dwell: 25s (21% of cycle)"
echo "South dwell: 30s (25% of cycle)"
echo "Transit: 35s (29% of cycle)"
echo ""
echo "Blind spot windows (camera not observing North):"
echo " 09:00:30 - 09:02:00 (90s window)"
echo " Repeats every ~120s"
echo ""
echo "Recommendation: Log 3+ complete cycles to establish"
echo "pattern confidence. Note: some PTZ cameras use random"
echo "patrol patterns that vary cycle-to-cycle."
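The dwell and blind-window figures echoed above can be computed from a filled-in log rather than by hand. A sketch in Python, assuming the template's column layout (sample rows abbreviated from the template):

```python
import csv
import io

# Sample rows in the template's format; replace with your observed data
LOG = """timestamp,camera_id,pan_position_deg,tilt_position_deg,dwell_seconds,sweep_speed,notes
09:00:00,PTZ-01,0,15,30,slow,facing north
09:00:30,PTZ-01,45,15,20,moderate,sweeping NE
09:00:50,PTZ-01,90,10,25,slow,facing east
09:01:15,PTZ-01,135,15,15,fast,sweeping SE
09:01:30,PTZ-01,180,20,30,slow,facing south
"""

rows = list(csv.DictReader(io.StringIO(LOG)))
dwells = {r["pan_position_deg"]: int(r["dwell_seconds"]) for r in rows}
cycle = sum(int(r["dwell_seconds"]) for r in rows)

print(f"cycle ~{cycle}s")
for pan, d in dwells.items():
    print(f"  pan {pan:>3} deg: {d}s ({d / cycle:.0%})")
# A bearing's blind window per cycle = cycle length minus the dwell covering it
print(f"north blind window ~{cycle - dwells['0']}s per cycle")
```

With 3+ logged cycles, compare the per-cycle dwell values; high variance suggests a randomized patrol pattern rather than a fixed tour.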
Network Camera Discovery
Network-based discovery for authorized infrastructure assessments — includes ONVIF device probing.
#!/bin/bash
# Prerequisites: apt install nmap python3
# Network-based camera discovery for authorized infrastructure assessment
# ONLY use on networks you own or have written authorization for
# Quick scan for common camera ports
echo "=== Camera Port Discovery ==="
# ⚠ Replace 192.168.1.0/24 with your authorized target subnet
nmap -sS -p 80,443,554,8080,8443,37777,34567 \
--open -T4 192.168.1.0/24 \
-oX camera_scan.xml
# Detailed scan on found hosts
echo ""
echo "=== Service Identification ==="
# ⚠ WARNING: rtsp-url-brute actively probes devices for valid RTSP stream paths — it is not passive discovery.
# Only use on networks you are authorized to test. This WILL generate log entries on target devices.
nmap -sV -p 80,443,554,8080,8443,37777,34567 \
--open 192.168.1.0/24 \
--script=rtsp-url-brute,http-title \
-oX camera_detail.xml
# ONVIF device discovery (cameras supporting ONVIF protocol)
echo ""
echo "=== ONVIF Discovery ==="
# WS-Discovery multicast probe
python3 -c "
import socket
msg = '''<?xml version='1.0' encoding='utf-8'?>
<Envelope xmlns:a='http://schemas.xmlsoap.org/ws/2004/08/addressing'
xmlns:d='http://schemas.xmlsoap.org/ws/2005/04/discovery'
xmlns='http://www.w3.org/2003/05/soap-envelope'>
<Header><a:Action>http://schemas.xmlsoap.org/ws/2005/04/discovery/Probe</a:Action>
<a:MessageID>urn:uuid:probe-1234</a:MessageID>
<a:To>urn:schemas-xmlsoap-org:ws:2005:04:discovery</a:To></Header>
<Body><d:Probe><d:Types>dn:NetworkVideoTransmitter</d:Types></d:Probe></Body>
</Envelope>'''
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 2)
sock.settimeout(5)
sock.sendto(msg.encode(), ('239.255.255.250', 3702))
while True:
try:
data, addr = sock.recvfrom(4096)
print(f'Device found: {addr[0]}')
except socket.timeout:
break
"
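Devices answer the WS-Discovery probe with a SOAP ProbeMatch message whose XAddrs element carries the camera's ONVIF service URL. A minimal parser sketch (the sample response and its 192.0.2.x address are illustrative):

```python
import xml.etree.ElementTree as ET

# Trimmed example of a ProbeMatch reply; addresses are illustrative only
SAMPLE = """<?xml version="1.0"?>
<Envelope xmlns="http://www.w3.org/2003/05/soap-envelope"
          xmlns:d="http://schemas.xmlsoap.org/ws/2005/04/discovery">
  <Body><d:ProbeMatches><d:ProbeMatch>
    <d:XAddrs>http://192.0.2.10/onvif/device_service</d:XAddrs>
  </d:ProbeMatch></d:ProbeMatches></Body>
</Envelope>"""

DISCOVERY_NS = "http://schemas.xmlsoap.org/ws/2005/04/discovery"

def extract_xaddrs(xml_text: str) -> list:
    """Pull ONVIF device-service URLs out of a WS-Discovery ProbeMatch reply."""
    root = ET.fromstring(xml_text)
    return [e.text.strip()
            for e in root.iter(f"{{{DISCOVERY_NS}}}XAddrs") if e.text]

print(extract_xaddrs(SAMPLE))  # ['http://192.0.2.10/onvif/device_service']
```

Feed each raw datagram received by the discovery loop above through this parser to build a device list with service endpoints, not just source IPs.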
Non-Visual Sensors
Drone & UAV Surveillance
Unmanned aerial vehicles are increasingly deployed for law enforcement surveillance, perimeter monitoring, and event overwatch. Understanding their capabilities and limitations is essential for complete infrastructure mapping.
Law Enforcement UAV Platforms
The DJI Matrice series (M30T, M300 RTK) is among the most widely deployed law enforcement aerial surveillance platforms, offering 40+ minute flight times and modular payloads.
- Visual + Thermal: Dual-sensor payloads combine 4K visual with FLIR thermal imaging
- Zoom capability: 200x hybrid zoom enables identification from 500m+ altitude
- Persistent surveillance: Tethered drone systems (e.g., Elistair) enable hours of continuous overwatch
- Autonomous patrol: Pre-programmed flight paths with DJI FlightHub for repeat coverage
Thermal & Multi-Sensor Payloads
Modern surveillance drones carry thermal, visual, and sometimes LiDAR payloads simultaneously, enabling detection in complete darkness and through foliage.
- Thermal sensors detect body heat through moderate vegetation and in zero-light conditions
- Radiometric thermal can measure surface temperature for vehicle engine state detection
- Multi-spectral imaging can differentiate materials and detect camouflage
- Onboard AI can perform real-time person detection and tracking without ground station
Counter-UAS Detection
Detecting surveillance drones is part of a complete infrastructure assessment. Multiple detection modalities exist, each with trade-offs.
- RF scanning: Detect control link and video downlink signals (2.4 GHz, 5.8 GHz, 900 MHz)
- Radar: Micro-Doppler radar can distinguish drones from birds by rotor signature
- Acoustic: Microphone arrays detect rotor noise signatures at 200-500m range
- Visual: AI-assisted camera systems for optical drone detection and classification
Legal Framework
Drone surveillance operates within a patchwork of federal, state, and local regulations that govern both deployment and counter-detection.
- FAA Part 107: Commercial drone operations require certification; law enforcement may operate under COA exemptions
- Local ordinances: Many jurisdictions restrict drone surveillance over private property without a warrant
- Counter-UAS legality: Downing a drone can violate federal law (18 U.S.C. § 32) and RF jamming violates FCC rules — detection only is legal
- Warrant requirements: Florida v. Riley and Dow Chemical v. US set precedent for aerial surveillance expectations
Video Management System (VMS) Identification
During authorized infrastructure assessments, identifying the VMS platform reveals the system's capabilities, default configurations, and known vulnerabilities.
Common VMS Platforms
| VMS | Vendor | Web Interface Fingerprint | Common Ports |
|---|---|---|---|
| XProtect | Milestone | Title: "Milestone XProtect" or "XProtect Web Client" | 80, 8081, 7563 |
| Security Center | Genetec | Title: "Security Center" / Genetec Web App | 443, 4590, 5500 |
| iVMS-4200 | Hikvision | Title: "HIKCENTRAL" or "iVMS" / ISAPI endpoints | 80, 443, 8000 |
| ACC | Avigilon (Motorola) | Title: "Avigilon" / ACC Web Endpoint | 80, 443, 38880 |
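The title fingerprints in the table can drive a first-pass classifier during assessment triage. A sketch (the substring lists follow the table; the matching logic and names are illustrative):

```python
# Substring fingerprints drawn from the table above; extend per vendor docs
VMS_SIGNATURES = {
    "Milestone XProtect": ["milestone xprotect", "xprotect web client"],
    "Genetec Security Center": ["security center", "genetec"],
    "Hikvision (HikCentral/iVMS)": ["hikcentral", "ivms"],
    "Avigilon ACC": ["avigilon", "acc web endpoint"],
}

def identify_vms(http_title: str) -> str:
    """Map an HTTP page title to a likely VMS platform (best-effort)."""
    title = http_title.lower()
    for platform, needles in VMS_SIGNATURES.items():
        if any(needle in title for needle in needles):
            return platform
    return "unknown"

print(identify_vms("XProtect Web Client"))  # Milestone XProtect
print(identify_vms("HIKCENTRAL Login"))     # Hikvision (HikCentral/iVMS)
```

Run it over the http-title output of the nmap scans above to tag each discovered host with its probable VMS before deeper (authorized) testing.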
VMS Vulnerability Patterns
Default credentials: Many VMS installations retain factory-default admin passwords (e.g., Hikvision's historic admin/12345). Always check vendor default credential lists during assessments.
Outdated firmware: Camera firmware and VMS server software are rarely updated after installation. CVE databases contain hundreds of critical vulnerabilities for major VMS platforms — many with public exploits.
Unencrypted streams: RTSP streams are frequently transmitted without TLS, enabling passive interception on the same network segment. ONVIF discovery responses also leak device metadata in cleartext.
Infrastructure Assessment Workflow
- Start with OSINT: exhaust public data sources before any physical or network investigation
- Document everything: maintain a formal camera inventory with GPS coordinates, types, and estimated capabilities
- Revalidate regularly: infrastructure changes — quarterly re-assessment is the minimum cadence
- Layer your analysis: separate detection zones (movement/presence) from identification zones (face/plate recognition)
- Include non-visual: Wi-Fi, BLE, RFID, and acoustic sensors contribute to the surveillance picture
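A camera inventory entry per the workflow above might look like the following (field names and values are suggestions, not a fixed schema):

```python
import json
from datetime import date

# One inventory record per observed sensor; field names are suggestions
record = {
    "id": "CAM-01",
    "surveyed": date(2024, 1, 15).isoformat(),  # re-validate quarterly
    "lat": 40.7128, "lon": -74.0060,
    "type": "bullet", "resolution": "4MP",
    "orientation_deg": 45, "height_m": 3.5,
    "ir": {"present": True, "range_m": 30},
    "zones": {"detection_m": 25, "identification_m": 8},
    "notes": "Main entrance; PoE cable run along soffit",
}

print(json.dumps(record, indent=2))
```

Keeping detection and identification ranges as separate fields enforces the layered-analysis point above when the inventory is later mapped.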
Infrastructure Mapping Labs
Hands-on exercises for surveillance infrastructure discovery and analysis.
Related Topics
Surveillance Landscape
Overview of modern surveillance actors and systems.
ALPR Evasion
Vehicle tracking and license plate recognition.
OSINT
Open-source intelligence gathering techniques.
Physical Countermeasures
Wearable and environmental defense methods.
Camera Coverage Planner
Interactive tool to map camera coverage and plan routes.