Biometric Defense
Intermediate
T1125 T1592

Gait & Behavioral Biometrics

Even when faces are fully obscured, surveillance systems can identify individuals through their walking patterns using pose estimation and temporal gait analysis. This guide covers how gait recognition works technically and what defensive approaches reduce re-identification risk.

Why Gait Matters

Gait recognition systems have reached 90%+ identification accuracy in controlled environments using only silhouette sequences. Unlike face recognition, gait can be captured from much greater distances (50-100m) and does not require frontal views, making it a powerful complement to facial recognition systems.

How Gait Recognition Works

Modern gait recognition uses two primary approaches, often combined in advanced systems.

Appearance-Based (Silhouette)

Extracts gait energy images (GEI) from silhouette sequences. Captures overall body shape and movement pattern without explicit joint tracking.

  • GaitSet: Set-based recognition using multiple silhouette frames
  • GaitGL: Global-local feature extraction for robust matching
  • Accuracy: 95%+ on CASIA-B dataset, controlled conditions
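The GEI computation itself is simple enough to sketch: a gait energy image is the per-pixel mean of size-normalized, centered binary silhouettes across a gait cycle. The silhouettes below are synthetic (a real pipeline would extract them via background subtraction or a segmentation model, then crop and align them), but the averaging step is the same:

```python
import numpy as np

def gait_energy_image(silhouettes):
    """Average a stack of aligned binary silhouettes (T, H, W) -> (H, W)."""
    stack = np.asarray(silhouettes, dtype=np.float32)
    return stack.mean(axis=0)  # pixel value = fraction of frames occupied

# Synthetic demo: 10 frames of a 64x44 "silhouette" with a swinging leg
frames = []
for t in range(10):
    sil = np.zeros((64, 44), dtype=np.uint8)
    sil[8:56, 18:26] = 1                               # static torso column
    leg_x = 18 + int(6 * np.sin(2 * np.pi * t / 10))   # oscillating leg position
    sil[40:60, leg_x:leg_x + 4] = 1
    frames.append(sil)

gei = gait_energy_image(frames)
print(gei.shape)  # (64, 44)
# Torso pixels stay at 1.0; the leg's sweep region holds fractional values
# that encode the motion pattern, which is what silhouette-based models consume.
```

Static body regions saturate to 1.0 while moving limbs leave intermediate intensities, so the single GEI frame captures both shape and motion without joint tracking.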

Model-Based (Pose Estimation)

Extracts skeleton joint positions per frame using pose estimation models (OpenPose, MediaPipe, MMPose), then analyzes joint angle trajectories over time.

  • GaitGraph: Graph convolutional networks on pose sequences
  • PoseGait: Joint angle trajectories as temporal features
  • Advantage: More robust to clothing and carrying condition changes
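The joint-angle trajectories these models analyze reduce to simple geometry on landmark triplets. A minimal sketch (the coordinates are hypothetical normalized image coordinates of the kind pose estimators emit, not output from any specific library):

```python
import numpy as np

def interior_angle(a, b, c):
    """Angle in degrees at vertex b formed by points a-b-c (e.g. hip-knee-ankle)."""
    v1 = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)  # knee -> hip
    v2 = np.asarray(c, dtype=float) - np.asarray(b, dtype=float)  # knee -> ankle
    cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0))))

# Straight leg: hip, knee, and ankle collinear -> 180 degrees
print(interior_angle((0.5, 0.2), (0.5, 0.5), (0.5, 0.8)))  # 180.0
# Mid-swing flexion: the ankle trails the knee, so the angle drops below 180
print(interior_angle((0.5, 0.2), (0.5, 0.5), (0.35, 0.75)))
```

Tracking this angle frame by frame produces the periodic trajectory that model-based systems match, which is why it survives clothing changes that defeat silhouette methods.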

What Gets Measured

Feature            Description                                       Difficulty to Alter   Defensive Approach
Cadence            Steps per minute, temporal regularity             Medium                Pace variation, terrain changes
Stride Length      Distance between successive same-foot contacts    Medium                Footwear variation, deliberate adjustment
Joint Angles       Hip-knee-ankle trajectories during stride cycle   Hard                  Load carriage, clothing bulk
Arm Swing          Amplitude, symmetry, and phase of arm movement    Medium                Carrying items, hands in pockets
Shoulder Sway      Lateral movement of upper body during walking     Hard                  Layered clothing, posture shift
Body Proportions   Limb length ratios, shoulder-to-hip ratio         Very Hard             Bulky outerwear, platform shoes

Defensive Disruption Strategies

Reduce Repeatability

Vary cadence, stride, and posture transitions across routes and days. Use different footwear with different sole thickness and weight. Alternate carrying positions for bags and items.

  • Change shoes between monitored locations
  • Deliberately vary walking speed by 10-15%
  • Alternate bag carry side (left shoulder → right hand → backpack)

Reduce Observation Window

Break long continuous camera tracks into shorter segments using natural transitions. Gait recognition accuracy drops significantly with fewer than 2 complete stride cycles.

  • Use building entrances to break camera continuity
  • Cross streets at variable points
  • Pause, stop, or change direction at natural points
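A back-of-envelope sketch shows how short that window really is (the cadence, frame rate, and walking speed defaults below are illustrative, not measured values):

```python
def observation_window(cadence_steps_per_min=110, fps=25, speed_m_s=1.4, cycles=2):
    """Time, frames, and ground distance needed to capture `cycles` stride cycles.
    One stride cycle = two steps (one left, one right)."""
    steps_per_s = cadence_steps_per_min / 60.0
    cycles_per_s = steps_per_s / 2.0
    seconds = cycles / cycles_per_s
    return {
        "seconds": round(seconds, 2),
        "frames": round(seconds * fps),
        "distance_m": round(seconds * speed_m_s, 2),
    }

# At a typical 110 steps/min, two full cycles need only ~2.2 s of footage
# (~55 frames at 25 fps) covering ~3 m of ground, so breaks in camera
# continuity must come frequently to stay under the threshold.
print(observation_window())
```

In other words, a single unobstructed 3-meter stretch in one camera's view can already yield two full cycles; the tactics above work by keeping unbroken tracks shorter than that.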

Exploit Occlusion

Crowding, partial occlusion, and dynamic backgrounds significantly lower pose extraction fidelity. Multi-person scenes create assignment ambiguity.

  • Walk alongside others in crowded areas
  • Use columns, vehicles, and structures as occlusion breaks
  • Dense pedestrian traffic degrades tracking accuracy

Coordinate with Device Hygiene

Behavioral defenses fail if device identifiers let trackers re-link your movements anyway. A modified gait carried alongside the same phone MAC address is trivially re-associated.

  • Disable BLE/Wi-Fi when using gait countermeasures
  • Ensure device MAC rotation is active
  • Remove phone from pocket to change body outline

Research Limitations

Most gait recognition research uses lab conditions (controlled lighting, clean backgrounds, cooperative subjects). Real-world accuracy degrades significantly with weather, crowds, varying camera angles, and non-cooperative subjects. However, accuracy is improving rapidly with transformer-based architectures.

Pose Estimation Setup

Install pose estimation tools to understand what gait features cameras can extract from your movement.

setup-pose-estimation.sh
bash
# Install OpenPose for multi-person pose estimation
# Option 1: Pre-built Docker image (recommended)
# Note: the Docker option requires the NVIDIA Container Toolkit for GPU support
docker pull cwaffles/openpose
docker run --gpus all -v "$(pwd)/videos:/data" cwaffles/openpose \
  --video /data/walking_clip.mp4 --write_json /data/output/

# Option 2: MediaPipe (lighter, no GPU required)
pip install mediapipe opencv-python numpy scipy

# Option 3: MMPose (research-grade, COCO + MPII models)
pip install openmim
mim install mmpose mmdet
# Demo script names and config layouts vary across MMPose releases;
# the inferencer below is from MMPose 1.x, so check the repo for your version
python demo/inferencer_demo.py walking_clip.mp4 --vis-out-dir vis_results/

Gait Feature Extraction

Extract and analyze your own gait signature to understand what features make you identifiable.

extract_gait.py
python
#!/usr/bin/env python3
# Prerequisites: pip install mediapipe opencv-python numpy
"""Gait signature extraction using MediaPipe Pose.
Captures joint angles and stride patterns for analysis."""
import cv2
import mediapipe as mp
import numpy as np
import json

mp_pose = mp.solutions.pose
mp_draw = mp.solutions.drawing_utils

def extract_gait_features(video_path):
    """Extract per-frame pose landmarks for gait analysis."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS)
    frames = []

    with mp_pose.Pose(
        min_detection_confidence=0.6,  # slightly above default (0.5) for reliable initial detection
        min_tracking_confidence=0.5    # default; lower for faster subjects, raise for precision
    ) as pose:
        while cap.isOpened():
            ret, frame = cap.read()
            if not ret:
                break
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            result = pose.process(rgb)

            if result.pose_landmarks:
                landmarks = {}
                for idx, lm in enumerate(result.pose_landmarks.landmark):
                    landmarks[idx] = {
                        "x": round(lm.x, 4),
                        "y": round(lm.y, 4),
                        "z": round(lm.z, 4),
                        "visibility": round(lm.visibility, 3)
                    }
                frames.append(landmarks)

    cap.release()
    return {"fps": fps, "frame_count": len(frames), "landmarks": frames}

def compute_stride_metrics(gait_data):
    """Estimate cadence and step-timing variability from ankle landmarks."""
    # Landmark 27 = left ankle, 28 = right ankle
    left_ankle_y = [f[27]["y"] for f in gait_data["landmarks"] if 27 in f]
    right_ankle_y = [f[28]["y"] for f in gait_data["landmarks"] if 28 in f]

    # Detect step transitions via zero-crossings of the ankle height
    # difference (two crossings per full stride cycle, one per step)
    diff = np.array(left_ankle_y) - np.array(right_ankle_y)
    crossings = np.where(np.diff(np.sign(diff)))[0]

    if len(crossings) > 1:
        step_frames = np.diff(crossings)
        cadence_hz = gait_data["fps"] / np.mean(step_frames)  # steps per second
        print(f"Estimated cadence: {cadence_hz:.2f} Hz ({cadence_hz * 60:.0f} steps/min)")
        print(f"Step interval variability: {np.std(step_frames):.2f}")
    else:
        print("Insufficient steps detected")

# Usage
data = extract_gait_features("walking_clip.mp4")
print(f"Extracted {data['frame_count']} frames at {data['fps']} FPS")
compute_stride_metrics(data)

# Save landmarks for later comparison (loaded by compare_gait.py)
import os
os.makedirs("data", exist_ok=True)
with open("data/baseline_normal.json", "w") as f:
    json.dump(data, f)

# --- Expected Output ---
# Extracted 247 frames at 30.0 FPS
# Estimated cadence: 1.87 Hz (112 steps/min)
# Step interval variability: 1.43
#
# Sample landmark (frame 0):
#   0: {"x": 0.5012, "y": 0.1834, "z": -0.0421, "visibility": 0.998}
#   11: {"x": 0.4387, "y": 0.3921, "z": -0.0812, "visibility": 0.995}
#   12: {"x": 0.5614, "y": 0.3897, "z": -0.0793, "visibility": 0.993}
#   23: {"x": 0.4501, "y": 0.6312, "z": -0.0034, "visibility": 0.987}
#   25: {"x": 0.4423, "y": 0.7891, "z": 0.0156, "visibility": 0.971}
#   27: {"x": 0.4389, "y": 0.9412, "z": 0.0087, "visibility": 0.942}
#   28: {"x": 0.5621, "y": 0.9387, "z": 0.0091, "visibility": 0.938}

Gait Signature Comparison

Compare your baseline gait with modified conditions to measure defense effectiveness.

compare_gait.py
python
#!/usr/bin/env python3
# Prerequisites: pip install numpy scipy
"""Compare gait signatures between baseline and modified conditions.
Measures how countermeasures affect re-identification likelihood."""
import numpy as np
from scipy.spatial.distance import cosine

def gait_vector(landmarks_sequence):
    """Create a summary gait vector from a pose sequence."""

    def knee_angle(hip, knee, ankle):
        """Interior hip-knee-ankle angle (radians) at the knee joint."""
        v1, v2 = hip - knee, ankle - knee
        cos_t = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.arccos(np.clip(cos_t, -1.0, 1.0))

    features = []
    for frame in landmarks_sequence:
        # MediaPipe indices: 11/12 shoulders, 23/24 hips, 25/26 knees, 27/28 ankles
        if all(k in frame for k in [23, 25, 27, 24, 26, 28, 11, 12]):
            point = lambda i: np.array([frame[i]["x"], frame[i]["y"]])

            # Hip-knee-ankle angles (left and right)
            left_angle = knee_angle(point(23), point(25), point(27))
            right_angle = knee_angle(point(24), point(26), point(28))

            # Shoulder width as a body-proportion proxy
            shoulder_width = np.linalg.norm(point(11) - point(12))

            features.append([left_angle, right_angle, shoulder_width])

    features = np.array(features)
    # Statistical summary: mean, std, range for each feature
    return np.concatenate([
        np.mean(features, axis=0),
        np.std(features, axis=0),
        np.ptp(features, axis=0)  # peak-to-peak range
    ])

# Compare conditions
conditions = {
    "baseline_normal": "data/baseline_normal.json",
    "different_shoes": "data/different_shoes.json",
    "backpack_load": "data/backpack_load.json",
    "varied_pace": "data/varied_pace.json",
    "coat_layers": "data/coat_layers.json",
}

# Load pre-extracted landmark sequences (from extract_gait_features);
# JSON round-tripping turns the integer landmark keys into strings,
# so convert them back before feature extraction
import json
vectors = {}
for name, path in conditions.items():
    with open(path) as f:
        data = json.load(f)
    frames = [{int(k): v for k, v in fr.items()} for fr in data["landmarks"]]
    vectors[name] = gait_vector(frames)

ref = vectors["baseline_normal"]
print(f"{'Condition':<25} {'Cosine Distance':>15} {'Re-ID Risk':>12}")
print("-" * 55)
for name, vec in vectors.items():
    dist = cosine(ref, vec)
    # Illustrative thresholds, not calibrated against any deployed matcher:
    # distance < 0.15 suggests a strong signature match (HIGH re-ID risk),
    # 0.15-0.35 moderate, > 0.35 low; validate against your own recordings
    risk = "HIGH" if dist < 0.15 else "MEDIUM" if dist < 0.35 else "LOW"
    print(f"{name:<25} {dist:>15.4f} {risk:>12}")

# Expected output (illustrative distance values):
# Condition                 Cosine Distance   Re-ID Risk
# -------------------------------------------------------
# baseline_normal                    0.0000         HIGH
# different_shoes                    0.1934       MEDIUM
# backpack_load                      0.2146       MEDIUM
# varied_pace                        0.2510       MEDIUM
# coat_layers                        0.3847          LOW

Field Validation Checklist

  • Record sample walking clips across multiple camera angles and distances (2m, 5m, 10m, 20m).
  • Extract pose landmarks using MediaPipe and compute gait feature vectors for baseline conditions.
  • Re-capture with modified conditions (different shoes, load, pace, clothing layers) and re-extract.
  • Compute cosine distance between baseline and modified gait vectors; target a distance above 0.35 (the low-risk band).
  • Test re-identification across cameras: does your modified gait reduce cross-camera matching?
  • Document which route features create natural confidence drop-offs (turns, crowds, elevation changes).

Real-World Gait Deployments

Gait recognition has moved from academic research to active deployment. Understanding where it's operational informs realistic threat modeling.

Watrix (China)

Chinese AI company deploying gait recognition in public spaces, including transit hubs and street-level surveillance. Claims 94% identification accuracy at distances up to 50 meters — well beyond reliable face recognition range.

  • Works with low-resolution footage where facial features are not extractable
  • Integrated with existing CCTV infrastructure across multiple Chinese cities
  • Can process subjects from any angle, not just frontal or profile views

Airport Security Pilots

Gait recognition pilots have been tested at Dubai International Airport (DXB) and London Heathrow as a secondary biometric layer alongside face recognition — targeting boarding verification and watchlist screening.

  • Used as a soft biometric to narrow candidate pools before face match
  • Long corridor environments provide ideal observation windows (10+ stride cycles)
  • Combined with luggage tracking and boarding pass correlation

Floor-Sensor Gait Recognition

Pressure-sensitive floor systems (e.g., Scanalytics) detect unique gait signatures without any camera. These capture foot pressure distribution, step timing, and weight transfer patterns embedded in floor tiles or mats.

  • No visual component — works in complete darkness and avoids camera privacy concerns
  • Deployed in high-security facilities and smart building research projects
  • Harder to detect and counter than camera-based systems

Research Datasets

Academic gait recognition research underpins all deployed systems. Understanding these datasets reveals what conditions models train on — and where they're weakest.

  • CASIA-B: 124 subjects, 11 angles, 3 conditions (normal, bag, coat) — most-cited benchmark
  • OU-ISIR: Largest dataset — 4,000+ subjects, multiple sensor types including floor pressure
  • SOTON (University of Southampton): Multi-modal dataset with outdoor and indoor sequences
  • Limitation: Most datasets use controlled conditions that don't reflect real-world degradation

Deployment Trend

Gait recognition is increasingly deployed as a fusion biometric — combined with face, device, and behavioral signals rather than used standalone. Effective counter-surveillance must address the full fusion pipeline, not just individual modalities.

Related: Physical Countermeasures

Gait modification through physical methods (shoe inserts, weighted garments, deliberate stride changes) is covered in detail in the Physical Countermeasures section. Combine software-based gait analysis with physical testing for a complete assessment.

Defense Strategy Summary

  • Vary stride parameters: change footwear, load position, and pace between monitored areas
  • Limit observation window: break continuous tracks into segments shorter than 2 stride cycles
  • Exploit environmental occlusion: use crowds, structures, and terrain changes to interrupt tracking
  • Cross-correlate defenses: gait changes must be paired with device hygiene and appearance changes
  • Measure outcomes: use pose estimation tools to quantify gait vector changes before and after countermeasures

Model Limitation

Gait recognition confidence declines significantly with occlusion, crowding, camera angle changes (>45° from lateral), shorter observation windows (<2 stride cycles), and adverse lighting/weather conditions.

Gait Analysis Labs

Hands-on exercises to understand and disrupt gait-based identification.

🔧
Gait Signature Extraction Custom Lab medium
  • Record 30-second walking clips from 3 angles
  • Extract pose landmarks with MediaPipe
  • Compute stride cadence and joint angle features
  • Build a personal gait feature vector
  • Compare your gait consistency across different days
🔧
Gait Modification Testing Custom Lab medium
  • Record baseline gait in normal conditions
  • Modify footwear, load, pace, and clothing
  • Re-extract gait features for each modification
  • Compute cosine distance between baseline and modified vectors
  • Identify which modifications produce the greatest feature shift