Autonomous Vehicle Security
Advanced
Autonomous vehicles add new attack surfaces, including sensor systems, ML models, and vehicle-to-everything (V2X) communications. Adversarial attacks on the perception pipeline can cause dangerous misclassifications.
Sensor Attack Surfaces
LiDAR Attacks
- Spoofing fake objects (see the point-injection sketch below)
- Blinding with lasers
- Relay attacks
- Point cloud manipulation
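Spoofing and point cloud manipulation both reduce to inserting attacker-controlled returns into the point cloud the perception stack consumes. A minimal sketch, assuming the scan is a simple N×4 (x, y, z, intensity) NumPy array; the function name and parameters are illustrative, not from any particular LiDAR stack:

```python
import numpy as np

def inject_ghost_object(points: np.ndarray, center, size=1.5, n_points=200):
    """Append a cluster of fake returns around `center`, simulating
    a spoofed object such as a phantom vehicle or pedestrian."""
    cx, cy, cz = center
    ghost = np.empty((n_points, 4))
    # Fake returns scattered inside a box around the chosen center
    ghost[:, 0] = cx + np.random.uniform(-size, size, n_points)   # x
    ghost[:, 1] = cy + np.random.uniform(-size, size, n_points)   # y
    ghost[:, 2] = cz + np.random.uniform(-0.5, 0.5, n_points)     # z
    ghost[:, 3] = np.random.uniform(0.2, 0.8, n_points)           # intensity
    return np.vstack([points, ghost])

# Example: spoof an obstacle 10 m directly ahead of the sensor
cloud = np.random.uniform(-50, 50, (10_000, 4))   # placeholder scan
spoofed = inject_ghost_object(cloud, center=(10.0, 0.0, 0.0))
print(spoofed.shape)  # (10200, 4)
```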
Camera Attacks
- Adversarial patches
- Traffic sign modification
- Projected patterns
- Blinding/saturation (simulated in the sketch below)
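Blinding/saturation is the simplest of these to reason about: a bright light source clips pixels to the sensor's maximum value, erasing whatever detail was behind them. A minimal simulation, assuming 8-bit grayscale frames as NumPy arrays; the function name is illustrative:

```python
import numpy as np

def blind_region(frame: np.ndarray, cx: int, cy: int, radius: int) -> np.ndarray:
    """Simulate laser blinding by clipping a circular region to full white."""
    h, w = frame.shape
    yy, xx = np.ogrid[:h, :w]
    blinded = frame.copy()
    blinded[(xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2] = 255
    return blinded

frame = np.random.randint(0, 200, (720, 1280), dtype=np.uint8)  # placeholder frame
attacked = blind_region(frame, cx=640, cy=360, radius=150)
print((attacked == 255).mean())  # fraction of the frame wiped out
```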
Radar Attacks
- Ghost vehicle injection
- Distance spoofing (see the round-trip arithmetic below)
- Jamming
- Replay attacks
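Distance spoofing and replay attacks both exploit the fact that a pulsed radar derives range purely from echo round-trip time, d = c·t/2. The arithmetic below shows how a replayed pulse arriving a few nanoseconds early shifts the perceived target; the distances are illustrative:

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range(round_trip_s: float) -> float:
    """Range a pulsed radar infers from an echo's round-trip time."""
    return C * round_trip_s / 2

true_rt = 2 * 50.0 / C                 # genuine echo from a target 50 m away
print(radar_range(true_rt))            # 50.0
# Attacker replays the pulse 100 ns earlier than the real echo would arrive
print(radar_range(true_rt - 100e-9))   # ~35 m: the ghost target appears closer
```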
GPS Spoofing
- Location manipulation (plausibility-check sketch below)
- Route deviation
- Time attacks
- Map confusion
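One common mitigation for location manipulation is a kinematic plausibility check: reject any fix that implies the vehicle moved faster than physically possible since the previous fix. A minimal sketch, assuming great-circle distances between lat/lon fixes and a 60 m/s speed cap; both the cap and the function names are illustrative assumptions:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000.0

def haversine_m(lat1, lon1, lat2, lon2) -> float:
    """Great-circle distance in meters between two lat/lon fixes."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def plausible_fix(prev, new, dt_s, max_speed=60.0) -> bool:
    """Reject a GPS fix implying an impossible speed since the last fix."""
    return haversine_m(*prev, *new) / dt_s <= max_speed

# A fix that teleports the vehicle ~11 km in one second is clearly spoofed
print(plausible_fix((37.7749, -122.4194), (37.8749, -122.4194), 1.0))  # False
```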
Adversarial ML Examples
```python
# Adversarial patch generation (conceptual sketch)
# Goal: a printable patch that makes a stop sign classify as a speed limit
import torch
from torchvision import models

# Stand-in target model; a real attack targets the vehicle's own
# traffic sign classifier, ResNet-50 is only a placeholder here
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Optimize the patch so that pasting it onto the image drives the
# model's prediction toward an attacker-chosen class
patch = torch.rand(3, 50, 50, requires_grad=True)
optimizer = torch.optim.Adam([patch], lr=0.01)
image = torch.rand(1, 3, 224, 224)  # placeholder stop-sign image
target = torch.tensor([920])        # illustrative target class index

for _ in range(100):
    attacked = image.clone()
    attacked[:, :, 80:130, 80:130] = patch.clamp(0, 1)  # paste the patch
    loss = torch.nn.functional.cross_entropy(model(attacked), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Physical-world constraints the optimization must also respect:
# - Print resolution
# - Viewing angle invariance
# - Lighting conditions
# Real-world examples:
# - Stickers on stop signs
# - Patterns on road surface
# - Modified lane markings
# - Projected images at night
```

Safety Critical
Autonomous vehicle attacks can have life-threatening consequences. Research should only be conducted in controlled environments with proper safety measures.