Multimodal AI · Point Clouds · UAV-Based Perception · Applied AI Engineering · Ongoing
MultiLiDAR UAV Sensing
Heterogeneous LiDAR-Based Perception for Robust UAV-Scale Object Detection and Tracking
Overview
This project studies robust UAV-scale object perception from heterogeneous LiDAR streams. The pipeline processes sparse and asynchronous point-cloud measurements, accumulates temporal evidence, extracts candidate object clusters, and supports reliability-aware tracking and fusion under missing or degraded sensor observations.
Problem Statement
LiDAR-based airborne object perception is difficult for several reasons: targets may generate very sparse returns, individual frames can be nearly empty, sensors differ in field of view and timestamp behavior, and naive point concatenation fails under misalignment or degraded measurements.
Motivation
Real-world UAV sensing systems need perception pipelines that remain useful when measurements are sparse, asynchronous, or sensor-dependent. MultiLiDAR sensing provides complementary coverage but requires careful temporal accumulation, candidate generation, and reliability-aware fusion rather than simple raw-data merging.
Methodology
The pipeline is designed as a practical multimodal AI system for sparse point-cloud perception and tracking. At a high level, it includes:
- Per-sensor point filtering and timestamp organization.
- Temporal accumulation over short windows.
- Candidate extraction using clustering and density-based grouping.
- Track-centric or candidate-centric fusion across heterogeneous LiDAR streams.
- Reliability-aware updates under sparse, missing, or degraded measurements.
- Evaluation through detection/tracking metrics and qualitative trajectory plots.
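To make the accumulation and candidate-extraction steps above concrete, here is a minimal Python sketch. The voxel-occupancy grouping stands in for whatever density-based clustering the pipeline actually uses, and all function names, window lengths, and thresholds are illustrative assumptions, not the project's real API:

```python
import numpy as np
from collections import defaultdict

def accumulate_window(frames, t_now, window_s=0.5):
    """Concatenate points from all frames whose timestamp falls inside
    the trailing accumulation window [t_now - window_s, t_now]."""
    kept = [pts for t, pts in frames if t_now - window_s <= t <= t_now]
    return np.vstack(kept) if kept else np.empty((0, 3))

def extract_candidates(points, voxel=0.5, min_points=5):
    """Group accumulated points into candidate clusters by voxel
    occupancy; return centroids of sufficiently dense voxels."""
    if len(points) == 0:
        return []
    keys = np.floor(points / voxel).astype(int)
    buckets = defaultdict(list)
    for key, p in zip(map(tuple, keys), points):
        buckets[key].append(p)
    return [np.mean(b, axis=0) for b in buckets.values()
            if len(b) >= min_points]
```

The key design point this illustrates is that accumulation happens before clustering: a target that leaves only one or two returns per frame can still form a dense enough group once several frames fall inside the window.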
Key Contributions
- Heterogeneous LiDAR pipeline for sparse UAV-scale object perception.
- Temporal point-cloud accumulation for recovering weak evidence.
- Cluster-based candidate extraction from sparse returns.
- Reliability-aware fusion/tracking under missing or degraded observations.
- Applied AI engineering pipeline for preprocessing, evaluation, visualization, and failure analysis.
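The reliability-aware fusion/tracking idea can be sketched as a simple per-track update rule: when a measurement arrives, blend it in with a gain scaled by sensor reliability; when it is missing, decay the track's confidence instead of deleting it immediately. This is a hypothetical minimal version under assumed names and constants, not the project's actual tracker:

```python
from dataclasses import dataclass

@dataclass
class Track:
    position: tuple          # last fused (x, y, z) estimate
    confidence: float = 1.0  # reliability score in (0, 1]

def update_track(track, measurement, sensor_weight=1.0,
                 decay=0.8, gain=0.5, min_conf=0.05):
    """Reliability-aware update: blend in a new measurement scaled by
    sensor reliability, or decay confidence when it is missing.
    Returns False once the track should be dropped."""
    if measurement is None:
        track.confidence *= decay              # no evidence this cycle
    else:
        alpha = gain * sensor_weight           # trust scales with reliability
        track.position = tuple(
            (1 - alpha) * p + alpha * m
            for p, m in zip(track.position, measurement)
        )
        track.confidence = min(1.0, track.confidence + gain * sensor_weight)
    return track.confidence >= min_conf        # keep-alive decision
```

The point of the decay branch is graceful degradation: a track starved of observations coasts for a few cycles before it is pruned, rather than disappearing after a single empty frame.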
Experimental Setup
Experiments use multi-sensor point-cloud sequences with heterogeneous LiDAR streams. Evaluation focuses on localization/tracking quality, recall under sparse observations, robustness to missing measurements, and qualitative trajectory behavior.
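As one example of the kind of metric involved, recall under sparse observations can be computed by greedily matching each ground-truth object to its nearest unused detection within a radius. This is a generic sketch with assumed names and matching rules, not the project's evaluation code:

```python
import numpy as np

def detection_recall(gt_positions, detections, match_radius=1.0):
    """Fraction of ground-truth objects matched by at least one
    detection within match_radius (greedy nearest-neighbor matching,
    each detection used at most once)."""
    if len(gt_positions) == 0:
        return 1.0
    hits = 0
    remaining = list(detections)
    for g in gt_positions:
        if not remaining:
            break
        dists = [np.linalg.norm(np.asarray(g) - np.asarray(d))
                 for d in remaining]
        i = int(np.argmin(dists))
        if dists[i] <= match_radius:
            hits += 1
            remaining.pop(i)               # consume the matched detection
    return hits / len(gt_positions)
```

Sweeping this metric over frames with progressively fewer returns gives a recall-versus-sparsity curve, which is one way to quantify robustness to missing measurements.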
Quantitative Results
Quantitative results will be added after the evaluation table is finalized.
My Role
My contributions include pipeline design, point-cloud preprocessing, temporal accumulation, candidate extraction, fusion/tracking logic, evaluation scripts, visualization, and failure-mode analysis.
Related Work / Status
This is an ongoing research and applied AI engineering project connected to my broader work on reliable multimodal perception. This page does not claim publication status.