Abstract
With major aerospace companies showing interest in certifying UAV systems for civilian airspace, their use in commercial remote sensing applications such as traffic monitoring, map refinement, and agricultural data collection is on the rise. However, ambitious requirements such as real-time geo-referencing of data, support for multiple sensor angles of view, smaller UAV size, and lower investment cost have led to challenges in platform stability, sensor noise reduction, and increased onboard processing. In small UAVs especially, the geo-referencing of collected data is only as good as the quality of their localization sensors. This drives a need for methods that pick up spatial features from the captured video or imagery and aid in geo-referencing. This paper presents one such method, which identifies road segments and intersections based on traffic flow and compares well with the accuracy of manual observation. Two test video datasets were used, one each from a moving and a stationary platform. The results show a promising average percentage difference of 7.01% and 2.48% for the road segment extraction process using the moving and stationary platforms, respectively. For the intersection identification process, the moving platform data yields an accuracy of 75%, whereas the stationary platform data reaches an accuracy of 100%.