Rainfall observation using surveillance audio

Xing Wang, Meizhen Wang, Xuejun Liu, Thomas Glade, Mingzheng Chen, Yujia Xie, Hao Yuan, Yang Chen

Publications: Contribution to journal › Article › Peer reviewed


Environmental audio recordings are a rich and underexploited source of information on rainfall events. Widespread surveillance cameras continuously record audio that captures rainfall, providing a basis for rainfall monitoring. In this study, an automatic rainfall level classification system was built using surveillance audio as input. By defining discrete rainfall levels, the rainfall observation task is transformed into an audio classification task. Three 2-D baseline convolutional neural networks (CNNs) were proposed as classifiers. For classifier training and testing, a new dataset named Rainfall Audio_XZ (RA_XZ) was generated from surveillance audio data. Because input features strongly affect classification performance, 16 feature aggregation schemes were constructed and evaluated to find the best representation of rainfall. The experimental results demonstrate that the proposed CNN (7-stack CNN) achieves 81.67% accuracy in rainfall level classification on the RA_XZ dataset and 93.38% accuracy in environmental sound classification on the UrbanSound8k dataset, outperforming several existing relevant algorithms. Furthermore, the audio aggregation strategies that facilitate the representation and classification of rainfall events are investigated, which has important implications for audio and speech classification systems. In summary, our study provides an effective supplement to traditional rainfall monitoring technologies. (C) 2021 Elsevier Ltd. All rights reserved.
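The abstract does not specify the exact input features or network architecture; as a minimal illustration of the pipeline it describes, the sketch below computes a 2-D log-magnitude spectrogram from an audio clip, the kind of time-frequency image typically fed to a 2-D CNN audio classifier. All function names, frame parameters, and the synthetic clip are hypothetical, not taken from the paper.

```python
import numpy as np

def log_spectrogram(audio, frame_len=1024, hop=512):
    """Frame the signal, apply a Hann window, and take the magnitude
    of the short-time Fourier transform, yielding a 2-D time-frequency
    image of the kind commonly used as CNN input for audio."""
    n_frames = 1 + (len(audio) - frame_len) // hop
    window = np.hanning(frame_len)
    frames = np.stack([audio[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    mag = np.abs(np.fft.rfft(frames, axis=1))  # shape: (frames, freq bins)
    return np.log1p(mag)                       # compress dynamic range

# Synthetic one-second noise clip at 16 kHz (illustration only, not rain data).
rng = np.random.default_rng(0)
clip = rng.standard_normal(16000).astype(np.float32)
feat = log_spectrogram(clip)
print(feat.shape)  # (30, 513): 30 frames x (frame_len // 2 + 1) frequency bins
```

Stacking several such spectrograms (or aggregating different feature types along a channel axis) is one plausible reading of the "feature aggregation schemes" compared in the paper.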

Journal: Applied Acoustics
Publication status: Published - 15 Jan 2022

ÖFOS 2012

  • 103002 Acoustics
  • 105408 Physical geography