Our research group is again part of the organizing team of the DCASE 2017 challenge, and I am acting as the task coordinator for three challenge tasks: Acoustic scene classification, Detection of rare sound events, and Sound event detection in real life audio.
We are currently conducting a listening experiment on acoustic scene recognition. The experiment supports the analysis of results from the DCASE2016 challenge task on acoustic scene recognition.
The first DCASE2016 workshop was held on 3 September 2016 in Budapest, Hungary. The workshop brought together international researchers from both academia and industry working on the computational analysis of sound events and scenes. The proceedings of the presented works are now published.
Our research group was part of the organizing team of the DCASE (Detection and Classification of Acoustic Scenes and Events) 2016 challenge, and I acted as one of the task coordinators for two challenge tasks: Acoustic scene classification and Sound event detection in real life audio.
Automatic sound event detection aims to process a continuous acoustic signal and convert it into symbolic descriptions of the sound events present in the auditory scene. Sound event detection can be utilized in a variety of applications, including context-based indexing and retrieval in multimedia databases, unobtrusive monitoring in health care, and surveillance.
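To make the signal-to-symbols idea concrete, here is a minimal, illustrative sketch of event detection. The function name `detect_events` and the energy-threshold rule are my own assumptions for the example; real systems replace the threshold with learned acoustic models, but the output shape (event onsets and offsets in seconds) is the same.

```python
import numpy as np

def detect_events(signal, sr, frame_len=1024, hop=512, threshold=0.05):
    """Mark contiguous runs of frames whose RMS energy exceeds a fixed
    threshold and return them as (onset, offset) pairs in seconds.
    The threshold is a deliberately crude stand-in for the learned
    acoustic models used in actual sound event detection systems."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    active = [
        np.sqrt(np.mean(signal[i * hop:i * hop + frame_len] ** 2)) > threshold
        for i in range(n_frames)
    ]
    events, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            events.append((start * hop / sr, ((i - 1) * hop + frame_len) / sr))
            start = None
    if start is not None:
        events.append((start * hop / sr, len(signal) / sr))
    return events

# Toy input: 2 s of near-silence with a 440 Hz tone burst from 1.0 s to 1.5 s.
sr = 16000
t = np.arange(2 * sr) / sr
signal = 0.001 * np.random.default_rng(0).standard_normal(len(t))
burst = (t >= 1.0) & (t < 1.5)
signal[burst] += 0.5 * np.sin(2 * np.pi * 440 * t[burst])

events = detect_events(signal, sr)  # one event spanning roughly 1.0-1.5 s
```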
Context recognition can be defined as the process of automatically determining the context around a device. Information about the context enables wearable devices to better serve users' needs, e.g., by adjusting their mode of operation accordingly.
Auditory scene synthesis aims to create a new, arbitrarily long, and representative audio ambiance for a location using a small amount of audio recorded at that exact location. Adding such location-specific audio ambiance to virtual location-exploration services (e.g., Google Street View) would enhance the user experience, giving the service a more 'real' feeling.
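One simple way to stretch a short recording into an arbitrarily long ambiance is to overlap-add randomly chosen segments of it with short crossfades that mask the splice points. The sketch below (the function `synthesize_ambiance` and its parameters are hypothetical, not the method used in our work) illustrates the idea:

```python
import numpy as np

def synthesize_ambiance(clip, target_len, seg_len=8000, fade_len=1000, seed=0):
    """Extend a short ambiance recording to an arbitrary length by
    overlap-adding randomly chosen segments of it, with linear
    crossfades masking the splice points."""
    rng = np.random.default_rng(seed)
    fade_in = np.linspace(0.0, 1.0, fade_len)
    out = np.zeros(target_len + seg_len)   # margin for the final segment
    pos = 0
    while pos < target_len:
        start = rng.integers(0, len(clip) - seg_len + 1)
        seg = clip[start:start + seg_len].copy()
        seg[:fade_len] *= fade_in          # fade in
        seg[-fade_len:] *= fade_in[::-1]   # fade out
        out[pos:pos + seg_len] += seg      # overlap-add at the splice point
        pos += seg_len - fade_len          # advance so consecutive fades overlap
    return out[:target_len]

# Example: stretch a 1 s noise "recording" to 10 s of ambiance.
sr = 16000
clip = 0.1 * np.random.default_rng(1).standard_normal(sr)
ambiance = synthesize_ambiance(clip, 10 * sr)
```

Random segment selection avoids the audible periodicity that plain looping would introduce; a real system would additionally pick segments so that their statistics match the source location.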
Understanding the timbre of musical instruments and drums is an important issue for automatic music transcription, music information retrieval, and computational auditory scene analysis.