Our research group is once again one of the organizing teams of the DCASE 2018 challenge, and I am acting as the coordinator for the acoustic scene classification task.
We have launched a Kaggle in Class competition on acoustic scene classification as part of a TUT course. The competition is open to everybody, whether or not you are taking the TUT course.
dcase_util, a Python library for DCASE researchers, has been released. It aims to streamline research code, making it more readable and easier to maintain.
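As a quick taste of the intended workflow, here is a minimal sketch following the library's container/extractor pattern. The class names (AudioContainer, MelExtractor) and call signatures are as I recall them from the dcase_util documentation, and the audio filename is a placeholder; check the docs for the exact API:

```python
import dcase_util

# Load an audio file into a container
# (AudioContainer takes care of reading, resampling, and channels).
audio = dcase_util.containers.AudioContainer().load(
    filename='example_scene.wav'  # placeholder filename
)

# Extract mel-band energies with an extractor class.
mel_extractor = dcase_util.features.MelExtractor(fs=audio.fs)
mel_data = mel_extractor.extract(y=audio)

print(mel_data.shape)  # (mel_bands, frame_count)
```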
The second DCASE workshop, DCASE2017, was held 16-17 November 2017 in Munich, Germany. The workshop brought together international researchers from both academia and industry working on the computational analysis of sound events and scenes. The proceedings of the presented works are now published.
Automatic sound event detection aims to process a continuous acoustic signal and convert it into symbolic descriptions of the sound events present in the auditory scene. Sound event detection can be utilized in a variety of applications, including context-based indexing and retrieval in multimedia databases, unobtrusive monitoring in health care, and surveillance.
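To make the final detection step concrete, here is a small, hypothetical sketch (plain NumPy, not any particular DCASE baseline) that turns frame-level event activity probabilities for one event class into (onset, offset) segments by thresholding; the threshold and frame hop values are illustrative assumptions:

```python
import numpy as np

def activity_to_events(probs, threshold=0.5, frame_hop=0.02):
    """Convert frame-level activity probabilities of one event class
    into a list of (onset_seconds, offset_seconds) segments.

    probs: 1-D array of per-frame probabilities for a single class.
    frame_hop: hop between analysis frames in seconds (20 ms assumed).
    """
    active = probs > threshold
    # Pad with inactive frames so edge detection also catches
    # events that start at the first frame or end at the last one.
    padded = np.concatenate(([False], active, [False]))
    edges = np.diff(padded.astype(int))
    onsets = np.where(edges == 1)[0]    # rising edges: event starts
    offsets = np.where(edges == -1)[0]  # falling edges: event ends
    return [(on * frame_hop, off * frame_hop)
            for on, off in zip(onsets, offsets)]

# Example: the classifier fires for frames 3-6.
probs = np.array([0.1, 0.2, 0.4, 0.9, 0.8, 0.9, 0.7, 0.3, 0.1])
print(activity_to_events(probs))  # [(0.06, 0.14)]
```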
Context recognition can be defined as the process of automatically determining the context around a device. Information about the context enables wearable devices to better serve users' needs, e.g., by adjusting their mode of operation accordingly.
Auditory scene synthesis aims to create a new, arbitrarily long, and representative audio ambiance for a location using only a small amount of audio recorded at that location. Adding such location-specific audio ambiance to virtual location-exploration services (e.g., Google Street View) would enhance the user experience, giving the service a more 'real' feeling.
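As a toy illustration of one possible approach (simple concatenative synthesis; a hypothetical sketch, not our actual method), short segments of the source recording can be drawn at random and crossfaded into an output track of any desired length:

```python
import numpy as np

def synthesize_ambiance(source, target_len, seg_len=44100 * 2,
                        fade_len=4410, rng=None):
    """Build an arbitrarily long ambiance track by overlap-adding
    randomly chosen segments of a short source recording.

    source: 1-D audio signal recorded at the location.
    target_len: desired output length in samples.
    seg_len: segment length in samples (2 s at 44.1 kHz assumed).
    fade_len: crossfade length in samples (100 ms assumed).
    """
    rng = rng or np.random.default_rng()
    fade_in = np.linspace(0.0, 1.0, fade_len)
    out = np.zeros(target_len + seg_len)
    pos = 0
    while pos < target_len:
        start = rng.integers(0, len(source) - seg_len)
        seg = source[start:start + seg_len].copy()
        seg[:fade_len] *= fade_in           # fade in
        seg[-fade_len:] *= fade_in[::-1]    # fade out
        out[pos:pos + seg_len] += seg       # overlap-add with previous segment
        pos += seg_len - fade_len           # advance so the fades overlap
    return out[:target_len]

# e.g. ten minutes of ambiance from a one-minute recording:
# ambiance = synthesize_ambiance(recording, target_len=44100 * 600)
```

Randomizing the segment order avoids the obvious looping artifact of simply repeating the recording, while the crossfades hide the segment boundaries.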
Understanding the timbre of musical instruments and drums is important for automatic music transcription, music information retrieval, and computational auditory scene analysis.