All researchers interested in or working on aspects of environmental sound classification and detection are invited to share ideas, questions and opinions with their peers through the DCASE community.
This area of research is now developing at a rapid pace: special sessions on the topic are commonly found at international signal processing conferences such as ICASSP and EUSIPCO, and special issues appear in renowned journals such as IEEE TASLP. As organizers of the DCASE Challenge and Workshop, we want to contribute to this dynamic field of research by bringing together researchers from academic and industrial backgrounds.
The DCASE community offers a platform for discussion of the different perspectives and approaches, from algorithm development to practical applications and their commercial value. Through this, it aims to support the development of computational scene and event analysis methods by providing public datasets and the opportunity to continuously compare different approaches on the same datasets, using consistent performance measures.
The first DCASE challenge was organized by Queen Mary University of London (QMUL) in 2013, opening the sphere of public evaluations to everyday sounds. After a two-year hiatus, the initiative was revived, and the second edition of the challenge was announced for 2016. DCASE 2016 was organized by Tampere University of Technology (TUT) in collaboration with QMUL, the University of Surrey and IRCCyN, and presented participants with more complex tasks and audio data. In conjunction with the challenge, a dedicated one-day workshop was organized in Budapest, Hungary, as a satellite event to EUSIPCO 2016. We were happy to see so many people at the DCASE 2016 Workshop; the discussions were animated and the atmosphere friendly.
The adventure continues with the DCASE 2017 Challenge and DCASE 2017 Workshop! We hope to see many of you again, and to welcome newcomers, as we continue debating datasets, methods and performance measures.
Join the discussion group for announcements from organizers of DCASE Challenge and DCASE Workshop, and to communicate with other members through the email list. We encourage everyone to open discussions on related topics and answer others' questions.
Research groups are welcome to propose and take responsibility for organizing tasks in future DCASE Challenges. Your suggestions and preferences will be taken into account when planning the next challenge. Please contact DCASE organizers for more information.
To contact the DCASE organizers, you can send an email to email@example.com.
The steering group provides advice on the challenge organization and moderates task proposals for future challenges.
DCASE 2016 Challenge
The second DCASE Challenge was organized from 8th February to 7th July 2016.
The challenge was organized by Tampere University of Technology in collaboration with the Centre for Digital Music at Queen Mary University of London, the University of Surrey, and IRCCyN, and it was an official IEEE Audio and Acoustic Signal Processing (AASP) challenge.
Results of the challenge were presented at the DCASE 2016 Workshop, where selected peer-reviewed publications on challenge submissions were also presented.
- Task 1, Acoustic scene classification
- Task 2, Sound event detection in synthetic audio
- Task 3, Sound event detection in real life audio
- Task 4, Domestic audio tagging
Full results for all tasks can be found on the challenge website.
DCASE 2016 Workshop
The first DCASE Workshop was organized in conjunction with the DCASE 2016 Challenge. The workshop took place in Budapest on 3rd September 2016 and attracted 70 participants.
The technical program included two invited speakers: Prof. Gael Richard from Telecom Paris Tech and Dr. Sacha Krstulovic from Audio Analytic, as well as oral and poster presentations of accepted papers. The oral presentations from the workshop are available online. The full workshop proceedings are available here.
DCASE 2013 Challenge
The first DCASE challenge campaign was organized from 31st March to 14th April 2013. The challenge was organized by the Centre for Digital Music and by IRCAM, under the auspices of the Audio and Acoustic Signal Processing (AASP) technical committee of the IEEE Signal Processing Society.
Results were presented at a special session of WASPAA 2013, where participants were also invited to present posters.
- Task 1, Acoustic scene classification
- Task 2, Sound event detection, Office Live
- Task 3, Sound event detection, Office Synthetic
The outcomes of the DCASE 2013 challenge are now fully described in the following open-access journal article:
D. Stowell, D. Giannoulis, E. Benetos, M. Lagrange, and M. D. Plumbley. Detection and classification of acoustic scenes and events. IEEE Transactions on Multimedia, 17(10):1733–1746, Oct 2015.
Detection and Classification of Acoustic Scenes and Events
For intelligent systems to make best use of the audio modality, it is important that they can recognise not just speech and music, which have been researched as specific tasks, but also general sounds in everyday environments. To stimulate research in this field we conducted a public research challenge: the IEEE Audio and Acoustic Signal Processing Technical Committee challenge on Detection and Classification of Acoustic Scenes and Events (DCASE). In this paper we report on the state of the art in automatically classifying audio scenes, and automatically detecting and classifying audio events. We survey prior work as well as the state of the art represented by the submissions to the challenge from various research groups. We also provide detail on the organisation of the challenge, so that our experience as challenge hosts may be useful to those organising challenges in similar domains. We created new audio datasets and baseline systems for the challenge: these, as well as some submitted systems, are publicly available under open licenses, to serve as benchmark for further research in general-purpose machine listening.