Searching audio collections using high-level musical descriptors is a difficult problem, due to the lack of reliable methods for extracting melody, harmony, rhythm, and other such descriptors from unstructured audio signals.
Our goal is to develop an approach to melody-based retrieval in audio collections that supports both audio and symbolic queries.
Our algorithm is based on a melodic mid-level representation and locality-sensitive hashing, and it ranks results according to their melodic similarity to the query. We introduce a beat-synchronous mid-level melodic representation consisting of salient melodic lines extracted from the analyzed audio signal. We propose the use of a two-dimensional shift-invariant transform to extract shift-invariant melodic fragments from this representation and demonstrate how such fragments can be indexed and stored in a song database. An efficient search algorithm based on locality-sensitive hashing then performs retrieval according to the similarity of melodic fragments. On the cover song detection task, good results are achieved for both audio and symbolic queries, while fast retrieval performance makes the proposed system suitable for large databases.
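The pipeline above can be sketched in a few lines. The snippet below is an illustrative toy, not the published implementation: it uses the magnitude of a 2D Fourier transform as a stand-in for the paper's two-dimensional shift-invariant transform (the magnitude spectrum is invariant to circular shifts in time and pitch), and random-hyperplane hashing as a simple form of locality-sensitive hashing. All names (`fragment_features`, `LSHIndex`) and parameters are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def fragment_features(patch):
    # The magnitude of the 2D Fourier transform is invariant to
    # circular shifts along both axes (time and pitch) -- a simple
    # stand-in for a shift-invariant fragment descriptor.
    mag = np.abs(np.fft.fft2(patch))
    v = mag.flatten()
    return v / (np.linalg.norm(v) + 1e-12)

class LSHIndex:
    """Random-hyperplane LSH: fragments whose feature vectors point in
    similar directions tend to fall into the same hash bucket."""

    def __init__(self, dim, n_bits=16):
        self.planes = rng.normal(size=(n_bits, dim))
        self.buckets = {}

    def _key(self, v):
        # Sign pattern of projections onto random hyperplanes.
        return tuple(bool(b) for b in (self.planes @ v) > 0)

    def add(self, song_id, v):
        self.buckets.setdefault(self._key(v), []).append((song_id, v))

    def query(self, v, top_k=3):
        # Only candidates sharing the query's bucket are scored,
        # then ranked by cosine similarity (vectors are unit-norm).
        cands = self.buckets.get(self._key(v), [])
        scored = [(sid, float(v @ u)) for sid, u in cands]
        return sorted(scored, key=lambda s: -s[1])[:top_k]
```

A query fragment that is a shifted copy of an indexed fragment maps to the same features, lands in the same bucket, and is returned with similarity close to 1. The actual system indexes many fragments per song and aggregates fragment matches into a song-level melodic-similarity score.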
For more details, see:
- M. Marolt (2008). A Mid-Level Representation for Melody-based Retrieval in Audio Collections, IEEE Transactions on Multimedia, vol. 10, no. 8, pp. 1617-1625, Dec. 2008.
- M. Marolt (2007). Performing Query-by-Melody on Audio Collections, presented at the 154th Meeting of the Acoustical Society of America, New Orleans, USA.
- M. Marolt (2006). A Mid-Level Melody-based Representation for Calculating Audio Similarity, in Proceedings of ISMIR 2006, Victoria: Department of Computer Science, University of Victoria, pp. 280-285.