Semantic Music Discovery

By combining sources of social information (web pages, preference ratings, social tags) with content-based music analysis, we can annotate novel songs with semantically meaningful words. Given a text-based query, we can then retrieve a list of relevant songs. This is how we power the CAL Music Discovery Engine, Meerkat Internet Radio Player, the Artist “Image” Browser, and the new MegsRadio.
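The annotate-then-retrieve idea can be sketched as follows. This is a toy illustration, not the CAL system itself: the song names, tag vocabulary, and relevance scores are invented, and the ranking rule (sum of per-word scores) is one simple choice among many.

```python
# Toy tag-based retrieval sketch (all data hypothetical).
# Each song carries per-word relevance scores in [0, 1]; a text query
# is answered by ranking songs on their summed scores for the query words.

annotations = {
    "song_a": {"mellow": 0.9, "acoustic": 0.8, "guitar": 0.7},
    "song_b": {"aggressive": 0.9, "electric": 0.8, "guitar": 0.6},
    "song_c": {"mellow": 0.4, "piano": 0.9},
}

def retrieve(query_words, annotations):
    """Rank songs by total relevance to the query words."""
    scores = {
        song: sum(tags.get(w, 0.0) for w in query_words)
        for song, tags in annotations.items()
    }
    return sorted(scores, key=scores.get, reverse=True)

print(retrieve(["mellow", "guitar"], annotations))
# song_a (1.6) ranks above song_b (0.6) and song_c (0.4)
```

In the real system the scores would come from the combined social and audio models rather than being hand-entered.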


Content-Based Multimedia Retrieval

The Internet is full of useful multimedia content (images, videos, sounds), so it is important that we be able to find it reliably and efficiently. However, relative to text documents, multimedia documents are difficult to index with words. We study techniques that automatically annotate multimedia documents so that they can be found using a typical search engine (e.g., Google, Yahoo!, Bing).
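One minimal way to picture content-based annotation is a nearest-centroid labeler: average the feature vectors of documents known to carry a word, then tag a new document with words whose centroid is nearby. The two-dimensional features and threshold below are invented for illustration; real systems use richer audio or image features and learned classifiers.

```python
# Nearest-centroid annotation sketch (hypothetical 2-D features).
import math

train = {
    "bright": [[0.9, 0.1], [0.8, 0.2]],
    "dark":   [[0.1, 0.9], [0.2, 0.8]],
}

def centroid(vecs):
    """Component-wise mean of a list of feature vectors."""
    return [sum(c) / len(vecs) for c in zip(*vecs)]

def annotate(features, train, threshold=0.5):
    """Tag a document with every word whose centroid lies within threshold."""
    tags = []
    for word, vecs in train.items():
        if math.dist(features, centroid(vecs)) < threshold:
            tags.append(word)
    return tags

print(annotate([0.85, 0.15], train))  # ['bright']
```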


Music Mashup Creation

Artists like Girl Talk and Kutiman create music mashups by piecing together multiple layers of music samples. However, much of their work involves manually selecting and aligning the samples so that they form a musically coherent whole. We are developing technologies like the BeatSyncMashCoder to help with the semi-automatic creation of music mashups.
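A core step in aligning samples is tempo matching: a sample is time-stretched by the ratio of the target tempo to its own tempo so that its beats coincide with the target track's beats. The sketch below illustrates only this arithmetic with made-up tempos; it is not the BeatSyncMashCoder algorithm.

```python
# Tempo-alignment arithmetic sketch (hypothetical BPM values).

def stretch_ratio(sample_bpm, target_bpm):
    """Time-stretch factor that maps the sample onto the target tempo."""
    return target_bpm / sample_bpm

def beat_times(bpm, n_beats, offset=0.0):
    """Beat timestamps in seconds for a steady tempo."""
    period = 60.0 / bpm
    return [offset + i * period for i in range(n_beats)]

ratio = stretch_ratio(sample_bpm=100.0, target_bpm=120.0)
# Speeding up the 100 BPM sample by a factor of 1.2 lands its beats
# exactly on the 120 BPM target's beat grid.
stretched = [t / ratio for t in beat_times(100.0, 4)]
print(ratio, stretched)  # 1.2 [0.0, 0.5, 1.0, 1.5]
```

In practice the stretch is applied with a phase-vocoder or similar time-scale modification so pitch is preserved.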


Annotation Games

We can collect a large number of music tags by deploying casual, web-based video games. As players interact with the game, they provide semantic information about music that can be used both to directly index songs and to train our computer audition system. You can help us collect data for this project by playing Herd It on Facebook.
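Because individual players can be idiosyncratic, game-collected tags are typically kept only when several independent players agree. The sketch below shows this agreement filter on invented data; the actual aggregation used for Herd It may differ.

```python
# Agreement-based tag aggregation sketch (hypothetical player data).
from collections import Counter

player_tags = [
    ("song_a", "mellow"), ("song_a", "mellow"), ("song_a", "loud"),
    ("song_a", "acoustic"), ("song_a", "acoustic"), ("song_a", "acoustic"),
]

def consensus_tags(pairs, min_votes=2):
    """Keep (song, tag) pairs applied by at least min_votes players."""
    counts = Counter(pairs)
    return {tag for (song, tag), n in counts.items() if n >= min_votes}

print(consensus_tags(player_tags))
# 'mellow' (2 votes) and 'acoustic' (3) survive; 'loud' (1) is dropped
```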


Music Boundary Detection

A musical boundary is a transition between two musical segments such as a verse and a chorus. Our goal is to automatically detect musical boundaries using a supervised learning framework.
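One common supervised formulation scores each audio frame by the distance between the average features just before and just after it (a novelty score), then learns a decision threshold from frames labeled boundary or non-boundary. The toy features and the midpoint-threshold learner below are illustrative assumptions, not our actual system.

```python
# Supervised boundary-detection sketch (toy 2-D frame features).
import math

def novelty(features, i, w=2):
    """Distance between mean features in windows before and after frame i."""
    mean = lambda vecs: [sum(c) / len(vecs) for c in zip(*vecs)]
    return math.dist(mean(features[max(0, i - w):i]), mean(features[i:i + w]))

def fit_threshold(scores, labels):
    """Midpoint between the lowest boundary and highest non-boundary score."""
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    return (min(pos) + max(neg)) / 2.0

# Toy track: frames 0-3 form one segment, frames 4-7 another.
features = [[0.0, 1.0]] * 4 + [[1.0, 0.0]] * 4
frames = range(2, 7)
scores = [novelty(features, i) for i in frames]
labels = [i == 4 for i in frames]  # frame 4 is the annotated boundary
theta = fit_threshold(scores, labels)
detected = [i for i, s in zip(frames, scores) if s > theta]
print(detected)  # [4]
```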