Recent Changes

Wednesday, March 13

  1. page Wrap-up Feature Learning edited ... Overall, participation was good, but perhaps not ideal for a barcamp session. As conversation …
    ...
    Overall, participation was good, but perhaps not ideal for a barcamp session. As conversation progressed, it became clear that there were two camps of individuals content to voice their thoughts in the forum: those experienced in deep learning and those skeptical of it. These two groups comprised maybe 40% of the total number of attendees, so most didn't actually contribute to the conversation. At least one other participant suggested having a tutorial on Deep Learning at next year's ISMIR, which might make sense given what seems to be a predisposition toward more of a Q&A format of experts and interested individuals.
    Meta-Conclusion
    ...
    together possible lecturers
    lecturers.
    (view changes)

Friday, March 8

  1. page Wrap-up Feature Learning edited ... Overall, participation was good, but perhaps not ideal for a barcamp session. As conversation …
    ...
    Overall, participation was good, but perhaps not ideal for a barcamp session. As conversation progressed, it became clear that there were two camps of individuals content to voice their thoughts in the forum: those experienced in deep learning and those skeptical of it. These two groups comprised maybe 40% of the total number of attendees, so most didn't actually contribute to the conversation. At least one other participant suggested having a tutorial on Deep Learning at next year's ISMIR, which might make sense given what seems to be a predisposition toward more of a Q&A format of experts and interested individuals.
    Meta-Conclusion
    ...
    together possible lecturers.

    (view changes)

Thursday, December 20

  1. page Program suggestions edited ... 12th October: The event was a real blast, and participation exceeded even our wildest hopes (a…
    ...
    12th October: The event was a real blast, and participation exceeded even our wildest hopes (and boy, they were wild). Thanks to all involved. Here's a quick snap of the event's program at 4pm, immediately after the program building session and before the first session batch. All empty slots quickly filled up in the remainder of the afternoon, to the point of running out of space by the end of the day. Watch this space for more pictures of the complete event, and updates about session debriefs, reviews, etc. JJ Aucouturier
    {schedule.jpg} Picture by Mohamed Sordo (@neomoha)
    UPDATE 19th December.
    Links to abstracts published after the event, on ismir.net: http://ismir2012.ismir.net/event/programme/#LBD
    **Using Linked Open Data for Novel Artist Recommendations**
    Stephan Baumann and Rafael Schirru
    German Research Center for Artificial Intelligence
    **Chordify: Chord transcription for the masses**
    W. Bas de Haas1,3, José Pedro Magalhães2,3, Dion ten Heggeler3, Gijs Bekenkamp3, Tijmen Ruizendaal3
    1Department of Information and Computing Sciences, Utrecht University, 2Department of Computer Science, University of Oxford, 3Chordify
    **A Music similarity game prototype using the CASIMIR API**
    Daniel Wolff1, Guillaume Bellec2
    1City University London, School of Informatics, Department of Computing, 2ENSTA ParisTech
    **Notes from the ISMIR12 Late-Breaking session on evaluation in music information retrieval**
    Geoffroy Peeters1, Julián Urbano2, Gareth J. F. Jones3
    1STMS IRCAM-CNRS-UPMC, 2University Carlos III of Madrid, 3Dublin City University
    **Infrastructures and Interfaces for data collection in MIR**
    Tillman Weyde and Daniel Wolff
    Department of Computing, City University London
    **Music Imagery IR: Bringing the song on your mind back to your ears**
    Sebastian Stober1, Jessica Thompson2
    1Data & Knowledge Engineering Group, Otto-von-Guericke-Universität Magdeburg, 2Bregman Music and Auditory Research Studio, Dartmouth College
    **Late-break session on Music Structure Analysis**
    Bruno Rocha1, Jordan B. L. Smith2, Geoffroy Peeters3, Joe Cheri Ross4, Oriol Nieto5, Jan Van Balen6
    1University of Amsterdam, 2Queen Mary University of London, 3IRCAM-CNRS STMS, 4Indian Institute of Technology, Bombay, 5New York University, 6Utrecht University
    **MIReS Roadmap: Challenges for Discussion**
    MIReS consortium
    **Shared Open Vocabularies and Semantic Media**
    Gyorgy Fazekas, Sebastian Ewert, Alo Allik, Simon Dixon, Mark Sandler
    Centre for Digital Music, Queen Mary University of London
    **Teaching MIR: educational resources related to MIR**
    Emilia Gómez
    Music Technology Group, Universitat Pompeu Fabra
    **Past, Present and Future in Ethnomusicology: the computational challenge**
    Sergio Oramas1, Olmo Cornelis2
    1Polytechnic University of Madrid, 2University College Ghent

    MIR and Impact Factor
    How can we improve MIR-related journals' Impact Factor?
    (view changes)

Sunday, December 2

  1. page Program suggestions edited ... As said by Fabien and Julian, there is a whole session on evaluation on Friday morning, including a …
    ...
    As said by Fabien and Julian, there is a whole session on evaluation on Friday morning, including a round-table (links here: http://ismir2012.ismir.net/event/satellite-events#eval); at this link you will find the preliminary set of questions that will be addressed by the panel members; if you're thinking about other important topics to be discussed, please indicate them here, drop me an email, or just tell me; we could then discuss them during the panel and during the late-breaking session. Geoffroy
    The CASIMIR API with an example Game and Survey
    ...
    Wolff@City University Guillaume Bellec Tillman Weyde
    Infrastructures and Interfaces for Data Collection in Music Information Retrieval
    ...
    gathered using Games With A Purpose. We use
    ...
    this ISMIR. We would like to discuss possible infrastructures and standardised building blocks that help improve data collection and support sharing and extensibility of datasets (a sketch of what such a shared record could look like follows this entry). Tillman Weyde, Daniel Wolff@City University
    We would like to see further presentations and discussions on the topic of collecting user ground truth data, including all relevant disciplines and evaluations like MIREX, which have already defined some standards.
    Melody Extraction
    (view changes)
    2:59 am
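    The "standardised building blocks" mentioned in the data-collection item above suggest agreeing on a common record format for collected judgments, so that games, surveys and MIREX-style evaluations can share and extend datasets. The snippet below is a hypothetical illustration of one such record; the field names and values are assumptions for discussion, not an agreed MIR standard or anything defined by the CASIMIR project.
    import json

    # One collected similarity judgment, in a hypothetical shared format.
    record = {
        "dataset": "magnatagatune",            # source collection of the clips
        "clip_ids": ["2", "6", "10"],          # items presented to the participant
        "task": "odd_one_out_similarity",      # what the participant was asked to do
        "response": "10",                      # the clip judged least similar
        "interface": "game",                   # "game" vs. "survey": the comparison raised above
        "participant": {"id": "anon-42", "collected": "2012-12-02T12:00:00Z"},
    }

    print(json.dumps(record, indent=2))        # serialise for storage or exchange
    A shared schema like this is what would make results from games and plain surveys directly comparable, which is exactly the open question raised in the item above.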

Thursday, November 29

  1. page Program suggestions edited ... The CASIMIR API with an example Game and Survey We would like to present a game with a purpos…
    ...
    The CASIMIR API with an example Game and Survey
    We would like to present a game with a purpose and a general web API we are currently developing for collecting music similarity data. Here, the API defines the data formats and the tasks to be solved by the user. It also takes charge of storing survey and user data and of the statistical selection of samples. Still, parameters like genre combinations can be set. The API provides information and audio URLs for clips from the MagnaTagATune and Million Song datasets. It is currently being tested within a music comparison survey. Our intention is to encourage further surveys where efforts can be focused on the user interface design, leaving the sample management to the API (a hypothetical usage sketch follows this entry). Daniel Wolff@City University
    Infrastructures and Interfaces for Data Collection in Music Information Retrieval
    Supervised training of MIR models on ground truth data is a common technique in our community. Yet, only little data is available on users' perception of a given piece of audio or music. How can we get such data for song similarity or emotion? How can we make it accessible and its results comparable to others? Recently, such information has been gathered using games with a purpose. We use such data in our paper at this ISMIR. But the effect on the collected data of using games, with their specific motivating mechanisms, instead of surveys is still unclear. Daniel Wolff@City University
    We would like to see further presentations and discussions on the topic of collecting user ground truth data, including all relevant disciplines and evaluations like MIREX, which have already defined some standards.
    (view changes)
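    Since the item above describes the CASIMIR web API only in broad strokes (it hands out clips and tasks, stores responses, and handles sample selection), here is a minimal sketch of what a survey front-end talking to such an API might look like. The base URL, endpoint names, parameters and JSON fields below are illustrative assumptions, not the documented CASIMIR interface.
    import json
    import urllib.parse
    import urllib.request

    API_BASE = "http://example.org/casimir/api"  # placeholder base URL, not the real service

    def fetch_task(genres=("rock", "jazz")):
        # Ask the server for a similarity task: a task id plus URLs of audio clips
        # (e.g. from MagnaTagATune or the Million Song Dataset, as described above).
        query = urllib.parse.urlencode({"task": "similarity", "genres": ",".join(genres)})
        with urllib.request.urlopen(API_BASE + "/task?" + query) as resp:
            return json.load(resp)  # assumed shape: {"task_id": ..., "clips": [...]}

    def submit_judgment(task_id, odd_one_out):
        # Send the participant's answer back; storage and sample statistics stay server-side.
        payload = json.dumps({"task_id": task_id, "odd_one_out": odd_one_out}).encode("utf-8")
        req = urllib.request.Request(API_BASE + "/judgment", data=payload,
                                     headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200

    # A survey UI would then loop:
    #   task = fetch_task(); play task["clips"]; submit_judgment(task["task_id"], choice)
    Keeping sample selection and storage behind such an interface is what lets different front-ends, whether games or plain surveys, focus on interface design while reusing the same data pipeline, as the item above proposes.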

Wednesday, October 24

  1. page Wrap-up Feature Learning edited ... Course of Discussion Jan tried to start the discussion with a one-minute demo on how unsuperv…
    ...
    Course of Discussion
    Jan tried to start the discussion with a one-minute demo on how unsupervised learning found reliable audio features for speech and music detection, but encountered a technical problem.
    ...
    more experienced participants, relating unsupervised learning to manifold learning, density estimation or data compression.
    It was
    ...
    "real" data waswere also discussed
    ...
    in deep learning. In contrast to manually designed systems, (deep) feature learning saves us from having to find good parameters (e.g., filter coefficients) for a chosen architecture, though.
    After 30 minutes, J.-J. Aucouturier promptly ended the session by ringing a bell. Most of the participants left the room, and a small group of 5-6 people (including the moderators) stood in a circle and continued for a while. At the end, Geoffroy Peeters asked whether there was a toolbox for Deep Learning, and we pointed him to deeplearning.net.
    Conclusion
    ...
    into a Q&A session on
    Overall, participation was good, but perhaps not ideal for a barcamp session. As conversation progressed, it became clear that there were two camps of individuals content to voice their thoughts in the forum: those experienced in deep learning and those skeptical of it. These two groups comprised maybe 40% of the total number of attendees, so most didn't actually contribute to the conversation. At least one other participant suggested having a tutorial on Deep Learning at next year's ISMIR, which might make sense given what seems to be a predisposition toward more of a Q&A format of experts and interested individuals.
    Meta-Conclusion
    (view changes)
    6:17 am

Monday, October 22

  1. page Wrap-up Feature Learning edited ... After 30 minutes, J.-J. Aucouturier promptly ended the session by ringing a bell. Most of the …
    ...
    After 30 minutes, J.-J. Aucouturier promptly ended the session by ringing a bell. Most of the participants left the room, and a small group of 5-6 people (including the moderators) stood in a circle and continued for a while. At the end, Geoffroy Peeters asked whether there was a toolbox for Deep Learning, and we pointed him to deeplearning.net.
    Conclusion
    ...
    short session.
    Even so, given the technical nature of the topic, it probably would have been beneficial to start with a 2-3 minute, high-level review of the main concepts with illustrations to quickly give everyone at least a basic foundation. This is probably good practice for all late-breaking sessions that assume or require some kind of prerequisite knowledge, as opposed to a topic that might be more accessible, e.g., Teaching MIR.
    Overall, participation was good, but perhaps not ideal for a barcamp session. As conversation progressed, it became clear that there were two camps of individuals content to voice their thoughts in the forum: those experienced in deep learning and those skeptical of it. These two groups comprised maybe 40% of the total number of attendees, so most didn't actually contribute to the conversation. At least one other participant suggested having a tutorial on Deep Learning at next year's ISMIR, which might make sense given what seems to be a predisposition toward more of a Q&A format of experts and interested individuals.
    Meta-Conclusion
    Late-breaking sessions have the potential to lay the foundation for a tutorial. Firstly, they serve as a test-bed for possible tutorial topics, and secondly, they also bring together possible lecturers.
    (view changes)
  2. page Wrap-up Feature Learning edited ... In the meantime, Eric attempted to get a sense of what the group hoped to accomplish in the se…
    ...
    In the meantime, Eric attempted to get a sense of what the group hoped to accomplish in the session. After some expected---and admittedly unproductive---banter between the resident "deep learners," a lively discussion emerged, sparked by a question about unsupervised training. The sentiment was expressed that it seems a bit like magic that a system might automatically learn anything on its own, and this was addressed by multiple (complementary or sometimes contrary) responses from the more experienced participants.
    It was at this point that the tone of the session started to take shape, where participants unfamiliar with, or unconvinced by, deep learning (but otherwise self-confident) began asking questions about concepts they didn't understand or naming specific doubts regarding the viability of these methods. These questions were mainly answered, for better or worse, by a select few, however. Regardless, some other topics that came up during this stretch focused on steering one architecture toward different applications from unsupervised data, the influence of supervised fine-tuning, ground-truth data requirements, and the difference between types of supervision during training. The rationale behind autoencoders and the idea of intrinsic probability density of "real" data was also discussed briefly.
    ...
    technical problems fixed to illustrate some of the topics that were covered. It proved to be particularly helpful to simply show a diagram of a multi-layer architecture, as this initiated some discussion around model selection. The conversation then focused on architectural design
    ...
    and generally calling into question the guesswork nature of neural networks. After a
    ...
    three-layer network. He offered the notion that domain
    ...
    decisions in manually designed MIR systems, so it could also steer our choice of networks in deep learning as well.
    After 30 minutes, J.-J. Aucouturier promptly ended the session by ringing a bell. Most of the participants left the room, and a small group of 5-6 people (including the moderators) stood in a circle and continued for a while. At the end, Geoffroy Peeters asked whether there was a toolbox for Deep Learning, and we pointed him to deeplearning.net.
    Conclusion
    (view changes)
  3. page Wrap-up Feature Learning edited ... Course of Discussion Jan tried to start the discussion with a one-minute demo on how unsuperv…
    ...
    Course of Discussion
    Jan tried to start the discussion with a one-minute demo on how unsupervised learning found reliable audio features for speech and music detection, but encountered a technical problem.
    Eric instead ...
    Very soon,
    In the meantime, Eric attempted to get a sense of what the group hoped to accomplish in the session. After some expected---and admittedly unproductive---banter between the resident "deep learners," a lively discussion emerged, sparked by a question about unsupervised training. The sentiment was expressed that it seems a bit like magic that a system might automatically learn anything on its own, and this was addressed by multiple (complementary or sometimes contrary) responses from the more experienced participants.
    It was at this point that the tone of the session started to take shape, where participants unfamiliar with, or unconvinced by, deep learning (but otherwise self-confident) began asking questions about concepts they didn't understand or naming specific doubts regarding the viability of these methods. These questions were mainly answered, for better or worse, by a select few, however. Regardless, some other topics that came up during this stretch focused on steering one architecture toward different applications from unsupervised data, the influence of supervised fine-tuning, ground-truth data requirements, and the difference between types of supervision during training. The rationale behind autoencoders and the idea of intrinsic probability density of "real" data was also discussed briefly.
    About 20
    ...
    the demo again with the
    ...
    fixed. This elicited a question from the group regarding architectural design of "deep networks" and how one might go about crafting one, what rationale factors into these decisions, and generally asking about the guesswork nature of model selection. After a few responses from various folks in the room, Eric hopped on the projector and showed ...
    ...
    a diagram of tempo estimation as a three-layer network, motivating the idea that domain knowledge already steers our architectural decisions in MIR (a hypothetical sketch of such a three-stage pipeline follows these entries).
    After 30 minutes, J.-J. Aucouturier promptly ended the session by ringing a bell. Most of the participants left the room, and a small group of 5-6 people (including the moderators) stood in a circle and continued for a while. At the end, Geoffroy Peeters asked whether there was a toolbox for Deep Learning, and we pointed him to deeplearning.net.
    Conclusion
    (view changes)
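    As a concrete illustration of the point raised in the entries above (that domain knowledge already fixes the architecture of hand-designed MIR systems, much as it could guide the choice of a deep network), here is a minimal, hypothetical sketch of tempo estimation written as the three-layer pipeline Eric's diagram alluded to: an onset-strength layer, a periodicity layer, and a tempo-selection layer. This is not the code shown at the session; numpy and all function names are assumptions for illustration.
    import numpy as np

    def onset_strength(frames):
        # Layer 1: spectral flux -- half-wave-rectified increase in magnitude between frames.
        window = np.hanning(frames.shape[1])
        mags = np.abs(np.fft.rfft(frames * window, axis=1))
        flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)
        return np.concatenate(([0.0], flux))

    def periodicity(novelty):
        # Layer 2: autocorrelation of the onset-strength (novelty) curve.
        novelty = novelty - novelty.mean()
        return np.correlate(novelty, novelty, mode="full")[len(novelty) - 1:]

    def pick_tempo(acf, frame_rate, lo=60.0, hi=200.0):
        # Layer 3: strongest autocorrelation lag inside a plausible BPM range.
        lags = np.arange(len(acf))
        bpm = 60.0 * frame_rate / np.maximum(lags, 1)   # avoid division by zero at lag 0
        mask = (bpm >= lo) & (bpm <= hi)
        best_lag = lags[mask][np.argmax(acf[mask])]
        return 60.0 * frame_rate / best_lag

    def estimate_tempo(signal, sr, frame_len=2048, hop=512):
        # Frame a mono signal and run the three layers in sequence.
        n = 1 + (len(signal) - frame_len) // hop
        frames = np.stack([signal[i * hop:i * hop + frame_len] for i in range(n)])
        return pick_tempo(periodicity(onset_strength(frames)), frame_rate=sr / hop)
    Every hard-coded choice here (the window, the hop size, the 60-200 BPM prior) is exactly the kind of domain knowledge that, in a learned system, would instead constrain the network architecture and the training data.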
