Title: | Automatic Music Transcription Using Feed-Forward Neural Networks |
Authors: | Smith, Matthew Clarence |
Advisors: | Engelhardt, Barbara |
Contributors: | McConnell, Mark |
Department: | Mathematics |
Class Year: | 2016 |
Abstract: | In this paper, we demonstrate a method by which music can be automatically transcribed from audio using feed-forward neural networks. That is, given an audio recording containing multiple instruments, a computer can automatically output the pitch and rhythm information for each instrument's part so that it can be written in standard musical staff notation. We approach this in two steps: first, we use a neural network to separate the composite audio file into a separate audio file for each instrument; then, using those separated files, we apply a second neural network to perform pitch detection. From that result, we determine the timing of each note empirically, which is sufficient to notate each instrument's part. We present our results and discuss the successes and shortcomings of this approach. |
Extent: | 31 pages |
URI: | http://arks.princeton.edu/ark:/88435/dsp019c67wq270 |
Type of Material: | Princeton University Senior Theses |
Language: | en_US |
Appears in Collections: | Mathematics, 1934-2020 |
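
The abstract above describes a two-stage pipeline: one feed-forward network separates the mixture into per-instrument audio, a second network detects pitches, and note timing is then read off empirically. As an illustration only (the thesis PDF itself is access-restricted here), the sketch below shows what the second stage and the timing step might look like. The spectrogram input size, hidden-layer width, 88-pitch output range, detection threshold, and frame rate are all assumptions for the sketch, not details taken from the thesis.

```python
# Minimal sketch (not the thesis code) of feed-forward pitch detection on
# spectrogram frames of one instrument, followed by empirical note timing.
# All sizes and thresholds below are illustrative assumptions.
import numpy as np

FRAME_RATE = 100                      # assumed spectrogram frames per second
N_BINS, N_HIDDEN, N_PITCH = 1024, 256, 88


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def detect_pitches(frames, W1, b1, W2, b2, threshold=0.5):
    """Forward pass of a one-hidden-layer feed-forward network.

    frames: (n_frames, N_BINS) magnitude-spectrogram rows.
    Returns a boolean (n_frames, N_PITCH) pitch-activity matrix.
    """
    hidden = np.tanh(frames @ W1 + b1)    # (n_frames, N_HIDDEN)
    probs = sigmoid(hidden @ W2 + b2)     # (n_frames, N_PITCH), per-pitch probabilities
    return probs > threshold


def frames_to_notes(active):
    """Group consecutive active frames of each pitch into (pitch, onset_s, duration_s)."""
    notes = []
    n_frames = active.shape[0]
    for pitch in range(active.shape[1]):
        run_start = None
        for t in range(n_frames):
            if active[t, pitch] and run_start is None:
                run_start = t                      # note onset
            elif not active[t, pitch] and run_start is not None:
                notes.append((pitch, run_start / FRAME_RATE,
                              (t - run_start) / FRAME_RATE))
                run_start = None                   # note ended
        if run_start is not None:                  # note still sounding at the end
            notes.append((pitch, run_start / FRAME_RATE,
                          (n_frames - run_start) / FRAME_RATE))
    return notes
```

With randomly initialized weights this only exercises the shapes; in the approach the abstract describes, the network weights would come from training on labeled audio, and the separation stage would supply the per-instrument spectrogram frames.
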
Files in This Item:
File | Size | Format | Access
---|---|---|---
SMITH_Matt_thesis.pdf | 809.17 kB | Adobe PDF | Request a copy
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.