Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp019c67wq270
Full metadata record
DC Field: Value [Language]

dc.contributor: McConnell, Mark
dc.contributor.advisor: Engelhardt, Barbara
dc.contributor.author: Smith, Matthew Clarence
dc.date.accessioned: 2016-07-12T13:36:16Z
dc.date.available: 2016-07-12T13:36:16Z
dc.date.created: 2016-05-02
dc.date.issued: 2016-07-12
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp019c67wq270
dc.description.abstract [en_US]: In this paper, we demonstrate a method by which music can be automatically transcribed from audio using feed-forward neural networks. That is, given an audio recording containing multiple instruments, a computer can automatically output the pitch and rhythm information for each instrument's part so that it could be written using standard musical staff notation. We approach this in two steps: First, we use a neural network to separate the composite audio file into separate audio files for each instrument. Then, using those, we utilize a second neural network to perform pitch detection. From that result, we can finally determine the timing for each note empirically. This is then sufficient to notate each instrument's part. We present our results, and discuss the successes and shortcomings of this approach.
dc.format.extent: 31 pages
dc.language.iso [en_US]: en_US
dc.title [en_US]: Automatic Music Transcription Using Feed-Forward Neural Networks
dc.type: Princeton University Senior Theses
pu.date.classyear [en_US]: 2016
pu.department [en_US]: Mathematics
pu.pdf.coverpage: SeniorThesisCoverPage
Appears in Collections: Mathematics, 1934-2020

Files in This Item:
File                     Size       Format
SMITH_Matt_thesis.pdf    809.17 kB  Adobe PDF  (Request a copy)


Items in DataSpace are protected by copyright, with all rights reserved, unless otherwise indicated.