Please use this identifier to cite or link to this item: http://arks.princeton.edu/ark:/88435/dsp016m311r62b
Full metadata record
DC Field: Value (Language)
dc.contributor.advisor: August, David I (en_US)
dc.contributor.author: Johnson, Nick P. (en_US)
dc.contributor.other: Computer Science Department (en_US)
dc.date.accessioned: 2015-06-23T19:41:26Z
dc.date.available: 2015-06-23T19:41:26Z
dc.date.issued: 2015 (en_US)
dc.identifier.uri: http://arks.princeton.edu/ark:/88435/dsp016m311r62b
dc.description.abstract: Now that parallel architectures are common, software must exploit multiple cores to fully utilize hardware resources and achieve efficient execution. Restructuring applications for explicit parallelism requires developers to reason about low-level details to avoid concurrency bugs and to achieve parallel performance. Automatic thread extraction relieves developers of this arduous task. This dissertation presents a compiler middleware for automatic thread extraction: analyses and transformations that allow the compiler to deliver parallel performance for sequentially specified input programs. This middleware reconsiders the compilation infrastructure, presenting alternative technologies better suited to the needs of automatic thread extraction. Specifically:

Collaborative Dependence Analysis: Since no analysis algorithm can precisely decide all analysis facts, this dissertation combines simple analysis algorithms into a collaborative ensemble: the ensemble can disprove dependence queries that no member disproves alone. Collaboration enables factored development, which prescribes building small, orthogonal analysis algorithms tailored to the needs of analysis clients. Results demonstrate independently developed analysis algorithms collaborating to solve complex, multiple-logic queries. Further, results demonstrate a large impact on performance: for some benchmarks, analysis strength is the difference between an 11x slowdown and a 28x speedup on 50 cores.

Scaling Analysis to Larger Scopes: The infrastructure builds around Parallel-Stage Decoupled Software Pipelining (PS-DSWP) thread extraction. PS-DSWP targets large, hot program scopes to overcome the non-recurring overheads of parallel execution. However, as scopes grow, the burden of analysis becomes prohibitive. This dissertation contributes a faster algorithm to compute the dependence graph that drives PS-DSWP. The algorithm identifies dependence edges which cannot affect PS-DSWP and skips the dependence analysis queries pertaining to those unimportant edges, reducing analysis time (or affording more expensive analysis algorithms) for the remaining, important queries. Evaluation demonstrates that the algorithm computes the DAGSCC twice as fast using half as many dependence analysis queries, without sacrificing analysis precision.

Incorporating Speculation into the Compilation Pipeline: A parallelization system may speculate various properties to enable thread extraction. This dissertation presents design patterns that simplify the development of novel speculation types.

The dissertation integrates these contributions into a robust and flexible middleware upon which many works are built. (en_US)
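The collaborative-ensemble idea in the abstract, in which a query is disproved if any member disproves it and members may pose premise sub-queries back to the whole ensemble, can be sketched roughly as follows. This is a minimal illustration only: the class and function names, the query encoding, and the toy instruction-to-pointer map are all invented for this sketch and are not the dissertation's actual interfaces.

```python
from typing import Callable, List, Tuple

# A query is a toy triple: ("dep", src, dst) asks whether instruction
# src may depend on dst; ("no-alias", p, q) asks whether pointers p
# and q may alias. (Hypothetical encoding, for illustration only.)
Query = Tuple[str, str, str]

# A member is a function (query, ask) -> bool, where `ask` lets the
# member pose premise sub-queries back to the whole ensemble.
Member = Callable[[Query, Callable[[Query], bool]], bool]

class Ensemble:
    """Disproves a query if ANY member disproves it; members may
    recursively consult the ensemble for their premises."""

    def __init__(self, members: List[Member]):
        self.members = members

    def disproved(self, query: Query, depth: int = 0) -> bool:
        if depth > 8:  # cut off unbounded premise chains
            return False
        ask = lambda q: self.disproved(q, depth + 1)
        return any(m(query, ask) for m in self.members)

# Member 1: a simple fact base that knows two specific pointers
# come from distinct allocations, so they cannot alias.
def distinct_allocations(query: Query, ask) -> bool:
    kind, a, b = query
    return kind == "no-alias" and {a, b} == {"p", "q"}

# Member 2: disproves a memory dependence between two instructions,
# but only given the premise that their pointers do not alias --
# a premise it cannot establish itself, so it asks the ensemble.
def dep_via_no_alias(query: Query, ask) -> bool:
    kind, src, dst = query
    if kind != "dep":
        return False
    ptr = {"store_p": "p", "load_q": "q"}  # toy instruction->pointer map
    if src in ptr and dst in ptr:
        return ask(("no-alias", ptr[src], ptr[dst]))
    return False

ensemble = Ensemble([distinct_allocations, dep_via_no_alias])
# Neither member alone disproves the dependence query, but together:
print(ensemble.disproved(("dep", "store_p", "load_q")))  # True
```

Note how neither member alone suffices: the allocation fact base never answers "dep" queries, and the dependence member cannot discharge its own aliasing premise. The ensemble disproves the query only through their collaboration, which is the property the abstract claims for the real system.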
dc.language.iso: en (en_US)
dc.publisher: Princeton, NJ: Princeton University (en_US)
dc.relation.isformatof: The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog (http://catalog.princeton.edu). (en_US)
dc.subject: Analysis (en_US)
dc.subject: Compiler (en_US)
dc.subject: Parallelization (en_US)
dc.subject: Programming Language (en_US)
dc.subject.classification: Computer science (en_US)
dc.title: Static Dependence Analysis in an Infrastructure for Automatic Parallelization (en_US)
dc.type: Academic dissertations (Ph.D.) (en_US)
pu.projectgrantnumber: 690-2143 (en_US)
Appears in Collections: Computer Science

Files in This Item:
File: Johnson_princeton_0181D_11430.pdf (2.53 MB, Adobe PDF)


Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.