Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01k0698b17d
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Arora, Sanjeev | - |
dc.contributor.author | Risteski, Andrej | - |
dc.contributor.other | Computer Science Department | - |
dc.date.accessioned | 2017-12-12T19:17:24Z | - |
dc.date.available | 2017-12-12T19:17:24Z | - |
dc.date.issued | 2017 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01k0698b17d | - |
dc.description.abstract | A common theme in machine learning is succinct modeling of distributions over large domains. Probabilistic graphical models are one of the most expressive frameworks for doing this. The two major tasks involving graphical models are learning and inference. Learning is the task of calculating the best-fit graphical model from raw data, while inference is the task of answering probabilistic queries for a known graphical model (for example, the marginal distribution of a single variable, or the distribution of a subset of variables after conditioning on the values of some other subset of variables). Learning can be thought of as finding a graphical model that “explains” the raw data, while inference queries extract the “knowledge” the graphical model contains. This thesis introduces new provable techniques for performing both of these tasks, in the context of both latent-variable models, in which a portion of the variables in the graphical model are not observed, and fully observable undirected graphical models (Markov Random Fields). Chapters 2 and 3 will focus on learning latent-variable models, while Chapter 4 will focus on inference in Markov Random Fields. In Chapter 2, I will contribute the first provable results for analyzing variational Bayes: a family of alternating-minimization-style algorithms which is very popular in practice for learning latent-variable models. Despite its popularity with practitioners, the only theoretical guarantees prior to this work concerned convergence to local minima. We will prove that, under reasonable assumptions, in the context of topic models, these algorithms converge to the global minimum. Subsequently, in Chapter 3, we will use the method of moments along with new techniques in tensor decomposition and constrained matrix factorization to derive algorithms for learning noisy-OR networks -- the textbook example of a probabilistic model for causal relationships. Importantly, prior to this work, such techniques were only applicable to linear latent-variable models -- which noisy-OR is not. In Chapter 4, I will contribute a new understanding of a class of variational methods for calculating partition functions in Markov Random Fields. The key technical ingredient is a connection to convex programming hierarchies -- a recent area of interest in combinatorial optimization -- along with approximations of the entropy of a distribution based on low-order moment information. | - |
dc.language.iso | en | - |
dc.publisher | Princeton, NJ : Princeton University | - |
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu | - |
dc.subject | Bayesian | - |
dc.subject | inference | - |
dc.subject | learning | - |
dc.subject | method of moments | - |
dc.subject | provable | - |
dc.subject | variational | - |
dc.subject.classification | Computer science | - |
dc.title | New Techniques for Learning and Inference in Bayesian Models | - |
dc.type | Academic dissertations (Ph.D.) | - |
pu.projectgrantnumber | 690-2143 | - |
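The abstract above defines inference as answering probabilistic queries against a known graphical model, such as computing a marginal, and Chapter 4 concerns approximating partition functions of Markov Random Fields when exact computation is intractable. As context only -- not code from the thesis -- the following is a minimal Python sketch that computes a partition function and a marginal by brute-force enumeration over a tiny, made-up pairwise Markov Random Field; the potentials and variable indices are hypothetical, and the exponential cost of this enumeration is precisely what the variational methods studied in the thesis aim to avoid.

```python
import itertools

# Hypothetical toy model (not from the thesis): 3 binary variables with
# unary potentials psi_i(x_i) and pairwise potentials psi_ij(x_i, x_j).
unary = {0: [1.0, 2.0], 1: [1.5, 0.5], 2: [1.0, 1.0]}
pairwise = {(0, 1): [[2.0, 0.5], [0.5, 2.0]],
            (1, 2): [[1.0, 3.0], [3.0, 1.0]]}

def unnormalized(assignment):
    """Product of all potentials for one full assignment of the variables."""
    p = 1.0
    for i, table in unary.items():
        p *= table[assignment[i]]
    for (i, j), table in pairwise.items():
        p *= table[assignment[i]][assignment[j]]
    return p

# Partition function Z: sum of unnormalized weights over all 2^3 assignments.
states = list(itertools.product([0, 1], repeat=3))
Z = sum(unnormalized(x) for x in states)

# Marginal query: P(x_0 = 1) is the total weight of assignments with x_0 = 1,
# normalized by Z.
marginal_x0_is_1 = sum(unnormalized(x) for x in states if x[0] == 1) / Z

print(f"Z = {Z:.3f}, P(x_0 = 1) = {marginal_x0_is_1:.3f}")
```

For this three-variable model the enumeration covers only 8 assignments; in models with many variables such exact summation is infeasible, which is the regime the thesis's variational and convex-hierarchy-based approximations address.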
Appears in Collections: | Computer Science |
Files in This Item:
File | Description | Size | Format | |
---|---|---|---|---|
Risteski_princeton_0181D_12376.pdf | | 1.51 MB | Adobe PDF | View/Download |
Items in Dataspace are protected by copyright, with all rights reserved, unless otherwise indicated.