Please use this identifier to cite or link to this item:
http://arks.princeton.edu/ark:/88435/dsp01sn00b141m
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.advisor | Norman, Kenneth A | - |
dc.contributor.advisor | Cohen, Jonathan D | - |
dc.contributor.author | Lositsky, Olga | - |
dc.contributor.other | Neuroscience Department | - |
dc.date.accessioned | 2017-09-22T14:44:56Z | - |
dc.date.available | 2017-09-22T14:44:56Z | - |
dc.date.issued | 2017 | - |
dc.identifier.uri | http://arks.princeton.edu/ark:/88435/dsp01sn00b141m | - |
dc.description.abstract | The principles that govern how memories are organized into categories, and how these categories then inform predictions for decision-making, constitute active areas of research. In this work, we investigated how people learn associations with different contexts (external cues that do not require responses but guide future behavior), how these associations are used to prepare decisions, and how they influence time perception. The first study tested the hypothesis that people would interpret contextual changes in a story (including changes in spatial, social, or temporal factors) as ‘event boundaries’, causing them to segment the story into distinct episodes. We predicted that more segmented intervals would appear longer in retrospect. Indeed, people’s retrospective duration estimates were highly correlated with the number of event boundaries in an interval. Using fMRI, we found that activity patterns in brain regions important for binding spatial and temporal elements into episodes changed more during intervals with overestimated durations. The second study investigated how people learn when contextual cues signal a change in the situation. We hypothesized that people merge predictions across contexts into a single cluster when they are similar enough, and only form separate predictions for each context when truly necessary, as predicted by Bayesian inference models. We manipulated the degree to which cues predicted similar response or stimulus probabilities and built a new drift diffusion model to measure how people’s expectations changed by cue. We found that people combined response and stimulus probabilities across cues when the probabilities were similar, despite maintaining entirely separate behavioral rules (stimulus-response associations) for each cue. Moreover, given the distortions in people’s probability memories (caused by merging), model fits showed that contextual predictions were used almost optimally in decisions. The third study tested whether contextual associations were always used to prepare the decision proactively, or whether they could inform the decision reactively. Simulations from the diffusion model revealed that across-trial probability learning, as well as working memory failures, must be quantified in order to isolate reactive strategies in behavior, and that response times provide a cleaner index of strategy than error rates. | - |
dc.language.iso | en | - |
dc.publisher | Princeton, NJ : Princeton University | - |
dc.relation.isformatof | The Mudd Manuscript Library retains one bound copy of each dissertation. Search for these copies in the library's main catalog: http://catalog.princeton.edu | - |
dc.subject | Cognitive control | - |
dc.subject | Context-dependent decision-making | - |
dc.subject | Long-term memory | - |
dc.subject | Probability learning | - |
dc.subject | Structure learning | - |
dc.subject | Time perception | - |
dc.subject.classification | Neurosciences | - |
dc.subject.classification | Cognitive psychology | - |
dc.title | Influence of contextual change on time perception, probability learning, and decision-making dynamics | - |
dc.type | Academic dissertations (Ph.D.) | - |
pu.projectgrantnumber | 690-2143 | - |
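The abstract above describes building a drift diffusion model to measure how contextual cues shift people's expectations. As a minimal illustrative sketch of that general class of model (not the dissertation's actual model), the code below simulates a standard two-bound diffusion process in which a cue's prediction biases the starting point toward one response; all parameter values and function names here are assumptions chosen for illustration.

```python
import random

def ddm_trial(drift, start, bound=1.0, noise=1.0, dt=0.001, rng=random):
    """Simulate one two-bound drift-diffusion trial.

    Returns (choice, rt): choice is +1 if the upper bound is hit first,
    -1 for the lower bound; rt is the decision time in seconds.
    """
    x, t = start, 0.0
    step_sd = noise * dt ** 0.5  # diffusion noise scales with sqrt(dt)
    while abs(x) < bound:
        x += drift * dt + rng.gauss(0.0, step_sd)
        t += dt
    return (1 if x >= bound else -1), t

def simulate(n_trials, drift, start, seed=0):
    """Return (proportion of upper-bound choices, mean decision time)."""
    rng = random.Random(seed)
    trials = [ddm_trial(drift, start, rng=rng) for _ in range(n_trials)]
    p_upper = sum(1 for choice, _ in trials if choice == 1) / n_trials
    mean_rt = sum(rt for _, rt in trials) / n_trials
    return p_upper, mean_rt

# A cue predicting the upper response shifts the starting point toward
# the upper bound (start > 0); a neutral cue leaves it at 0.
p_neutral, rt_neutral = simulate(500, drift=1.0, start=0.0, seed=1)
p_biased, rt_biased = simulate(500, drift=1.0, start=0.3, seed=1)
```

With these arbitrary parameters, the cue-biased starting point produces more upper-bound choices and faster decisions than the neutral one; fitting such parameters to behavior is, qualitatively, how models of this kind quantify cue-by-cue expectations.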
Appears in Collections: | Neuroscience |
Files in This Item:
File | Size | Format |
---|---|---|
Lositsky_princeton_0181D_12276.pdf | 14.37 MB | Adobe PDF |