Title: Understanding the limits of Artificial Intelligence through Adversarial Examples
Authors: Ramchoreeter, Yowan
Advisors: Kornhauser, Alain
Department: Operations Research and Financial Engineering
Certificate Program: Applications of Computing Program
Class Year: 2019
Abstract: For the first time in history, Artificial Intelligence (AI) has matched human intelligence in meaningful tasks such as image recognition. Driven by the advent of deep learning and increased access to data, AI is irreversibly altering a myriad of fields, including transportation, finance, and health care. However, current AI implementations are all plagued by a common Achilles' heel: adversarial examples. An adversarial example is a carefully crafted input which, while having little to no effect on a human observer, is consistently misclassified by even state-of-the-art neural networks. Adversarial examples are therefore important because their existence raises serious questions about the legitimacy and robustness of large-scale applications of artificial intelligence, especially applications where countless lives are at stake. In this paper, we seek to better grasp the limits of current AI implementations by studying adversarial attacks in different settings. We hope that, by better understanding adversarial attacks, we will be able to contribute to making artificial intelligence more robust and secure.
URI: http://arks.princeton.edu/ark:/88435/dsp01pn89d946z
Type of Material: Princeton University Senior Theses
Language: en
Appears in Collections: Operations Research and Financial Engineering, 2000-2020
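
To make the notion of an adversarial example in the abstract more concrete, the sketch below constructs one with the Fast Gradient Sign Method (FGSM), a standard gradient-based attack. The thesis record does not specify which attacks or framework it studies, so the PyTorch usage, the model, the input range, and the epsilon value here are illustrative assumptions, not the author's method.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, epsilon=0.03):
    """Minimal FGSM sketch: nudge each input pixel by +/- epsilon in the
    direction that increases the classification loss, producing an input
    that looks essentially unchanged to a human but can flip the model's
    prediction. The model, epsilon, and [0, 1] pixel range are assumptions."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)   # loss w.r.t. the true label
    loss.backward()                           # gradient of the loss w.r.t. the input
    x_adv = x + epsilon * x.grad.sign()       # one signed-gradient step
    return x_adv.clamp(0.0, 1.0).detach()     # keep pixels in a valid range
```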
Files in This Item:
File | Description | Size | Format
---|---|---|---
RAMCHOREETER-YOWAN-THESIS.pdf | | 4.51 MB | Adobe PDF