Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP
Venue: BlackboxNLP
Type: Workshop
Topics: Interpretability, Formal Languages, Architectures
Authors: Sophie Hao, William Merrill, Dana Angluin, Robert Frank, Noah Amsel, Andrew Benz, and Simon Mendelsohn (all Yale University)
Published: November 1, 2018
Abstract
This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models. Due to the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, context-free language modelling, and cumulative XOR evaluation. Examining the behavior of our networks, we show that stack-augmented RNNs can discover intuitive stack-based strategies for solving our tasks. However, stack RNNs are more difficult to train than classical architectures such as LSTMs. Rather than employ stack-based strategies, more complex networks often find approximate solutions by using the stack as unstructured memory.
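The stack RNNs described in the abstract augment a recurrent controller with a differentiable stack that is pushed to, popped from, and read with continuous strengths, which is what lets the whole model train by gradient descent. As a rough illustration of that idea, here is a minimal sketch of one superposition update in the style of Grefenstette et al.'s (2015) neural stack; it is not necessarily this paper's exact parameterization, and the function name `stack_step` and its tensor layout are assumptions made for the example.

```python
import torch

def stack_step(V, s, d, u, v):
    """One update of a differentiable stack (a sketch, not this paper's
    exact model).

    V: (t, dim) tensor of stored vectors, oldest first
    s: (t,) tensor of fractional strengths for those vectors
    d: scalar push strength in [0, 1]
    u: scalar pop strength in [0, 1]
    v: (dim,) vector to push this step
    Returns the updated (V, s) and a soft read vector r.
    """
    # Pop phase: consume strength u from the top of the stack downward.
    remaining = u
    popped = []
    for i in range(len(s) - 1, -1, -1):
        popped.append(torch.relu(s[i] - remaining))
        remaining = torch.relu(remaining - s[i])
    popped.reverse()
    s = torch.stack(popped) if popped else s

    # Push phase: append the new vector with strength d.
    V = torch.cat([V, v.unsqueeze(0)], dim=0)
    s = torch.cat([s, d.reshape(1)])

    # Read phase: blend vectors from the top down until a total
    # weight of 1 has been gathered.
    r = torch.zeros_like(v)
    budget = torch.ones(())
    for i in range(len(s) - 1, -1, -1):
        w = torch.minimum(s[i], torch.relu(budget))
        budget = budget - w
        r = r + w * V[i]
    return V, s, r

# Example: push "a" then "b" at full strength, then pop once; the read
# vector recovers "a", i.e. last-in-first-out behavior in the hard limit.
dim = 3
V, s = torch.zeros(0, dim), torch.zeros(0)
a, b = torch.tensor([1., 0., 0.]), torch.tensor([0., 1., 0.])
one, zero = torch.tensor(1.), torch.tensor(0.)
V, s, _ = stack_step(V, s, one, zero, a)                  # push a
V, s, _ = stack_step(V, s, one, zero, b)                  # push b
V, s, r = stack_step(V, s, zero, one, torch.zeros(dim))   # pop b
print(r)  # ~[1, 0, 0]: the read now peeks at a
```

Because every phase is built from `relu` and `min`, the update is differentiable almost everywhere in the push and pop strengths, which is what allows a controller RNN to learn when to push and pop; the abstract's finding is that trained networks do not always exploit this structure and sometimes treat the stack as unstructured memory instead.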