In this blog series, we summarise our favourite non-conventional machine learning and artificial intelligence papers.

Distillation as a Defence to Adversarial Perturbations against Deep Neural Networks

N. Papernot et al. (2016)

Papernot et al. introduce defensive distillation as a methodology for providing deep neural networks with resistance to adversarial attacks. Distillation [see last week's post for a review] is a technique for transferring the predictive correlations and ambiguity contained in the logits of a network's final predictive layer into a second network. The defensive adaptation of distillation is argued to work by expanding the radius by which a data point x must be translated before the network misclassifies it; the metric used to evaluate this is referred to as robustness. Since adversarial attacks generally depend on small shifts applied to x, models that are more robust in this sense are naturally more resistant to such attacks.
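As a rough sketch of the mechanism (our own illustration, not the authors' code), defensive distillation trains a teacher at a high temperature T and then trains a distilled network of the same architecture on the teacher's softened probabilities at the same temperature:

```python
# Illustrative sketch of defensive distillation (assumed teacher/student models).
import torch
import torch.nn.functional as F

T = 100.0  # distillation temperature used in the paper's strongest setting

def soft_labels(teacher_logits, T):
    # Softmax at high temperature spreads probability mass across classes,
    # exposing the relative information contained in the teacher's logits.
    return F.softmax(teacher_logits / T, dim=-1)

def distillation_loss(student_logits, teacher_logits, T):
    # Cross-entropy between the student's temperature-scaled predictions
    # and the teacher's soft labels.
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = soft_labels(teacher_logits, T)
    return -(p_teacher * log_p_student).sum(dim=-1).mean()

# In a training loop (teacher frozen):
# loss = distillation_loss(student(x), teacher(x).detach(), T)
# At test time the distilled network is used at temperature 1 as usual.
```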

The authors evaluate the ability of defensive distillation to improve resilience to adversarial attacks for models trained on the MNIST and CIFAR10 datasets. For MNIST, the success rate of adversarial attacks is reduced from 96% to 0.45% at a high distillation temperature of T=100; for CIFAR10, it is reduced from 88% to 5% at the same temperature. At this temperature, test accuracy drops by only 0.6 percentage points for MNIST and 1.4 percentage points for CIFAR10. The authors show a correlation between increasing the distillation temperature and reducing the adversarial attack success rate, and this is achieved without a significant reduction in accuracy.

Our opinion: The success of distillation at defending against adversarial attacks is a surprising and useful result. However, it is important to explore which kinds of adversarial attack this methodology can, and cannot, defend against.

Generating Sentences from a Continuous Space

S. R. Bowman, L. Vilnis, et al. (2016)

The authors explore the introduction of a variational latent space when training RNN-based language models (RNNLMs). The architecture under consideration is an LSTM encoder coupled to a VAE-like hidden layer with a diagonal Gaussian prior, which is then decoded by another LSTM. The authors introduce techniques such as KL cost annealing and word dropout to prevent the encoded representations of sentences from collapsing into the variational prior. A key insight is that global hidden variables, such as those captured in the latent space of autoencoders, need to be managed carefully when dealing with sequential inputs.
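A minimal sketch of the two anti-collapse tricks, with assumed tensor shapes and names (this is our illustration, not the authors' implementation):

```python
# KL cost annealing and word dropout, the two tricks used to keep the
# global latent variable informative.
import torch

def kl_weight(step, total_anneal_steps=10000):
    # Anneal the KL term's weight from 0 to 1 so the decoder first learns to
    # reconstruct, then is gradually pulled towards the Gaussian prior.
    return min(1.0, step / total_anneal_steps)

def diagonal_gaussian_kl(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ) for a diagonal Gaussian posterior.
    return -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=-1)

def word_dropout(tokens, unk_id, keep_prob=0.7):
    # Replace a fraction of decoder input tokens with <unk> so the decoder
    # must rely on the global latent code rather than autoregression alone.
    mask = torch.rand_like(tokens, dtype=torch.float) < keep_prob
    return torch.where(mask, tokens, torch.full_like(tokens, unk_id))

# Per-batch objective (LSTM reconstruction term assumed):
# loss = reconstruction_nll + kl_weight(step) * diagonal_gaussian_kl(mu, logvar).mean()
```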

The authors test their LSTM-based VAE on the Penn Treebank dataset to measure the benefit of introducing a global latent variable over a basic RNNLM. They show that the global latent variable helps when using beam search to impute missing words into sentences, and that the approach also improves performance against an adversarial classifier. They further show that diverse samples can be generated from the imposed latent-space prior if word dropout is tuned properly, and that the posterior of the model is successful at identifying similar sentences in the corpus. Most interestingly, the authors show that new, coherent sentences can be generated from the linear interpolation between the encodings of input sentences. For example, ‘i want to be with you’ is sampled from the interpolation between the encodings of ‘i want to talk to you’ and ‘she did n’t want to be with him’.
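The interpolation experiment is simple to express. A sketch, assuming hypothetical encode() (returning the posterior mean) and greedy_decode() helpers:

```python
# Linear interpolation ("homotopy") between two sentence encodings.
import torch

def interpolate_sentences(encode, greedy_decode, s1, s2, steps=5):
    z1, z2 = encode(s1), encode(s2)      # posterior means for each sentence
    for t in torch.linspace(0.0, 1.0, steps):
        z = (1 - t) * z1 + t * z2        # point on the line between the codes
        print(greedy_decode(z))          # decode each intermediate latent point
```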

Our opinion: This paper delivers fundamental results for both the properties of structured latent spaces and the techniques required to train them. These insights more than make up for the lack of performance improvement generated from the presented approach.

Attend Before you Act: Leveraging human visual attention for continual learning

K. Khetarpal, D. Precup (2018).

In this paper, Khetarpal and Precup emulate the effects of visual attention on agents in DeepMind Lab's static 3D navigation maze task. Visual attention allows humans to selectively attend to certain parts of their visual input, gathering relevant information and ignoring distractions, yielding an efficient representation. The authors create foveated visual input by applying real-time saliency maps (computed with the spectral residual method) over the original image.
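For reference, a rough sketch of a spectral residual saliency map of the kind the foveated input is built from (our own illustration of the general technique, not the authors' code):

```python
# Spectral residual saliency (Hou & Zhang style), applied to a grayscale frame.
import numpy as np
from scipy.ndimage import uniform_filter, gaussian_filter

def spectral_residual_saliency(gray_image):
    # Fourier transform of the (float, grayscale) image.
    spectrum = np.fft.fft2(gray_image)
    log_amplitude = np.log(np.abs(spectrum) + 1e-8)
    phase = np.angle(spectrum)

    # The spectral residual: log-amplitude minus its local average.
    residual = log_amplitude - uniform_filter(log_amplitude, size=3)

    # Back to the image domain; smooth and normalise to obtain the saliency map.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = gaussian_filter(saliency, sigma=3)
    return saliency / saliency.max()

# The foveated observation is then, roughly, the saliency-weighted frame:
# foveated = spectral_residual_saliency(gray)[..., None] * rgb_frame
```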

The authors make two significant contributions. First, they note that visually attentive agents take slightly longer to train than their unimpaired counterparts. Second, they note that the visually attentive models perform better on transfer learning tasks, where noise is added to the environment.

Our opinion: We question the claim that testing an agent in the same game, but with added noise, constitutes a valid ‘transfer learning problem’. Whilst the results do show some promise, we believe better, dynamic saliency maps could have been chosen, in particular maps which are reactive to the agent's current state.

Combined Reinforcement Learning via Abstract Representations

V. François-Lavet et al. (2018)

In this paper, the authors introduce a new architecture for combining model-free and model-based approaches to reinforcement learning. Dubbed ‘Combined Reinforcement learning via Abstract Representations’ (CRAR), this modular architecture exploits a lower-dimensional abstract space which is shared by an encoder (which learns the model) and a Q-network (which learns the policy). The authors visualise the abstract space by sampling transitions from random policies, convincingly demonstrating that the agents learn the models.

To train the encoder to produce effective representations, auxiliary networks for the transition, reward and discount models are fed these representations and optimised at training time. These components can be considered the model-based element of the architecture, and they force the abstract state to represent the important low-dimensional features of the environment. At test time, only the encoder and Q-network operate: the model-free components.
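Schematically, the training objective combines a Q-learning loss with the auxiliary model losses on the shared abstract state. The sketch below is our reading of the design, with assumed module names and shapes rather than the authors' released code:

```python
# CRAR-style combined training losses on a batch of transitions (s, a, r, gamma, s_next).
import torch
import torch.nn.functional as F

def crar_losses(encoder, q_net, transition, reward_head, discount_head,
                s, a, r, gamma, s_next):
    z, z_next = encoder(s), encoder(s_next)

    # Model-based auxiliary losses shaping the abstract state.
    loss_T = F.mse_loss(transition(z, a), z_next.detach())   # predict next abstract state
    loss_R = F.mse_loss(reward_head(z, a), r)                 # predict reward
    loss_G = F.mse_loss(discount_head(z, a), gamma)           # predict discount

    # Model-free Q-learning loss on the same abstract state.
    q_target = r + gamma * q_net(z_next).max(dim=-1).values.detach()
    q_pred = q_net(z).gather(-1, a.long().unsqueeze(-1)).squeeze(-1)
    loss_Q = F.mse_loss(q_pred, q_target)

    return loss_Q + loss_T + loss_R + loss_G
```

At test time only encoder and q_net are evaluated; the auxiliary heads exist purely to regularise the representation during training.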

Our opinion: This is a well-thought-out paper, which suggests a multitude of different applications and even includes an ablation study to validate the network. It is most certainly worth a read.

 

This post was written by Akbir Khan and Sean Billings, Research Engineers at Spherical Defence Labs.