Research Projects

Fused-Layer CNN Accelerators

A new paper, "Fused-Layer CNN Accelerators" by Manoj Alwani et al., will be presented at the 49th IEEE/ACM International Symposium on Microarchitecture on Oct. 17, 2016. The proposed technique fuses the processing of multiple CNN layers by modifying the order in which the input data are brought on chip, enabling caching of intermediate data between the evaluation of adjacent CNN layers. As a result, the fused-layer accelerator significantly reduces total data transfer, e.g., by 95% for the first five convolutional layers of the VGGNet-E network using only 362KB of on-chip storage.
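
The idea behind fusion can be illustrated outside hardware. Below is a minimal Torch7 sketch (our illustration, not the paper's accelerator design; the kernel sizes and tile width are assumptions) showing the tile arithmetic: a T x T tile of layer-2 output only requires a (T + k1 + k2 - 2)-wide tile of the original input, so the intermediate layer-1 tile can stay cached on chip instead of being written back to memory.

    require 'nn'

    local k1, k2 = 3, 3                     -- kernel widths of the two fused layers (assumed)
    local T = 8                             -- width of one output tile
    local inTile = T + (k1 - 1) + (k2 - 1)  -- input tile width needed by the fused pair

    local conv1 = nn.SpatialConvolution(3, 16, k1, k1)   -- layer 1: 3 -> 16 feature maps
    local conv2 = nn.SpatialConvolution(16, 32, k2, k2)  -- layer 2: 16 -> 32 feature maps

    -- bring a single input tile "on chip" and evaluate both layers on it
    local tile = torch.randn(3, inTile, inTile)
    local mid  = conv1:forward(tile)  -- intermediate tile: cached, never transferred
    local out  = conv2:forward(mid)   -- one 32 x T x T tile of layer-2 output
    print(out:size())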

Language Modeling a Billion Words

In this Torch7 blog post, we demonstrate how noise-contrastive estimation (NCE) can be used to train a multi-GPU recurrent neural network language model on the Google Billion Words dataset. Full documentation is provided on how to train and evaluate the model, and how to generate samples for qualitative analysis.
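
For a taste of the interface, here is a rough sketch of the NCE output layer from the dpnn package as used in the post, reduced to a single time step without the masking and multi-GPU machinery; the sizes and unigram frequencies below are placeholders, and the full training script in the post remains the reference.

    require 'dpnn'  -- provides NCEModule and NCECriterion

    local vocab, hidden, k = 10000, 250, 25             -- placeholder sizes
    local unigrams = torch.FloatTensor(vocab):uniform() -- stand-in word frequencies

    -- NCE replaces the full softmax over the vocabulary with k noise samples
    local nce  = nn.NCEModule(hidden, vocab, k, unigrams)
    local crit = nn.NCECriterion()

    -- in training mode, NCEModule takes {hiddenState, target} and returns the
    -- probability tensors that NCECriterion expects
    local h      = torch.randn(32, hidden)             -- batch of LSTM hidden states
    local target = torch.LongTensor(32):random(vocab)  -- next-word targets
    local loss = crit:forward(nce:forward{h, target}, target)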

Data Loading Library

Researchers spend a significant amount of time preparing datasets for training and evaluation. The dataload package provides a simple framework for loading and iterating over datasets. Functions are provided for automatically loading common datasets like Google Billion Words, Twitter Sentiment140 and ImageNet.
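
As a quick example, the following sketch (assuming dataload is installed via luarocks) loads MNIST and iterates over sequential mini-batches:

    local dl = require 'dataload'

    -- download (if necessary) and wrap MNIST as train/valid/test loaders
    local train, valid, test = dl.loadMNIST()

    -- iterate over an epoch of 3200 examples in sequential batches of 32
    for n, inputs, targets in train:subiter(32, 3200) do
       -- inputs: 32 x 1 x 28 x 28 images; targets: 32 class labels
       print(string.format("%d examples sampled so far", n))
    end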

Deep Neural Network Library

Element Research has open-sourced the dpnn library, a suite of documented and unit-tested neural network components for deep learning. These include modules implementing the REINFORCE learning algorithm, as well as the popular Inception module, which won the ILSVRC 2014 Detection Challenge. For maximum interoperability, all components conform to Torch7's nn interface.
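
To give a small taste of the REINFORCE components (a sketch, not a complete training loop; the sizes and reward below are placeholders), dpnn's stochastic modules learn from a broadcast reward rather than from a gradOutput:

    require 'dpnn'

    -- a tiny stochastic policy: 10 features -> 10 Bernoulli actions
    local policy = nn.Sequential()
    policy:add(nn.Linear(10, 10))
    policy:add(nn.Sigmoid())             -- outputs become Bernoulli means
    policy:add(nn.ReinforceBernoulli())  -- samples 0/1 actions in training mode

    local x = torch.randn(4, 10)
    local actions = policy:forward(x)    -- sampled actions for a batch of 4

    local reward = torch.randn(4)        -- per-example reward (stand-in)
    policy:reinforce(reward)             -- broadcast reward to Reinforce modules
    policy:backward(x, torch.zeros(4, 10))  -- gradOutput is unused by REINFORCE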

Thursday, October 8, 2015, 9:00am PDT / 12:00pm EDT

NVIDIA Webinar - Torch7: Applied Deep Learning for Vision and Natural Language

Presenter: Nicholas Léonard, Research Engineer, Element Inc.

This webinar is targeted at machine learning enthusiasts and researchers, and covers applying deep learning techniques to image classification and language modeling, including convolutional and recurrent neural networks. The session is driven in Torch: a scientific computing platform with great toolboxes for deep learning and optimization, among others, and fast CUDA backends with multi-GPU support. Watch here. Download slides here.

Recurrent Neural Network Library

Element Inc. has open-sourced a Recurrent Neural Network (RNN) library that extends Torch7's neural network package, nn. You can use it to build simple RNNs, Long Short-Term Memory (LSTM) RNNs, bidirectional RNNs and bidirectional LSTMs. These models are attracting a lot of attention in the deep learning community because they can model sequential data like text and video.

The library provides a general framework for experimenting with different types of recurrent neural network models. It is divided into modules that can be combined and stacked to create complex recurrent models, and these modules are thoroughly unit-tested, so they behave as expected.
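
For example, a two-layer LSTM language model can be assembled by decorating ordinary nn modules with nn.Sequencer (a minimal sketch; the sizes are placeholders):

    require 'rnn'

    local vocab, embed, hidden = 1000, 50, 128

    local lm = nn.Sequential()
    lm:add(nn.LookupTable(vocab, embed))           -- word ids -> embeddings
    lm:add(nn.SplitTable(1))                       -- tensor -> table of time steps
    lm:add(nn.Sequencer(nn.LSTM(embed, hidden)))   -- first recurrent layer
    lm:add(nn.Sequencer(nn.LSTM(hidden, hidden)))  -- stacked second layer
    lm:add(nn.Sequencer(nn.Linear(hidden, vocab)))
    lm:add(nn.Sequencer(nn.LogSoftMax()))

    -- a seqlen x batchsize batch of word ids
    local input  = torch.LongTensor(5, 32):random(vocab)
    local output = lm:forward(input)  -- table of 5 outputs, each 32 x vocab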

Recurrent Models for Visual Attention

In this Torch7 post, we outline how to train a recurrent attention model using a reinforcement learning method called REINFORCE. Reproducing the original paper in Torch7, we obtained 0.85% error on the MNIST dataset, a significant improvement over the paper's reported 1.07%. The post includes some nice visualizations of the learned attention model and a detailed explanation of how to use the training and evaluation scripts.
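
To give a sense of the mechanics, here is a sketch loosely modeled on the post's location network: the model emits a mean glimpse location, nn.ReinforceNormal samples around it during training, and learning is driven by a reward broadcast through reinforce(). The sizes, standard deviation and reward below are placeholders; the post's full script also uses a variance-reducing baseline.

    require 'dpnn'

    -- locator: recurrent hidden state -> stochastic (x, y) glimpse location
    local locator = nn.Sequential()
    locator:add(nn.Linear(256, 2))         -- hidden state -> location mean
    locator:add(nn.HardTanh())             -- keep means within [-1, 1]
    locator:add(nn.ReinforceNormal(0.11))  -- sample ~ N(mean, 0.11) in training
    locator:add(nn.HardTanh())             -- clamp sampled locations to [-1, 1]

    local h   = torch.randn(8, 256)        -- batch of hidden states
    local loc = locator:forward(h)         -- 8 x 2 sampled glimpse locations

    -- once the downstream classifier is evaluated, reward correct predictions
    local reward = torch.Tensor(8):fill(1) -- stand-in reward (1 if correct)
    locator:reinforce(reward)
    locator:backward(h, torch.zeros(8, 2)) -- gradOutput unused by ReinforceNormal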

Other Contributions

Element Inc. also actively contributes to many open-source projects, including torch7, nn and dp. These contributions can take the form of a small bug fix, the creation of a new module, or a complete training script. Contributing open-source code to the Torch community is just our way of saying thanks to all the existing contributors.

The Torch community is very active, consisting of developers from Facebook, Google, Twitter, New York University and Purdue University. By working together, we accelerate our research, stimulate new ideas and have a lot of fun along the way.

Resources: