Towards Collaborative Intelligence Friendly Architectures for Deep Learning

Amir Erfan Eshratifar, Amirhossein Esmaili, Massoud Pedram
University of Southern California


Abstract

Modern mobile devices are equipped with high-performance hardware resources such as graphics processing units (GPUs), making on-device intelligent services more feasible. Recently, specialized silicon such as neural engines has even begun appearing in mobile devices. However, most mobile devices are still not capable of performing real-time inference with very deep models. Computations associated with deep models for today’s intelligent applications are therefore typically performed solely on the cloud. This cloud-only approach requires significant amounts of raw data to be uploaded to the cloud over the mobile wireless network and imposes a considerable computational and communication load on the cloud server. Recent studies have shown that the latency and energy consumption of deep neural networks in mobile applications can be notably reduced by splitting the workload between the mobile device and the cloud. In this approach, referred to as collaborative intelligence, intermediate features computed on the mobile device are offloaded to the cloud instead of the raw input data of the network, reducing the amount of data that must be sent to the cloud. In this paper, we design a new collaborative-intelligence-friendly architecture by introducing a unit, placed after a selected layer of a deep model, that further reduces the size of the feature data that must be offloaded to the cloud. This unit is referred to as the butterfly unit. The butterfly unit consists of a reduction unit and a restoration unit. The output of the reduction unit is offloaded to the cloud server, where the computations associated with the restoration unit and the rest of the inference network are performed. Both the reduction and restoration units use a convolutional layer as their main component. The inference outcomes are sent back to the mobile device.
The new network architecture, including the introduced butterfly unit after a selected layer of the underlying deep model, is trained end-to-end. Across different wireless networks, our proposed method achieves, on average, a 53× improvement in end-to-end latency and a 68× improvement in mobile energy consumption compared with the status quo cloud-only approach for ResNet-50, while the accuracy loss is less than 2%.
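The butterfly unit described above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the authors' implementation: the channel counts (256 down to 16), the 1×1 kernels, and the class names are illustrative assumptions, and the split point (after a 256-channel layer) is chosen only for the example.

```python
# Hedged sketch of a butterfly unit: a reduction convolution that shrinks the
# channel dimension of an intermediate feature map on the mobile device before
# upload, and a restoration convolution on the cloud that expands it back so
# the rest of the network can run unchanged. All sizes are assumptions.
import torch
import torch.nn as nn


class ReductionUnit(nn.Module):
    """Runs on the mobile device: compresses the feature tensor."""

    def __init__(self, in_channels: int, bottleneck_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, bottleneck_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)


class RestorationUnit(nn.Module):
    """Runs on the cloud server: restores the original channel count."""

    def __init__(self, bottleneck_channels: int, out_channels: int):
        super().__init__()
        self.conv = nn.Conv2d(bottleneck_channels, out_channels, kernel_size=1)

    def forward(self, x):
        return self.conv(x)


# Example: insert the unit after a layer whose output has 256 channels.
reduce_unit = ReductionUnit(256, 16)      # mobile side
restore_unit = RestorationUnit(16, 256)   # cloud side

features = torch.randn(1, 256, 28, 28)    # intermediate activations
compressed = reduce_unit(features)        # tensor offloaded: 16x fewer channels
restored = restore_unit(compressed)       # fed to the rest of the network
```

Because both units are ordinary convolutional layers, the full network with the butterfly unit spliced in can be trained end-to-end with standard backpropagation, as the abstract describes; at deployment time the reduction unit's output is what crosses the wireless link.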