Machine Learning System Research after TensorFlow

Link to full articles

A paper list about machine learning systems and infrastructure, published after TensorFlow. (Will be updated.)

The papers are mainly from systems conferences, including OSDI, SOSP, NSDI, EuroSys, ATC, and SoCC. Since machine learning systems/infrastructure is a cross-domain research area, some papers may also be published at ASPLOS, ISCA, NeurIPS… This list focuses on system design for machine learning; hardware design and algorithm design papers are excluded.

Please feel free to contact me with any suggestions or for discussion. I am more than willing to chat with you. Email: xiaowencong[ at ]gmail[ dot ]com.

Deep learning

Training

Optimizing CNNs on Multicores for Scalability, Performance and Goodput (ASPLOS’17)

Poseidon: An Efficient Communication Architecture for Distributed Deep Learning on GPU Clusters (ATC’17)

Improving the Expressiveness of Deep Learning Frameworks with Recursion (EUROSYS’18)

Gist: Efficient Data Encoding for Deep Neural Network Training (ISCA’18)

Ray: A Distributed Framework for Emerging AI Applications (OSDI’18)

Parameter Hub: a Rack-Scale Parameter Server for Distributed Deep Neural Network Training (SoCC’18)

BML: A High-performance, Low-cost Gradient Synchronization Algorithm for DML Training (NeurIPS’18)

JANUS: Fast and Flexible Deep Learning via Symbolic Graph Execution of Imperative Programs (NSDI’19)

Is RPC Appropriate? Distributed Deep Learning on RDMA Can Be Faster (EUROSYS’19)

Inference

Clipper: A Low-Latency Online Prediction Serving System (NSDI’17)

Neurosurgeon: Collaborative Intelligence Between the Cloud and Mobile Edge (ASPLOS’17)

Low Latency RNN Inference with Cellular Batching (EUROSYS’18)

DeepCPU: Serving RNN-based Deep Learning Models 10x Faster (ATC’18)

Dynamic networks

Dynamic Control Flow in Large-Scale Machine Learning (EUROSYS’18)

Cavs: An Efficient Runtime System for Dynamic Neural Networks (ATC’18)

Compiler

TVM: An Automated End-to-End Optimizing Compiler for Deep Learning (OSDI’18)

Tangent: Automatic Differentiation Using Source-Code Transformation for Dynamically Typed Array Programming (NeurIPS’18)

Debugging

DeepXplore: Automated Whitebox Testing of Deep Learning Systems (SOSP’17)

Traditional machine learning

Gaia: Geo-Distributed Machine Learning Approaching LAN Speeds (NSDI’17)

Tux²: Distributed Graph Computation for Machine Learning (NSDI’17)

SaberLDA: Sparsity-Aware Learning of Topic Models on GPUs (ASPLOS’17)

Proteus: agile ML elasticity through tiered reliability in dynamic resource markets (EUROSYS’17)

Litz: Elastic Framework for High-Performance Distributed Machine Learning (ATC’18)

PRETZEL: Opening the Black Box of Machine Learning Prediction Serving Systems (OSDI’18)

Orpheus: Efficient Distributed Machine Learning via System and Algorithm Co-design (SoCC’18)

Continuum: A Platform for Cost-Aware, Low-Latency Continual Learning (SoCC’18)

BLAS-on-flash: An Efficient Alternative for Large Scale ML Training and Inference? (NSDI’19)

Cluster management

SLAQ: Quality-Driven Scheduling for Distributed Machine Learning (SoCC’17)

Optimus: An Efficient Dynamic Resource Scheduler for Deep Learning Clusters (EUROSYS’18)

Model Governance: Reducing the Anarchy of Production ML (ATC’18)

Gandiva: Introspective Cluster Scheduling for Deep Learning (OSDI’18)

Tiresias: A GPU Cluster Manager for Distributed Deep Learning (NSDI’19)