C A M E L
THE FIRST INTERNATIONAL WORKSHOP

COMPUTER ARCHITECTURE FOR MACHINE LEARNING

Portland, OR, USA, June 14, 2015

In conjunction with the 42nd International Symposium on Computer Architecture (ISCA-42)

Call for Papers

The first workshop on “Computer Architecture for Machine Learning” (CAMEL) aims to explore novel ideas in computer design and HW acceleration for machine learning.

Machine learning is a foundation for many applications, such as speech recognition, natural language processing, automatic translation, computer vision, search and recommendation engines, and robotics. It is used for complex computing tasks where explicitly programmed, rule-based algorithms are either infeasible or very complex. ML algorithms make predictions or decisions based on large data sets rather than following explicitly programmed instructions. These algorithms are very compute- and memory-intensive: the data is large, and sometimes represented as sparse matrices or graphs. Building a general platform for ML is not trivial, since there is a huge variety of algorithms: support vector machines, decision trees, deep learning, Bayesian networks, hidden Markov models, etc. Machine learning algorithms bring new challenges to computer architecture, such as:

  • Can we build hardware that significantly accelerates a large set of ML algorithms while remaining general enough?
  • What are the underlying basic blocks?

CAMEL invites research papers and talks on topics related to:

  • HW acceleration for machine learning, deep learning and neural networks
  • HW acceleration for dense and sparse linear algebra, and large scale graph processing
  • HW for machine vision, speech recognition and natural language processing
  • ML workloads and benchmarks

The workshop will include both invited talks and peer-reviewed papers. Peer-reviewed papers will not be published in proceedings; therefore, submitting to CAMEL will not preclude future publication opportunities. Papers and presentation slides will be made available online with the authors’ approval.

Submission Guidelines

Submit a 2-page abstract in PDF format to the organizers (boris.ginzburg@intel.com, ronny.ronen@intel.com, and temam@google.com) by March 31, 2015. Please follow the formatting guidelines of the main conference: http://cag.engr.uconn.edu/isca2014/submission.html. For additional information regarding paper submissions, please contact the organizers.

Important Dates

  • Abstract submission: March 31, 2015
  • Author notification: April 30, 2015
  • Final camera-ready paper: May 31, 2015
  • Workshop: June 14, 2015

Workshop Organizers

Boris Ginzburg, Intel: boris.ginzburg@intel.com
Ronny Ronen, Intel: ronny.ronen@intel.com
Olivier Temam, Google: temam@google.com
 

Preliminary Program

Time        | Speaker                                                                          | Title
09:00-09:05 | Boris Ginzburg, Intel                                                            | Opening Remarks

Hardware Acceleration for Deep Learning
09:05-09:30 | Amir Khosrowshahi, Nervana Systems                                               | Computer Architectures for Deep Learning
09:30-10:00 | Eric Chung, Microsoft Research                                                   | Accelerating Deep Convolutional Neural Networks Using Specialized Hardware in the Datacenter
10:00-10:30 | Vinayak Gokhale, Purdue University                                               | A Hardware Accelerator for Convolutional Neural Networks
10:30-11:00 | Srimat T. Chakradhar, NEC Labs                                                   | TBA
11:00-11:30 | Coffee

Machine Learning Workloads Analysis
11:30-12:00 | Jonathan Pearce, Intel Labs                                                      | You Have No (Predictive) Power Here, SPEC!
12:00-12:30 | Scott Beamer, UC Berkeley                                                        | Graph Processing Bottlenecks
12:30-13:30 | Lunch

Neuromorphic Engineering
13:30-14:00 | James E. Smith, Univ. of Wisconsin-Madison                                       | Biologically Plausible Spiking Neural Networks
14:00-14:30 | Giacomo Indiveri, Institute of Neuroinformatics, Univ. of Zurich and ETH Zurich  | Neuromorphic Circuits for Building Autonomous Cognitive Systems
14:30-15:00 | Yiran Chen, Univ. of Pittsburgh                                                  | Hardware Acceleration for Neuromorphic Computing: An Evolving View
15:00-15:30 | Mikko Lipasti, Univ. of Wisconsin-Madison                                        | Mimicking the Self-Organizing Properties of the Visual Cortex
15:30-16:00 | Coffee

Hardware Acceleration for Machine Learning
16:00-16:30 | Shai Fine, Intel                                                                 | Machine Learning Building Blocks
16:30-17:00 | Chunkun Bo, University of Virginia                                               | String Kernel Testing Acceleration Using the Micron Automata Processor
17:00-17:30 | Ran Ginosar, Technion                                                            | Accelerators for Machine Learning of Big Data