
ParLearning 2015

The 4th International Workshop on Parallel and Distributed Computing for Large Scale Machine Learning and Big Data Analytics

May 29, 2015

Hyderabad, India

In Conjunction with IPDPS 2015


PROGRAM

8:30-8:40AM Opening remarks
8:40-9:30AM Keynote-01 David Bader  Massive-Scale Analytics [SLIDES]
9:30-10:00AM ParLearning-01 Tao Luo; Yin Liao; Yurong Chen; Jianguo Li; Victor Lee LFRTrainer: Large-scale Face Recognition Training System [SLIDES]
10:00-10:30AM Coffee Break
10:30-10:40AM Award Session
10:40-11:30AM Keynote-02 Yihua Huang Octopus: Unified Programming Model and Platform for Big Data Machine Learning and Data Analytics [SLIDES]
11:30-12:00PM ParLearning-04 Charith D Wickramaarachchi; Charalampos Chelmis; Viktor K. Prasanna Empowering Fast Incremental Computation over Large Scale Dynamic Graph [SLIDES]
12:00-1:10PM Lunch
1:10-2:00PM Keynote-03 Ananth Kalyanaraman Problems, Challenges and Opportunities in Exploring the “Dark Matter” of Life Sciences: The Microbiome [SLIDES]
2:00-2:30PM ParLearning-05 Sai Rajeshwar; Sudheer Kumar; Vineeth Balasubramanian; Ravi Sankar Scaling up the training of Deep CNNs for Human Action Recognition
2:30-3:00PM ParLearning-02 Yusuke Nishioka; Kenjiro Taura Scalable Task-Parallel SGD on Matrix Factorization in Multicore Architectures
3:00-3:30PM Coffee Break
3:30-4:00PM ParLearning-06 Ravikant Dindokar; Neel Choudhury; Yogesh Simmhan Analysis of Subgraph-centric Distributed Shortest Path Algorithm [SLIDES]
4:00-4:30PM ParLearning-03 Bing Lin; Wenzhong Guo; Guolong Chen; Naixue Xiong; Rongrong Li Cost-Driven Scheduling for Deadline-Constrained Workflow on Multi-Clouds
4:30-4:50PM Discussion & Closing


CALL FOR PAPERS

Scaling up machine learning (ML), data mining (DM), and reasoning algorithms from artificial intelligence (AI) to massive datasets is a major technical challenge in the era of "Big Data". The past ten years have seen the rise of multi-core and GPU-based computing. In distributed computing, frameworks such as Mahout, GraphLab, and Spark continue to appear, facilitating the scaling up of ML/DM/AI algorithms through higher levels of abstraction. We invite novel work that advances these three fields through the development of scalable algorithms or computing frameworks. Ideal submissions can be characterized as scaling up X on Y, where potential choices for X and Y are listed below (a brief illustrative sketch follows the lists).

Scaling up

  • recommender systems
  • gradient descent algorithms
  • deep learning
  • sampling/sketching techniques
  • clustering (agglomerative techniques, graph clustering, clustering heterogeneous data)
  • classification (SVM and other classifiers)
  • SVD
  • probabilistic inference (Bayesian networks)
  • logical reasoning
  • graph algorithms and graph mining

On

  • Parallel architectures/frameworks (OpenMP, OpenCL, Intel TBB)
  • Distributed systems/frameworks (GraphLab, Hadoop, MPI, Spark, etc.)
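
For illustration only (not prescribed by the workshop), the following minimal Python sketch shows one "X on Y" pairing in miniature: data-parallel gradient descent for least-squares regression, scaled out across processes with the standard multiprocessing module. The function names and parameters here are hypothetical; actual submissions would typically target the architectures and frameworks listed above (e.g., Spark, MPI, OpenMP) at far larger scale.

    import numpy as np
    from multiprocessing import Pool

    def local_gradient(args):
        # Least-squares gradient on one data shard: X^T (X w - y) / n_shard
        w, X_shard, y_shard = args
        return X_shard.T @ (X_shard @ w - y_shard) / len(y_shard)

    def parallel_gd(X, y, workers=4, steps=200, lr=0.1):
        # Synchronous data-parallel gradient descent: each worker computes a
        # partial gradient on its shard; the driver averages them and updates w.
        X_shards = np.array_split(X, workers)
        y_shards = np.array_split(y, workers)
        w = np.zeros(X.shape[1])
        with Pool(workers) as pool:
            for _ in range(steps):
                grads = pool.map(local_gradient,
                                 [(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)])
                w -= lr * np.mean(grads, axis=0)
        return w

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(10000, 5))
        y = X @ np.arange(1.0, 6.0) + 0.01 * rng.normal(size=10000)
        print(parallel_gd(X, y))  # recovers weights close to [1, 2, 3, 4, 5]

The same shard-and-average pattern carries over to the distributed frameworks above, where shards become partitions (e.g., RDDs in Spark or ranks in MPI) and the averaging step becomes a reduce or all-reduce.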

ORGANIZATION

  • Sutanay Choudhury, Pacific Northwest National Laboratory, USA
  • Arindam Pal, TCS Innovation Labs, India
  • Anand Panangadan, University of Southern California, USA
  • Yinglong Xia, IBM Research, USA

PROGRAM COMMITTEE

  • Virendra C. Bhavsar, University of New Brunswick, Canada
  • Danny Bickson, GraphLab Inc., USA
  • Peter Boncz, Vrije Universiteit, Netherlands
  • Vivek Datla, Pacific Northwest National Laboratory, USA
  • Zhihui Du, Tsinghua University, China
  • Dinesh Garg, IBM India Research Laboratory, India
  • Qirong Ho, Infocomm Research, A*STAR, Singapore
  • Yihua Huang, Nanjing University, China
  • Renato Porfirio Ishii, Federal University of Mato Grosso do Sul (UFMS), Brazil
  • Ananth Kalyanaraman, Washington State University, USA
  • Dionysis Logothetis, Telefonica, Spain
  • Debnath Mukherjee, TCS Innovation Labs, India
  • Huansheng Ning, Beihang University, China
  • Gautam Shroff, TCS Innovation Labs, India
  • Aniruddha Sinha, TCS Research, India
  • Neal Xiong, Georgia State University, USA
  • Jianting Zhang, City College of New York, USA
  • Wei Zhang, IBM Research, USA

PUBLICATIONS CHAIR

  • Naixue Xiong, Colorado Technical University

IMPORTANT DATES

  • Paper submission: January 25, 2015 AoE (extended from January 18, 2015 AoE)
  • Notification: February 23, 2015 (extended from February 14, 2015)
  • Camera Ready:  February 28, 2015

ACCEPTED PAPERS

February 28, 2015: At least one author of each paper is expected to register for the conference (details at http://www.ipdps.org/).
February 28, 2015: The final manuscript must be uploaded, including the paper ID shown below (in the form ParLearning-0n). Camera-ready files are submitted directly to IEEE, not to EDAS. Details at http://www.ipdps.org/ipdps2015/2015_author_resources.html.

  • ParLearning-01 LFRTrainer: Large-scale Face Recognition Training System. Tao Luo; Yin Liao; Yurong Chen; Jianguo Li; Victor Lee
  • ParLearning-02 Scalable Task-Parallel SGD on Matrix Factorization in Multicore Architectures. Yusuke Nishioka; Kenjiro Taura
  • ParLearning-03 Cost-Driven Scheduling for Deadline-Constrained Workflow on Multi-Clouds. Bing Lin; Wenzhong Guo; Guolong Chen; Naixue Xiong; Rongrong Li
  • ParLearning-04 Empowering Fast Incremental Computation over Large Scale Dynamic Graph. Charith D Wickramaarachchi; Charalampos Chelmis; Viktor K. Prasanna
  • ParLearning-05 Scaling up the training of Deep CNNs for Human Action Recognition. Sai Rajeshwar; Sudheer Kumar; Vineeth Balasubramanian; Ravi Sankar
  • ParLearning-06 Analysis of Subgraph-centric Distributed Shortest Path Algorithm. Ravikant Dindokar; Neel Choudhury; Yogesh Simmhan

BEST PAPER AWARD

The best paper award, sponsored by Pacific Northwest National Laboratory, goes to "Scalable Task-Parallel SGD on Matrix Factorization in Multicore Architectures" by Yusuke Nishioka and Kenjiro Taura. Congratulations!

PAPER GUIDELINES

Submitted manuscripts may not exceed 6 single-spaced double-column pages using a 10-point font on 8.5x11 inch pages (IEEE conference style), including figures, tables, and references. Format requirements are posted on the IPDPS web page (www.ipdps.org).

All submissions must be uploaded electronically at https://edas.info/newPaper.php?c=19417

A best paper award, sponsored by Pacific Northwest National Laboratory, will be announced at the workshop.

Accepted papers with sufficient extensions will be recommended for publication in a journal (to be announced), subject to review by the journal editorial board.

TRAVEL AWARDS

Students with accepted papers may apply for a travel award. Please find details at www.ipdps.org.

PREVIOUS PARLEARNING WORKSHOPS

  1. 2012
  2. 2013
  3. 2014
