|8:40-9:30AM||Keynote-01||David Bader||Massive-Scale Analytics [SLIDES]|
|9:30-10:00AM||ParLearning-01||Tao Luo; Yin Liao; Yurong Chen; Jianguo Li; Victor Lee||LFRTrainer: Large-scale Face Recognition Training System [SLIDES]|
|10:40-11:30AM||Keynote-02||Yihua Huang||Octopus: Unified Programming Model and Platform for Big Data Machine Learning and Data Analytics [SLIDES]|
|11:30-12:00PM||ParLearning-04||Charith D Wickramaarachchi; Charalampos Chelmis; Viktor K. Prasanna||Empowering Fast Incremental Computation over Large Scale Dynamic Graph [SLIDES]|
|1:10-2:00PM||Keynote-03||Ananth Kalyanaraman||Problems, Challenges and Opportunities in Exploring the “Dark Matter” of Life Sciences: The Microbiome [SLIDES]|
|2:00-2:30PM||ParLearning-05||Sai Rajeshwar; Sudheer Kumar; Vineeth Balasubramanian; Ravi Sankar||Scaling up the training of Deep CNNs for Human Action Recognition|
|2:30-3:00PM||ParLearning-02||Yusuke Nishioka; Kenjiro Taura||Scalable Task-Parallel SGD on Matrix Factorization in Multicore Architectures|
|3:30-4:00PM||ParLearning-06||Ravikant Dindokar; Neel Choudhury; Yogesh Simmhan||Analysis of Subgraph-centric Distributed Shortest Path Algorithm [SLIDES]|
|4:00-4:30PM||ParLearning-03||Bing Lin; Wenzhong Guo; Guolong Chen; Naixue Xiong; Rongrong Li||Cost-Driven Scheduling for Deadline-Constrained workflow on Multi-Clouds|
|4:30-4:50PM||Discussion & Closing|
CALL FOR PAPERS
Scaling up machine learning (ML), data mining (DM) and reasoning algorithms from Artificial Intelligence (AI) for massive datasets is a major technical challenge in the era of "Big Data". The past ten years have seen the rise of multi-core and GPU-based computing. In distributed computing, frameworks such as Mahout, GraphLab and Spark continue to appear, facilitating the scaling up of ML/DM/AI algorithms through higher levels of abstraction. We invite novel work that advances the fields of ML/DM/AI through the development of scalable algorithms or computing frameworks. Ideal submissions would be characterized as scaling up X on Y, where potential choices for X and Y are provided below.
February 28, 2015: At least one author of each paper is expected to register for the conference (details at http://www.ipdps.org/)
February 28, 2015: Final manuscripts must be uploaded, including the paper ID shown below (in the form ParLearning-0n). Camera-ready files are submitted directly to IEEE, not to EDAS. Details at http://www.ipdps.org/ipdps2015/2015_author_resources.html.
The best paper award, sponsored by Pacific Northwest National Laboratory, goes to Scalable Task-Parallel SGD on Matrix Factorization in Multicore Architectures by Yusuke Nishioka and Kenjiro Taura. Congratulations!
Submitted manuscripts may not exceed 6 single-spaced, double-column pages in 10-point font on 8.5x11-inch pages (IEEE conference style), including figures, tables, and references. Format requirements are posted on the IPDPS web page (www.ipdps.org).
All submissions must be uploaded electronically at https://edas.info/newPaper.php?c=19417
A best paper award, sponsored by Pacific Northwest National Laboratory, will be announced at the workshop.
Accepted papers with sufficient extensions will be recommended for publication in a journal (to be announced), subject to review by the journal's editorial board.
Students with accepted papers are eligible to apply for a travel award. Please find details at www.ipdps.org.