Tremendous advances in our ability to acquire, store and process data, as well as the pervasive use of computers in general, have resulted in a spectacular increase in the amount of data being collected. This availability of high-quality data has led to major advances in both science and industry. In general, society is becoming increasingly data driven, and this trend is likely to continue in the coming years.
The increasing number of applications processing massive data means that, in general, the focus on algorithm efficiency is increasing. At the same time, the large size of the data, and/or the small size of many modern computing devices, means that issues such as memory hierarchy architecture often play a crucial role in algorithm efficiency. Thus the availability of massive data also presents many new challenges for algorithm designers.
The aim of the workshop on massive data algorithmics is to provide a forum for researchers from both academia and industry interested in algorithms for massive dataset problems. The scope of the workshop includes both fundamental algorithmic problems involving massive data and algorithms for more specialized problems in, e.g., graphics, databases, statistics and bioinformatics. Topics of interest include, but are not limited to:
- I/O-efficient algorithms
- Cache-oblivious algorithms
- Memory hierarchy efficient algorithms
- Streaming algorithms
- Sublinear algorithms
- Parallel and distributed algorithms for massive data problems
- Engineering massive data algorithms
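As a small illustration of the streaming model in the list above (not part of the call itself), here is a minimal sketch of reservoir sampling, a classic streaming algorithm that maintains a uniform random sample of k items from a stream of unknown length using only O(k) memory:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Keep a uniform random sample of k items from a stream of unknown
    length, using O(k) memory (Vitter's Algorithm R)."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Item i+1 replaces a random reservoir slot with probability k/(i+1).
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Each item is inspected once and never revisited, which is the defining constraint of the streaming setting.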
We invite submissions of extended abstracts of original research.
The submission should begin with the title of the paper, each author's name, affiliation, and e-mail address, followed by a succinct statement of the problems considered, the main results, an explanation of their significance, and a comparison to past research, all of which should be easily understood by non-specialists. More technical developments follow as appropriate. Use 11-point or larger font in single column format, with one-inch or wider margins all around. You may include a clearly marked appendix, which will be read at the discretion of the committee.
The submission, excluding title page, bibliography and appendix, must not exceed 10 pages (authors should feel free to send submissions that are significantly shorter than 10 pages).
Extended abstracts should be submitted through the EasyChair website by July 14th. The submission page can be found here. Authors will be notified of acceptance by July 24th. There will be no formal proceedings, so work presented at the workshop can also be (or have been) presented at other conferences. An informal collection of the extended abstracts will be provided to the workshop participants. By submitting a paper, the authors acknowledge that, in case of acceptance, at least one of the authors must register for ALGO 2015 or MASSIVE 2015, attend the conference, and present the paper.
Program committee:
Peyman Afshani (Aarhus University)
Deepak Ajwani (Bell Labs)
Alexandr Andoni (Simons Institute, Berkeley)
Michael Bender (Stony Brook)
Karl Bringmann (ETH Zürich)
Raphael Clifford (Bristol University)
Erik Demaine (MIT)
John Iacono (NYU)
Piotr Indyk (MIT)
Giuseppe F. Italiano (Università di Roma)
Moshe Lewenstein (Bar Ilan University)
Ulrich Meyer (Goethe University)
Jelani Nelson (Harvard University)
Huy L. Nguyen (Simons Institute, Berkeley)
Jeff M. Phillips (University of Utah)
Nodari Sitchinava (University of Hawaii)
David Woodruff (IBM Almaden)
Ke Yi (Hong Kong University of Science and Technology)
Norbert Zeh (Dalhousie University)
Qin Zhang (Indiana University)
Chair: Kasper Green Larsen (MADALGO, Aarhus University)
Organizing committee:
Lars Arge (Aarhus and MADALGO)
Gerth Stølting Brodal (Aarhus and MADALGO)
Peyman Afshani (Aarhus and MADALGO)
Trine Ji Holmgaard (Aarhus and MADALGO)
The workshop will take place as part of ALGO 2015; further information will be available soon. All researchers and industry practitioners interested in massive data algorithmics are encouraged to attend the workshop.