Center for Massive Data Algorithmics is a response to the inadequacy of traditional algorithms theory when it comes to processing massive datasets on diverse computation platforms. The ambitious goal is to become a world-leading center for algorithms that handle massive data, where massive is interpreted broadly to cover computations in which the data is large compared to the resources of the computational device.

The high-level objectives of the center are to:

  1. Significantly advance the fundamental algorithms knowledge in the area of efficient processing of massive datasets.
  2. Train the next generation of researchers in a truly world-leading and international environment.
  3. Be a catalyst for multidisciplinary collaborative research on massive dataset issues in commercial and scientific applications.

To meet these objectives the center builds on:

  • Research strength of center core researchers
  • Extensive international research collaboration
  • Multidisciplinary and industry collaboration
  • A vibrant international environment at the center site
  • Focus on three (distinct but closely related) core research areas:

    1. I/O-efficient algorithms for efficient processing of massive datasets that reside on slow
      (external) mass storage devices
    2. Cache-oblivious algorithms for efficient processing of large datasets on devices with
      complicated (possibly even unknown) memory hierarchies
    3. Streaming algorithms for efficient processing of data that is so massive that reading
      through it more than once is infeasible, or for processing data that naturally arrives
      continually in a streaming way

The three core research focus areas are all relatively young research areas within the broader algorithms research area, formed as a response to some of the inadequacies of traditional theory.
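As a small illustration of the streaming model, the sketch below shows reservoir sampling, a classic one-pass technique that maintains a uniform random sample of a stream using memory proportional to the sample size only. The function name and interface are illustrative, not part of the center's own work.

```python
import random

def reservoir_sample(stream, k):
    """Maintain a uniform random sample of k items from a stream,
    using O(k) memory and a single pass over the data.

    Illustrative sketch of the streaming model: the stream may be
    far too large to store or to read more than once.
    """
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            sample.append(item)
        else:
            # Keep item i with probability k / (i + 1) by replacing
            # a uniformly chosen slot in the reservoir.
            j = random.randint(0, i)
            if j < k:
                sample[j] = item
    return sample

# Example: sample 5 items from a stream of a million integers
# without ever holding the whole stream in memory at once.
picked = reservoir_sample(range(1_000_000), 5)
```

Each item in the stream ends up in the final sample with probability exactly k/n, even though n is unknown while the stream is being read.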