
2018.12.18

Date | Fri 22 Feb
Time | 15:15 — 16:00
Location | Building 5335, room 016 (Peter Bøgh Auditorium)

What would you rather compute with paper and pencil: 3'593'103 + 1'502'348 or 3'593'103 x 1'502'348? Most sane people would prefer adding the two numbers. But why? Intuitively, addition is easier than multiplication. At least, if we consider the classic elementary school algorithms for adding and multiplying two n-digit numbers, then addition takes on the order of n operations, whereas multiplication takes around n x n operations. But is this a proof that multiplication is harder? Or does it only mean that the thousands-of-years-old elementary school multiplication algorithm is a badly designed, sub-optimal one? In 1960, Kolmogorov conjectured that the quadratic-time elementary school algorithm for multiplication is optimal, and he arranged a seminar at Moscow State University with the hope of proving this conjecture. Within a week, the student Karatsuba found a new multiplication algorithm running in time n^1.585, thereby refuting the conjecture. This was followed by a sequence of ever-faster algorithms, culminating with the Fast Fourier Transform (FFT), which can be used to multiply two n-digit numbers in around n log n time, i.e. almost as quickly as addition. The FFT was included in IEEE's Top 10 Algorithms of the 20th century and has had a huge impact on our world.
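To make Karatsuba's trick concrete, here is a minimal sketch of the algorithm in Python. The key observation is that the two "cross" products ad and bc can be recovered from a single extra multiplication, (a+b)(c+d) - ac - bd, so each n-digit multiplication reduces to three (rather than four) multiplications of roughly n/2-digit numbers, giving the n^1.585 running time. (This is an illustrative implementation, not the one Karatsuba published; the splitting by powers of ten and the base case are choices made here for simplicity.)

```python
def karatsuba(x, y):
    """Multiply non-negative integers x and y with Karatsuba's method.

    The recurrence T(n) = 3*T(n/2) + O(n) solves to O(n^log2(3)),
    i.e. roughly O(n^1.585), beating the schoolbook O(n^2).
    """
    if x < 10 or y < 10:               # base case: a single-digit operand
        return x * y
    n = max(len(str(x)), len(str(y)))  # number of digits of the larger operand
    base = 10 ** (n // 2)
    a, b = divmod(x, base)             # split x = a*base + b
    c, d = divmod(y, base)             # split y = c*base + d
    ac = karatsuba(a, c)
    bd = karatsuba(b, d)
    # One extra multiplication replaces the two cross terms:
    # (a + b)(c + d) - ac - bd = ad + bc
    cross = karatsuba(a + b, c + d) - ac - bd
    return ac * base * base + cross * base + bd

print(karatsuba(3593103, 1502348))     # same result as 3593103 * 1502348
```

Running it on the two numbers from the abstract reproduces the product computed by Python's built-in multiplication; the recursion performs three recursive multiplications per level instead of the four a naive divide-and-conquer scheme would use.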

The above story is just one example where attempting to prove the optimality of a known algorithm led researchers to design even faster algorithms. Alternating between designing new algorithms and proving optimality (lower bounds) for known algorithms has guided my research since I started my Ph.D. Approaching computational problems from both angles often leads to a deeper understanding of the problem at hand: a failed attempt at proving a lower bound often reveals a line of attack for designing a new algorithm, and vice versa. In this talk, I will tell the story of my personal research journey in algorithms and computational complexity, along the way highlighting both new algorithms and proofs of optimality that I have developed together with co-authors.

On May 1, 2018, Kasper Green Larsen from the Algorithms and Data Structures group was appointed Associate Professor. Kasper's research interests span many areas of theoretical computer science; in particular, he enjoys working on data structures, range searching, lower bounds, dimensionality reduction, discrepancy theory and streaming algorithms, to name a few.

Kasper received his PhD degree from Aarhus University in 2013, under the supervision of Professor Lars Arge. Kasper is a member of the Young Academy under the Royal Danish Academy of Sciences and Letters.