Many software engineers who have been in the industry for a while seem to view asymptotic analysis (“big-O”) as having limited relevance to their actual work. Yet it is still heavily emphasized in academic computer science programs, forming a major part of standard algorithms and data structures courses. Is there still value in asymptotic analysis, or would it be better to move away from it going forward?
In my own journey through computer science, software engineering, and tech, I went through three stages with this topic.
First, I started off not caring much about performance at all. I was more focused on figuring out the space of all possible things I could do with my newfound programming knowledge, and computers were fast enough that optimizing algorithms would have been premature and unnecessary.
The second stage came after I had learned enough programming and seen the kinds of things I could build, and that was when I got into algorithms and data structures. All the academic materials on these subjects emphasized asymptotic analysis, and I came to see it as a particularly elegant way to evaluate runtimes independently of any specific physical environment. This made reasoning about runtime (or about other resources like memory or thread count) much more amenable to theoretical study, allowing mathematical tools to be brought to bear on the subject and providing a standardized language and framework for comparing algorithms.
The third stage was when I started to reject asymptotic analysis in favor of the much larger superset of performance engineering. Asymptotic analysis ignores constant factors and other details that matter a great deal for the actual runtime of a program, whereas performance engineering is closer to the real problems engineers face and therefore much more relevant to their work. It includes many techniques and considerations that asymptotic analysis dismisses (since they don’t reduce the big-O runtime) but that are nevertheless important in real-world programs.
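To illustrate the kind of gap I mean (a toy sketch of my own, nothing more): two pieces of code can share the same O(n) complexity and still differ substantially in wall-clock time purely because of constant factors.

```python
import timeit

# Both functions sum n integers, so both are O(n).
# The per-element constant differs hugely in practice.
n = 1_000_000
data = list(range(n))

def python_loop(xs):
    total = 0
    for x in xs:       # interpreted loop: large per-element cost
        total += x
    return total

def builtin_sum(xs):
    return sum(xs)     # tight loop in C inside the interpreter: small per-element cost

print("loop:   ", timeit.timeit(lambda: python_loop(data), number=10))
print("builtin:", timeit.timeit(lambda: builtin_sum(data), number=10))
```

Big-O treats these two as equivalent, yet the difference between them is exactly the kind of thing performance engineering cares about.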
But if this is the case, then why do academic computer science programs still emphasize it so heavily? And is there really still value in that, or should they evolve away from it?
I would love to hear your thoughts on this. Personally, I think it is possible to study algorithms and data structures with less emphasis on asymptotic analysis while still working theoretically (within a somewhat idealized mathematical framework), e.g. by counting the number of “basic steps” an algorithm takes and not throwing away the constant factors. I think that even in a theoretical computer science course this approach should be preferred over continuing to over-rely on asymptotic analysis.
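For example (just a rough sketch of what I have in mind, with “one comparison” chosen as the basic step), an algorithm can be instrumented to report an exact step count, keeping the constant instead of collapsing everything into O(n²):

```python
import random

def insertion_sort_with_count(xs):
    """Sort a copy of xs, counting comparisons as the 'basic step'."""
    a = list(xs)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0:
            steps += 1             # count one comparison
            if a[j] > key:
                a[j + 1] = a[j]    # shift the larger element right
                j -= 1
            else:
                break
        a[j + 1] = key
    return a, steps

for n in (100, 200, 400):
    xs = [random.random() for _ in range(n)]
    _, steps = insertion_sort_with_count(xs)
    # On random input the count comes out near n^2 / 4 — a statement
    # that keeps the constant, rather than just "O(n^2)".
    print(n, steps, round(n * n / 4))
```

That still lives entirely inside an idealized model (no caches, no per-instruction costs), but it preserves more information than the bare asymptotic class does.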
