Many software engineers with industry experience seem to view asymptotic analysis (“big-O”) as having limited relevance to their day-to-day work. Yet it remains heavily emphasized in academic computer science programs, forming a major part of standard algorithms and data structures courses. Is there still value in asymptotic analysis, or would we be better off moving away from it?
Category: Tech
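To make the question concrete, here is a small, hypothetical sketch (the function names are mine) of the kind of comparison asymptotic analysis is built for, and of what it leaves out, namely constants, memory behavior, and the input sizes actually seen in practice:

```python
# Hypothetical illustration: two ways to check a list for duplicates.
# Asymptotic analysis distinguishes them (O(n^2) vs. O(n) expected time),
# but says nothing about constants, cache behavior, or typical values of n.

def has_duplicates_quadratic(items):
    """Compare every pair: O(n^2) time, O(1) extra space."""
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_linear(items):
    """Track seen values in a set: O(n) expected time, O(n) extra space."""
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False
```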
In this project, we explore how far the core properties of ACID database transactions (atomicity, consistency, isolation, and durability) can be generalized to arbitrary code and computer operations. The idea is to develop a framework that provides these generalized capabilities.
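To give a flavor of what “generalized atomicity” might mean outside a database, here is a rough, hypothetical sketch (not the framework described in the post, which is not specified here) of an atomic block over an ordinary Python dictionary:

```python
from contextlib import contextmanager
import copy

@contextmanager
def atomic(state: dict):
    """Hypothetical sketch: run a block of code 'atomically' over a dict.

    If the block raises, restore the dict from a snapshot, so the caller
    never observes a partially applied update.
    """
    snapshot = copy.deepcopy(state)
    try:
        yield state
    except Exception:
        state.clear()
        state.update(snapshot)
        raise

# Usage: either both updates apply, or neither does.
accounts = {"alice": 100, "bob": 50}
try:
    with atomic(accounts):
        accounts["alice"] -= 70
        accounts["bob"] += 70
        raise RuntimeError("simulated failure mid-transfer")
except RuntimeError:
    pass
assert accounts == {"alice": 100, "bob": 50}
```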
Real-World Systems Aren’t Mathematical
When I first got into computer science, I came to it from a math background. Back then, I thought of computer science as essentially a way to “run math physically.” While in many respects this is true, and a good way to capture the essence of the field, I’ve learned over time that there are caveats to thinking of real-world systems, like those built with code, as mathematical. I’ll discuss some of these caveats now.
Ideas, Experiences, and Differentiation
Back in 2013 and 2014, I started what I call my “Ideas Doc,” in which I wrote down ideas for new tech products that I might want to create some day. It was geared towards an entrepreneurial purpose: seeing which ideas I could potentially start a company on. I’ve kept it ever since and have been continually updating it, adding new thoughts and ideas while removing ones that no longer seem viable.
Source Consolidation in Tech
I’ve recently been thinking about the potential of newer technology to accelerate a phenomenon I call “source consolidation.” It seems to me that this will only speed up as technology progresses, for example as generative AI becomes more powerful and prominent. In this post, I explain what I mean by source consolidation, how it relates to tech and generative AI, and what its implications might be.
On Declarative vs Imperative Programming
People who discuss programming often treat declarative and imperative programming as two distinct paradigms. In this post, however, I argue that these terms are really only relative and informal, and that we don’t lose much if we de-emphasize the distinction.
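As a small, hypothetical illustration of how relative the distinction can feel in practice, here is the same computation written in a more imperative and a more declarative style in Python:

```python
# Hypothetical illustration: the same computation in two styles.
numbers = [3, 1, 4, 1, 5, 9, 2, 6]

# More "imperative": spell out the steps and mutate an accumulator.
squares_of_evens = []
for n in numbers:
    if n % 2 == 0:
        squares_of_evens.append(n * n)

# More "declarative": describe the result; the iteration is implicit.
squares_of_evens_decl = [n * n for n in numbers if n % 2 == 0]

assert squares_of_evens == squares_of_evens_decl == [16, 4, 36]
```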
Symbolic Dependencies and Refactorability
What language features are (minimally) necessary to implement full refactorability of symbols? (Think “Rename Symbol” in your favorite IDE.) For example, when I used Python instead of Java, I discovered that refactoring interfaces was much harder. Is there a deeper explanation for this?
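As a hypothetical sketch of the difficulty (the class and function names are mine), consider renaming a method that is tied to its callers only by convention rather than by a declared interface; a rename tool has no symbol-level link to follow:

```python
# Hypothetical sketch: why "Rename Symbol" is harder without declared interfaces.

class CsvExporter:
    def export(self, data):
        return ",".join(map(str, data))

class JsonExporter:
    def export(self, data):
        import json
        return json.dumps(data)

def save_report(exporter, data):
    # Nothing declares that `exporter` must have an `export` method; the
    # connection exists only by convention (duck typing). Renaming
    # CsvExporter.export cannot be guaranteed safe, because a static tool
    # can't know every object that might be passed in here.
    return exporter.export(data)

print(save_report(CsvExporter(), [1, 2, 3]))

# Dynamic access makes it worse: this call is invisible to a rename tool.
print(getattr(JsonExporter(), "export")([1, 2, 3]))
```

In Java, by contrast, an explicit `interface` declaration gives the IDE a concrete symbol to track across all implementations and call sites.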
Reacting in Programming
Programming has many concepts related to reacting when something noteworthy happens. This post is an attempt to sort these concepts out and illuminate how they relate to each other. Of course, much of this is based on my own understanding, and different people may define these terms differently; still, this is meant to reflect the standard meanings accurately.
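As one hypothetical example of the kind of concept the post sorts out, here is a minimal callback-style event listener in Python:

```python
# Hypothetical sketch of one "reacting" concept: callbacks on an event.

class Event:
    """Listeners register callbacks; firing the event invokes them all."""

    def __init__(self):
        self._listeners = []

    def subscribe(self, callback):
        self._listeners.append(callback)

    def fire(self, payload):
        for callback in self._listeners:
            callback(payload)

on_file_saved = Event()
on_file_saved.subscribe(lambda path: print(f"Indexing {path}"))
on_file_saved.subscribe(lambda path: print(f"Backing up {path}"))
on_file_saved.fire("notes.txt")  # both listeners react
```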
On the Definition of Machine Learning
Machine learning is today a major subfield of artificial intelligence, with significant applications in other subfields of AI and in tech more generally, such as natural language processing and computer vision. In this post, I hold that the definition of this subject, at least as it is given today, is fundamentally informal. There are a couple of candidate definitions that we will consider.
On the Definition of Artificial Intelligence
Edit from the future (2025): Given technological developments since this post was written, I now consider its points outdated.
Artificial intelligence, or AI, is a loaded term. When it was first used, it referred to the ability of computers to play perfect-information games like chess. But that is now “old tech,” and we no longer consider it as central to AI. In fact, I contend that AI has always been the term for the bleeding edge of computer science at any given moment. That is why we might never “solve” or “figure out” AI: once we figure out today’s AI, the term will simply be redefined tomorrow to mean the new horizon to reach for.
Nevertheless, AI has recently taken on a distinct character that might prompt us to seek a closer definition of what it means today. This is the question we will investigate now.
