Real-World Systems Aren’t Mathematical

When I first got into computer science, I came to it from a math background. Back then, I thought of computer science as essentially a way to “run math physically.” While in many respects this is true, and a great way to capture the essence of the field, I’ve learned over time that there are caveats to thinking of real-world systems, like those built with code, as mathematical. I’ll discuss some of these points now.

First, product expectations are fundamentally informal. Sometimes they can be formal, for example if your product is simply expected to sort a list and do nothing else, but often they are not. For example, none of the best cryptography standards and requirements in the world can defend against social engineering attacks, so no product that “implements security” can stop at cryptography and declare the problem solved. In fact, this fundamental informality is a large part of why it seems to me that computer security will always be a cat-and-mouse game between security engineers and hackers. What it means to “get hacked” may not have a single formalizable definition, and instead may be something we need to address with multiple different measures.
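To make the contrast concrete, here is a minimal sketch in Java (invented for illustration; the names are mine, not from any real spec) of what a fully formal expectation looks like. “Sort a list” can be pinned down completely as two checkable properties; there is no analogous checklist for “be secure.”

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: "sort a list" as a fully formal specification.
public class SortSpec {

    // Property 1: the output is in non-decreasing order.
    static boolean isSorted(List<Integer> xs) {
        for (int i = 1; i < xs.size(); i++) {
            if (xs.get(i - 1) > xs.get(i)) return false;
        }
        return true;
    }

    // Property 2: the output is a permutation of the input
    // (same elements, same multiplicities).
    static boolean isPermutationOf(List<Integer> out, List<Integer> in) {
        List<Integer> a = new ArrayList<>(out);
        List<Integer> b = new ArrayList<>(in);
        a.sort(null); // natural ordering
        b.sort(null);
        return a.equals(b);
    }

    // These two properties together completely capture the expectation.
    // No such pair of checks exists for "this product is secure."
    static boolean meetsSpec(List<Integer> input, List<Integer> output) {
        return isSorted(output) && isPermutationOf(output, input);
    }
}
```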

Furthermore, product expectations often change as users’ desires shift. Even if you were able to formalize those expectations at one point in time, the formalization may not apply afterward, which makes capturing users’ expectations a fundamentally informal, ongoing exercise.

This leads to a fundamental difference between how math and technology develop. In math, we can define a certain kind of object or a specific logical system and study its properties, and that knowledge becomes a permanent part of humanity’s collection of math. There will always be a research opportunity in building on it, and it remains valid forever. In tech, however, prior work can easily become outdated and useless when users’ desires change. Unlike in math, it’s not good practice to store old code and expect that there will always be opportunities to use it or build upon it. (In fact, legacy software is usually something people in the tech industry try to avoid when they can.) And if we do take a “mathematical formalization” approach to technological development, the time cost of formalizing, re-formalizing, and so on, especially in an industry where desires rapidly change and grow, is often not worth it. Instead, it is better to constantly seek user feedback and stay “light on your feet” in implementing it, responding quickly rather than “planting your feet down too heavily” at any one time.

A related point is that a single set of interfaces (think Java interfaces, which act as a lightweight formal specification enforced by the compiler) often won’t capture all the possible ways that product expectations can shift. At some point you will need to change and migrate interfaces. Ultimately, this means the idealized “interface vs. implementation” divide doesn’t always hold in real-world software engineering; sometimes you need to “break” interfaces and understand implementations too.
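As a hypothetical illustration (the names here are invented, not from the post), consider how an interface designed around one set of expectations has to be broken when those expectations shift:

```java
import java.io.InputStream;

// Hypothetical example: an interface frozen around yesterday's expectations.
// Every value is read and written in one piece, fully in memory.
interface KeyValueStore {
    byte[] read(String key);
    void write(String key, byte[] value);
}

// Once users start storing values too large to fit in memory, no clever
// implementation behind the old interface can help; the interface itself
// must be broken and clients migrated to a streaming version.
interface StreamingKeyValueStore {
    InputStream read(String key);
    void write(String key, InputStream value);
}
```

No amount of work on the implementation side fixes this; the change has to cross the interface boundary, and every caller has to migrate with it.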

We can see many examples of these ideas in tech. Languages and frameworks have shifted over time: at one point, the most common language might have been Fortran; later, C and C++ were prominent; after that, Java; then JavaScript had a moment, with calls for things like “isomorphic JavaScript” (JavaScript for the entire stack of an app); and now Python is popular, in part due to strong libraries in scientific computing and machine learning. As a personal example, I spent quite a bit of time in 2013 and 2014 learning the ins and outs of JavaScript frameworks like Meteor and Angular, as well as the architecture of things like Chromium and V8 (from Google Chrome). Then I spent the next four years using other languages (like Java and C++), so I didn’t dive as deeply into the JavaScript landscape. When I returned to look at JavaScript more closely after those four years, guess what? Everything had changed. Meteor wasn’t popular anymore; React had taken over in a big way; and the architecture of V8 had changed considerably. Much of what I had learned in 2013 and 2014 no longer applied.

One more important reason to break interfaces is to address errors. Real-world systems encounter issues all the time, and the cause often lies beyond the logical divide imposed by the interface (inside its implementation). Debugging therefore usually requires a deeper understanding than what the interface alone exposes. And this applies to all kinds of “errors”: not just incorrect results, but performance problems and so on.
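A classic Java example of this: both lines below are written purely against the List interface, and nothing at the interface level explains why the loop is slow.

```java
import java.util.LinkedList;
import java.util.List;

public class HiddenCost {
    public static void main(String[] args) {
        List<Integer> xs = new LinkedList<>();
        for (int i = 0; i < 100_000; i++) xs.add(i);

        // Innocent-looking at the interface level, but get(i) on a
        // LinkedList traverses node by node from one end of the list on
        // each call, so this loop is O(n^2) overall. Diagnosing that
        // means looking through the interface at the implementation.
        long sum = 0;
        for (int i = 0; i < xs.size(); i++) {
            sum += xs.get(i);
        }
        System.out.println(sum);
    }
}
```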

In some ways, these points connect to rejecting perfectionism in technological development. Don’t be too concerned with a perfect implementation of the current set of design requirements, because those requirements can change relatively frequently. (They may concern functionality, performance, or something else; the idea applies generally.) Donald Knuth famously said that “premature optimization is the root of all evil,” and while I may not have fully understood this in my early days in computer science, I certainly do now. It’s important to keep in mind that while code is logical and mathematics is a strong basis for computer science and tech, these are ultimately real-world systems, and we can’t go too far in treating them like idealized mathematical ones.
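To sketch what Knuth’s warning can look like in code (an invented example, not one from the post): the “optimized” version below adds a cache before any profiling showed one was needed, and every future change to the requirement now also has to reason about that cache.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of premature optimization.
public class Pricing {

    // Simple version: obviously correct, and trivial to change when the
    // product requirement inevitably shifts.
    static double price(int quantity, double unitCost) {
        return quantity * unitCost;
    }

    // Prematurely optimized version: a cache introduced before profiling
    // showed any need. Changing the pricing rule now also means reasoning
    // about stale entries and unbounded memory growth.
    static final Map<String, Double> cache = new HashMap<>();

    static double cachedPrice(int quantity, double unitCost) {
        String key = quantity + ":" + unitCost;
        return cache.computeIfAbsent(key, k -> quantity * unitCost);
    }
}
```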
