You can tell a lot about a person from the Fortran they write.
Inconsistent line numbering: “I’ll fix it later.”
Illogical variable names: “It works, so why bother?”
Untidy indentation: “This isn’t a beauty contest.”
Poor commenting: “I understand it now, so everyone should understand it.”
Good code reads like poetry.
***
There is a subtle irony in modern engineering.
We create layers of abstraction, only to spend days figuring out where our data is located.
Fortran never had that problem.
You always knew where things were stored. Arrays were just arrays. If a variable was important across the model, you placed it in a COMMON block and moved on.
Yes, COMMON blocks are considered outdated. Critics label them as global state.
But in real numerical computing—neutronics, thermal-hydraulics, CFD—the issue is not about philosophical purity. The challenge lies in maintaining a consistent state throughout a large, coupled model without losing track of it.
COMMON blocks made this explicit.
You had one designated spot for kinetics parameters, one for thermal state, and one for hydraulics.
There were no hidden layers, no frameworks silently copying data behind your back, and no debates over “who owns this variable.”
You could open the code and clearly see the model.
Additionally, the compiler could do its job: predictable memory layout, no aliasing surprises, and tight loops that ran fast.
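A minimal sketch of the idea, in fixed-form Fortran. The block and variable names here are invented for illustration, not taken from any real model:

```fortran
C     Illustrative only. One named COMMON block per physics domain:
C     every routine that declares /KINET/ sees the same storage,
C     so there is exactly one designated spot for the kinetics state.
      SUBROUTINE UPDATE
      REAL BETA(6), ALAMBD(6), RHO
      COMMON /KINET/ BETA, ALAMBD, RHO
      REAL TFUEL, TCOOL
      COMMON /THERM/ TFUEL, TCOOL
C     Hypothetical feedback term: reactivity adjusted from fuel
C     temperature, written straight back into the shared block.
      RHO = RHO - 1.0E-5 * (TFUEL - 900.0)
      RETURN
      END
```

No argument lists to thread through twelve call levels, no hidden copies: the layout of the model's state is visible in the declarations themselves.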
Of course, COMMON blocks can be misused.
However, anything can be abused.
When used correctly, they enforced something that many modern codes struggle with: clarity of the physical model.
The goal is not to impress others with sophisticated software architecture but to solve the equations accurately and efficiently.
COMMON blocks accomplished that quite well.
***
I chose Perl as my scripting language because I don’t think in terms of “programs”; I think in patterns.
Most of the problems I encounter aren’t algorithms in the neat, academic sense. Instead, they are messy streams of text that pretend to be structured data—logs that evolve over time, formats that were never clearly defined, and interfaces that almost, but not quite, adhere to their own rules. You don’t solve these challenges by creating elaborate abstractions; you solve them by identifying what remains constant amidst the noise.
This is essentially the mindset behind regular expressions (regex). You look at a line of text and instantly recognize what is important and what isn’t—anchors, optional parts, repetitions, edge cases. It’s not just an academic theory; it becomes instinctual. Once you view problems this way, Perl feels less like a traditional programming language and more like a natural extension of how you intuitively interpret the world.
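A small example of that instinct at work. The log format here is made up, but it shows the pieces in one pattern: anchors, fixed-width repetition, an optional fraction, and captures for the parts that matter:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical log line: timestamp, level, free-form message.
my $line = "2024-03-01 12:34:56 WARN  pump pressure drifting";

# ^ and $ anchor the whole line; \d{N} pins the fixed-width fields;
# (?:\.\d+)? tolerates an optional fractional second; the captures
# pull out only what we actually care about.
if ($line =~ /^(\d{4}-\d{2}-\d{2}) \d{2}:\d{2}:\d{2}(?:\.\d+)? (\w+)\s+(.*)$/) {
    my ($date, $level, $msg) = ($1, $2, $3);
    print "$level on $date: $msg\n";
}
```

The pattern is a description of what stays constant in the line; everything else is allowed to vary.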
Other languages can indeed use regex, but in Perl, it’s not something you reach for as an afterthought; it’s the default mode. The language is designed around the notion that text is fluid, inconsistent, and still manageable without unnecessary complexity.
There is also a certain discipline inherent in this approach. Regex does not allow for ambiguity. Either you understand the structure or you don’t; it either matches or it doesn’t. This necessitates precision in dealing with messy data, which is a valuable habit that extends far beyond the realm of scripting.
Like any sharp tool, regex can be misused. You can create unreadable one-liners that only you will understand, and maybe only for a brief period. However, this is not a flaw of the tool itself; it follows the same principle as any other area of work: if you leave a mess behind, it will come back to haunt you.
So yes, I use Perl because I think in regex—not because it’s elegant, and certainly not because it’s trendy.
I choose it because, when the problem revolves around finding structure within chaos, it aligns perfectly with the way I naturally think.
***
Tidy source code doesn’t seem urgent when everything is functioning properly. The model runs, the numbers appear reasonable, and there always seems to be something more “important” demanding attention.
When I was younger, I thought this way. As long as I knew where everything was and the logic made sense in my mind, that was good enough for me. And it was—until the moment it wasn’t.
Messy code doesn’t fail dramatically; it bides its time. Issues arise when you revisit something months later or when you need to change one small detail and discover that three slightly different versions of the same parameter are buried in various locations. Problems can also occur when a result seems off, and you can’t easily trace where things went wrong.
Tidy code isn’t just about aesthetics; it’s about being able to trust what you’ve built. A clear structure, consistent naming, and having a single, obvious location for each piece of logic all reduce the chances of hidden errors.
Ultimately, coding is no different from any other form of engineering. You shouldn’t leave things in a state that only you can understand or where correctness relies on memory. Instead, you should make your code robust, transparent, and repeatable.