Nat Knight

Reflections, diversions, and opinions from a progressive ex-physicist programmer dad with a sore back.

#mentalhealth #motivation

I'm curious about some experiences I've had in the course of my work as a software developer. My curiosity comes from a place of profound ignorance, based on subjective observations of my own mental and emotional state.

This is not a well-researched or thoroughly studied position, nor is it:

  • An assertion of fact
  • A veiled shot at an employer or project from my past
  • An excuse for toxic or unprofessional behaviour
  • A manifesto
  • A cry for help
  • A call to arms
  • etc.

It's to do with a particular kind of anxiety. There are lots and lots of ways that work can make a person anxious, so I want to be clear: I'm asking about a particular, fairly narrow, set of circumstances.

Read more...

#python #testing

When I'm doing data analysis or building applications with Python and I have to give entities a unique ID, I like to use random UUIDs instead of sequential numbers. Sequential numbers leak information about the order and total number of records, but I want my IDs to be just a unique identifier, nothing more.
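For what it's worth, a minimal sketch of the idea using the standard library (the `new_id` helper and the sample records are illustrations, not code from any of my projects):

```python
import uuid

# A random (version 4) UUID carries no information about creation
# order or dataset size, unlike a sequential counter.
def new_id() -> str:
    return str(uuid.uuid4())

# Hypothetical records: each gets an opaque, collision-resistant ID.
records = [{"id": new_id(), "name": name} for name in ("alpha", "beta")]

# IDs are unique across the collection.
assert len({r["id"] for r in records}) == len(records)
```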

Read more...

#podcasts #programming

There is a great diversity of material for improving programming skills and knowledge. It's no great task to give examples of books, blogs, videos, live-streams, interactive tutorials etc. This diversity makes our profession better.

A surprising inclusion in this list is the podcast, which one might expect to be a bit useless: we work with a lot of plain text, complex data structures, design and layout. At first blush, this seems like poor material for a purely aural medium.

Nevertheless, I derive great enjoyment and value from listening to other programmers talk about their craft and I recently had an object lesson in why.

Giant Robots Made My Day

While working my way through the back-catalogue of the Giant Robots Smashing into Other Giant Robots podcast, I listened to a conversation between Ben Orenstein and Matt Knox about deliberate practice for programmers. A few days later, I have a GitHub repo for my kata and a whole bunch of energy for them that I didn't have last week.

That's not a bad return when all I started with was a half-hour of dishes and a free MP3. Where did I find this enthusiasm, and why is the podcast the perfect medium to go looking for it?

They Don't Cure Cancer, But...

An obvious benefit of listening to other professionals is that it keeps good ideas front-of-mind. Kata for programmers are hardly a new idea (heck, Jeff Atwood wrote about them on CodingHorror, so pretty much everyone has heard of them), but it's easy to lose focus and drive in that amorphous category of “professional development” without the occasional injection of perspective. Regularly exposing oneself to the ecosystem of good ideas is a good way to stay on track.

Podcasts also have more emotional weight than blogs or Hacker News. Much as one might like to be a perfectly rational logic machine, a little tug on the right heart-strings can be wonderfully effective. There's a palpable excitement, an infectious enthusiasm that emanates from professionals keen on their craft, and it comes through headphones loud and clear.

And in case this all seems like perfect fellowship and sunshine, there's a slightly pointed aspect to consider as well: competition. Besides being sources of inspiration and content, the characters coming down over WiFi can be the masters we seek to surpass or the antagonists against whom we prepare our opinions. As much as we need idols and peers, rivals and tyrants also have their uses.

I'll Be Listening

So for code snippets and clever hacks, blogs are still king. Videos and prose will win out in the domain of architecture and lofty ideas. But as long as I aspire to a better technical culture than I inhabit, I'll listen to podcasts to make up the difference.

Finding good programming podcasts can be a bit challenging. There are a ton of options to choose from and Sturgeon's Law definitely applies. Try lots of things, be selective, and use an RSS reader or podcatcher to automate your subscriptions. Here are a few solid offerings to get you started.

#advice #problemsolving #debugging

The comparison between debugging and the scientific method has been made before. Originally, I understood this to be a cyclical process where you proceed based on observations of your program using things like interactive debuggers, lots of print statements, and a short iteration cycle. A recent experience with some buggy code has convinced me that this is the wrong way to look at the scientific method as it applies to debugging—at least for the beginner.

As part of my thesis research I'm working on tracing sources of emissions of carbonaceous aerosols using a general circulation model. This involves having an optimization routine modify emissions to bring them into agreement with observations. My results were obviously wrong, but they had a pattern that matched the spatial distribution of the biogenic emissions source in the model to the pixel. Clearly this observation implied that the biogenic emissions were to blame. (Spoilers: that wasn't the problem.)

What followed was a downward spiral of frustration and ineffectual iteration in the observe-edit-run cycle. That madness only stopped when I settled down and remembered a part of the scientific process that I'd left out (as the astute reader will have noticed). The essence of the scientific method and the most important insight it offers into debugging is:

Have a hypothesis.

Before you load the debugger, before you go crazy with print statements, before you do anything but carefully review the information about why your program is failing, have a hypothesis. Doing so ensures you spend some time thinking about your code, helps structure your investigation and keep it moving forward, and steers you away from some bad habits. In retrospect it's an obvious oversight, but it's essential to making the process work efficiently.
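One way to keep yourself honest is to write the hypothesis down as an executable check before reaching for the debugger. This is only a sketch; `scale_emissions` is a made-up stand-in, not a routine from my actual model:

```python
# Hypothesis: the routine under suspicion returns negative values
# for small inputs, which would explain the garbage downstream.

def scale_emissions(value, factor):
    # Made-up stand-in for the suspect routine; the stray offset
    # plays the role of the bug being hunted.
    return value * factor - 0.1

def check_hypothesis():
    result = scale_emissions(0.05, 1.0)
    # If this assertion holds, the hypothesis stands and the next
    # step is clear; if it fails, the hypothesis is rejected and
    # it's time to form a new one, not to spray print statements.
    assert result < 0, "hypothesis rejected: output is non-negative"

check_hypothesis()
```

Either outcome moves the investigation forward: the hypothesis stands, fails, or exposes a gap in your mental model of the program.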

To have a hypothesis, you have to understand how your program works. Without a solid grasp of its overarching structure, an accurate mental model of how it executes, and at least an inkling of where the problem is, you can't form a hypothesis about your program. You also probably can't debug it effectively. If you find that you can't easily come up with a hypothesis or two, it's probably time to pause and learn about the parts you don't understand. Having a brief space to think without the distraction of furiously interrogating the source code can also be enough to bring about that vivid flash of insight. It's important to have the right ratio of thinking, coding, and refining your ideas, but striking that balance is no mean feat even for an experienced developer. Having a hypothesis is a shortcut, an easy thing to keep in mind to get the mix just about right.

One of the motivations for applying the scientific method to debugging is to simplify an otherwise overwhelming task. Trivial examples aside, bugs arise in obscure and unexpected ways, so it's never obvious how to tackle them. Through a combination of optimism, desperation, and the apparent lack of alternatives, it's easy to escalate into logging, simulations, elaborate visualizations, flushing out the problem through sheer force of print statements etc. There are a multitude of activities that can help you understand how your code works, and you can pour hours into them achieving nothing but a frantic, buzzed half-instinct about what's going on. By putting the hypothesis first, you focus on the goals of your analysis and have a clear endpoint: the hypothesis stands, doesn't, or reveals another hole in your thinking. Having a hypothesis isn't as good as a detailed plan or solid good habits and instinct, but it lifts you out of the weeds and demands that you proceed in light of the bigger picture.

I've noticed a number of transparently bad habits in my debugging: poring over source code unrelated to my problem, belabouring preliminary points, relentless logging and printing, and (in times of truly dire frustration) the "change-a-line-and-pray-it-compiles" method of code validation. Each of these behaviours has its own trigger, its own destructive cycle, its own threats to watch for. Each is also inconsistent with having a hypothesis. Perhaps the most beneficial effect of having a hypothesis is that it's a tight, simple defence against seductive bad habits.

In my carbonaceous aerosols case, it turns out there was a bug in my optimization algorithm; the extra source was a red herring. I arrived at this conclusion after a large amount of ineffectual flailing, followed by a walk, a few deep breaths, and a focused effort to keep my hypothesis in mind as I tracked down the problem. It's a useful brain-hack; coming to it on my own (at least in this instance) has made it particularly sticky. Now if only I'd had it two months ago ...