Firstly, thanks to Lukas T (Blender Nodes dev) for the link.
Reading this was like seeing my development practices neatly packaged up in a new light (or perhaps just in a form that I've long been considering implicitly). So, what does this article say?
1. Avoid Distractions
The points he mentions here pretty much sum up the kinds of forces I've been battling over the past month or so to actually get any work done (blasted uni timetabling!).
For instance, IMO if you really want to do a good job of anything, it typically takes about 30 minutes (usually slightly over) to "get yourself into the right headspace" to do any quality/productive work on something. It doesn't matter whether it's programming or writing an article; the same thing applies to any task that involves a cohesive and sustained creative/narrative/analytical stream. As such, 1-hour time slots are usually fairly pathetic: they're really not long enough to focus AND still have enough time left over to get anything of consequence done. And even when you do get something done in such timeslots, it's usually of such scrappy quality that, upon reinspection during a "proper" chunk of time, the little that was done needs to be totally redone (and it gets worse if several of these patchy bits got layered on top of each other).
The other aspect of this is that knowing you need to be somewhere else at a certain time (e.g. a meeting, class, mealtime with others, or some other appointment) also tends to have a negative effect on overall productivity. On one hand, it's as if knowing that there's an impending distraction keeps part of the subconscious stuck in a "while (timeNotUp()) sleep(5);" loop, reducing the overall cognitive power available to focus on the actual task at hand.
On the other hand, I guess that such predetermined distractions end up putting a bound on the amount of time currently available for working on the thing (see #2) before the distraction occurs. This is then followed by a period before you're able to pick up your work where you left off, which includes:
1) the amount of time needed to leave your work and head off to attend to the distraction,
2) the amount of time actually spent attending to the distraction,
3) the amount of time taken to finish dealing with the distraction and return to your desk/workstation to attempt to resume work where you left off,
4) and finally, the amount of time it takes to settle back into a suitable frame of mind to continue work.
For example, under the timetable this term, the length of such "distractions" (in this case, heading off to campus for lectures) actually ends up amounting to something like:
1) 15-20 minutes to get from home to the lecture room - this includes getting changed and out of the house, driving to uni and trying to find parking, and getting into the building. Traffic conditions and nasty carpark situations can make this blow out a bit. The lower end of this range can be achieved by travelling at 50-55 km/h, including through a winding road littered with speed humps (apart from being a bit bumpy, it's generally not that bad to hit them at 40 km/h). Walking instead takes 20 minutes to get from home to the middle of campus and an extra 5 minutes to get to the building (plus being exhausted on arrival), totalling over half an hour.
2) Lectures last about 2 hours. Add a little extra for some lecturers + impromptu meetings with classmates.
3) A 15-minute trip home. That is, unless it's the 5-6pm rush hour, in which case another 10-15 minutes stuck at one or two sets of traffic lights needs to be added on top of that.
4) The standard 30-minute minimum applies, but given that a 2-hour lecture has just finished and over 3 hours have passed since being forced to drop work on a project, this is more like a 1-hour warmup phase.
So, in total, one such distraction directly eats up somewhere around 4 hours (travel, the lecture itself, and the warmup afterwards), and once its effect on the surrounding work sessions is factored in, it can just about wipe 6 hours of productive time off a day. That said, if placed at suitable times, such distractions can interact with other events such as mealtimes so that usable work sessions (see #2) are left untouched. In the case of this term's timetable though, it straddles two work sessions: it starts just as the first one ends (forcing lunch back an hour, eating into that session) and just about completely consumes the one that would usually follow. Have this happen on just about every day of the week (bar the weekend), and it's clear that we've got problems...
2. Work in Long Stretches
IMO, I've found that the best/most productive work sessions tend to last about 3-4 hours (before a short, non-interrupting break). Under such conditions, it is possible to do about 3-4 of these a day before exhaustion and incoherence start to set in.
Occasionally, there are true "in the zone" sessions where a great number of things get accomplished relatively effortlessly. It's probably how most reports get written (1-1.5 days before the deadline ;), how many normally tricky bugs get squashed one after the other in a constant stream, or how one mega awesome super feature gets coded. While such sessions are rare, it is dangerous to work in this "full blast sprinting mode" for more than 2 days, as it's generally easy to get sick after doing so (i.e. colds with bacterial complications). During such sessions, work can proceed for around 5-6 hours straight at a time. This is ideal for defeating tricky problems, but such stretches tend to be pretty hard to find during the day, which is why they typically happen late at night, when there are few to no other "distractions" likely to occur.

At times, I've pondered whether the lack of sunshine outside (i.e. no "I should probably be trying to get some more exercise outside..." or pretty things to take photos of), the general quiet everywhere (i.e. lack of traffic, birds, people randomly dropping by), and the cooler temperatures (i.e. not too hot to focus) have helped to improve productivity over such periods. But looking more carefully at these, they all boil down to what this article points out as Fail #1: distractions.
One of the popular recurring "campaigns" (see this video about data visualisation, specifically the bit where he talks about visualising these recurring trends) that the media love scaremongering and whipping up a storm about is the issue of "Is XYZ technology causing us harm?", with "harm" in these cases usually referring to things like attention span and the ability/frequency to have physical face-to-face conversations with other human beings. More often than not, these pieces come down to a few well-rehearsed staples: they usually begin with interviews with people who invariably think that everyone is regressing into an impulsive, ADHD'd, zombie-like state, where attention spans are increasingly short and segmented, precluding "higher cognitive functions" or "activities which require sustained periods of concentration" (reading prose is often cited as a primary casualty of this, as is forming a coherent and sustained argument about anything). However, just like all other "fluff pieces", these generally round off with an inclusive/conciliatory/full-circle happy ending, countering that perhaps people are just communicating in different ways; many more ways than ever before, and with richer interactions of different types that may not even have been possible in the past - that is, redistribution and augmentation, not outright replacement.
------------------------------------------------
These first two points are, in many ways, the core takeaway from this article, and IMO they can be applied well beyond just programming. In fact, they probably apply to any human endeavour involving any significant generation and refinement of ideas. Creative processes.
The rest of the items mentioned in the article were more specific to programming...
3. Use Succinct Languages
Increasingly, I'm spending a lot of my time writing code in Python. Why? It's easy to write! Of the programming languages out there, it has IMO one of the nicest balances of expressiveness (the power of compact statements and a nice set of flexible lexical features) while retaining readability. It does have some Achilles heels - notably threading, speed when dealing with sizeable amounts of numeric computation (especially for realtime display purposes), and the lack of nice syntax for do...while and switch-case control statements (some common workarounds for those last two are sketched below).
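For completeness, here's roughly how those two missing constructs tend to get emulated in Python - a minimal sketch with made-up values, not anything from the article:

    # do...while stand-in: run the body at least once, then test at the end
    items = [3, 1, 4, 1, 5]
    while True:
        item = items.pop()
        print("processing", item)
        if not items:
            break

    # switch-case stand-in: dispatch through a dict instead of an if/elif chain
    def handle_add(x, y):
        return x + y

    def handle_sub(x, y):
        return x - y

    handlers = {"add": handle_add, "sub": handle_sub}
    print(handlers["add"](2, 3))  # -> 5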
For instance, while Functional Programming Languages (like Haskell) perhaps take compactness and expressional simplicity to the greatest extreme, much like the mathematical equations (and the algebra systems underlying both), they have arguably taken it too far, to the point where the amount of effort we humans need to expend to fully visualise/comprehend/understand the ramifications of the spartan artifacts presented before us is far too high. That is, they place too much of a cognitive burden on people, who must internalise (or, as Graham says, "load and keep in your head") the meaning of the artifacts, the associated context, and the consequences. (NOTE: I've got a long rambling post on this topic in the works. In the meantime, take a look at Bret Victor's stuff to get an idea of where I'm coming from.)
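The same tension shows up even within Python itself. Here's a contrived little example (entirely my own, not from the article) of how the denser a piece of code gets, the more of it you have to unpack and hold in your head at once:

    from functools import reduce

    data = [3, -1, 4, -1, 5, -9, 2, 6]

    # Dense, "clever" form: correct, but it has to be unpacked mentally in one go
    total_dense = reduce(lambda acc, x: acc + x * x,
                         (x for x in data if x > 0), 0)

    # Expanded form: more lines, but each step is visible at a glance
    total_clear = 0
    for x in data:
        if x > 0:
            total_clear += x * x

    assert total_dense == total_clear
    print(total_clear)  # sum of squares of the positive values -> 90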
On rereading the article, two other points stand out for me here again, which I totally agree with:
a) The Sapir-Whorf hypothesis in action - people think (at least partially) in the language that their code is written in. In the case of Python, it is often said that the language very much resembles "pseudo-code". That is, there is quite a direct correspondence (with little additional "machine-satisfaction" boilerplate) between the simplified representation of the process that we hold in our heads while working on the problem, and the code that we end up writing.
Being able to hold a task-specific representation of all the useful/relevant concepts in a system in your head is an important message of this article, as it is implied (or IIRC even explicitly explained somewhere) that this is necessary to actually do a proper job both efficiently and effectively. (Incidentally, from an HCI-research perspective, I currently believe that if only we could build accurate predictive models of which components within an information space are most relevant to users, we should be able to greatly improve software usability overall. Indeed, my current research project is a step in this direction for a specific domain.)
Anyways, getting back to the original point: this near-direct correspondence between thoughts and code is only strengthened when your code is in a language which naturally lends itself to being thought in (cue a fuzzy amorphous linked-blob of thought patterns <-> design patterns <-> bank of related concepts and the similarities between all of these, much like how the smell <-> taste duality can sometimes work), and all of this combines in a fluid and organic way to contribute to productivity. The little sketch below tries to show what I mean by code that reads like the thought behind it. In a way, one of the things I think I've really realised over the past few months is how many similarities and commonalities can be drawn between commonly unrelated things/concepts; sometimes there is just a slight semantic difference related to the context in which they apply, despite the underlying implementation/conceptual framework being the same.
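For instance, the mental description "go through the objects in the scene, skip the hidden ones, and tag the visible meshes for export" maps almost word-for-word onto the Python you'd write. (The names and the toy "scene" below are invented purely for illustration; this isn't real Blender API code.)

    def tag_objects_for_export(scene_objects):
        """Tag every visible mesh object for export, skipping hidden ones."""
        tagged = []
        for obj in scene_objects:
            if obj["hidden"]:
                continue  # skip the hidden ones
            if obj["type"] == "MESH":
                tagged.append(obj["name"])  # tag the rest for export
        return tagged

    # A tiny fake "scene", just to show the function in use
    scene = [
        {"name": "Cube", "type": "MESH", "hidden": False},
        {"name": "Lamp", "type": "LIGHT", "hidden": False},
        {"name": "Rock", "type": "MESH", "hidden": True},
    ]
    print(tag_objects_for_export(scene))  # -> ['Cube']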
b) Power of Abstractions - One of the concepts that was drilled into us during first year was the power of using suitable abstractions (at the time, it was presented in relation to Abstract Data Types and Collections). That is, using suitable representations of a problem which encapsulate and hide much of the messy/underlying implementational details which are relevant at one-level-of-complexity-deeper can reveal a lot more about the underlying structure of the problem (much like fractals). This is important, as humans only have a limited amount of short-term ("working") memory for holding information about what we're currently thinking about (i.e. 7 +/- 2, or "Miller's number"). By using suitable abstractions, we can mask out the parts of the problem that aren't of immediate importance and relevance to what we're trying to do, allowing us to focus our attention and use our limited resources to dedicate the necessary attention to things that we should really be focussing on.
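To make that concrete in code, here's a toy example of my own (along the lines of those first-year ADT exercises): a tiny Stack type that hides how the data is actually stored, so that client code only has to think in terms of push/pop rather than list indices.

    class Stack:
        """A minimal stack ADT: callers think in push/pop, not in list internals."""

        def __init__(self):
            self._items = []  # implementation detail, hidden from users

        def push(self, item):
            self._items.append(item)

        def pop(self):
            return self._items.pop()

        def is_empty(self):
            return not self._items

    # Client code stays at the "stack" level of abstraction
    undo_history = Stack()
    undo_history.push("move vertex")
    undo_history.push("extrude face")
    while not undo_history.is_empty():
        print("undoing:", undo_history.pop())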
Now admittedly, as a kid I didn't understand this. You see, at the time, I thought that the use of abstractions was a bit of a "cop-out" - something you did if you just weren't good/talented enough to do without such a "crutch". The case in point at the time was drawing. Specifically, many art books would frequently refer to the value of blocking out a drawing using rough placeholder shapes first, to get a feel for the forms and other compositional issues. However, at the same time, you'd see some people who could just draw anything at full levels of detail from thin air: no guidelines, no blocking shapes in advance, just immediately drawing fully formed and voluminous figures masterfully with a series of well-placed lines. Perhaps as a sign of growing older and more careful about things, looking back I'm increasingly seeing the value of planning and blocking. Certainly when doing technical drawing with set square and pencil, the use of guidelines and rough outlines helped greatly with being able to visualise what was going on and thus keep everything on track. Similarly, I found that the habit of using a centerpunch to create a single well-defined groove/depression for a drill bit to sit in before starting to drill was quite beneficial, as it helped ensure that the drill wouldn't dangerously shudder all over the place and ultimately put the hole in the wrong place.
Only now do I see that all of this comes down to the core idea behind the use of abstractions: focusing your attention on the larger issues present at a particular level of detail (the one relevant to the current step in the process), so that you can avoid the more costly errors that come from heading off on a weird tangent, since the simplified situation allows you to see and address these issues within a controlled framework.
Getting back to Graham's article: his point about "bottom-up programming", or more specifically, building your code in layers so that each layer acts as an abstraction over the lower-level details, allows you to describe and understand the program in simpler, more manageable terms. This in turn allows us to keep track of larger amounts of the system, as we're looking at it from higher levels of abstraction, with fewer specifics and more generalities. It can also be said that at each level of abstraction, your code acts as a domain-specific language for succinctly describing the problem at that level, representing your understanding of it.
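A rough sketch of what that layering can look like in Python (the functions and the little geometry "domain" here are invented for illustration): low-level helpers at the bottom, with a top-level function that reads almost like a plain description of the task because the layers beneath it act as its vocabulary.

    import math

    # --- Layer 1: low-level geometry helpers ---
    def distance(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    # --- Layer 2: operations built purely on top of layer 1 ---
    def path_length(points):
        return sum(distance(p, q) for p, q in zip(points, points[1:]))

    def nearest_point(target, points):
        return min(points, key=lambda p: distance(target, p))

    # --- Layer 3: a little "domain language" built on layer 2 ---
    def summarise_route(route, home):
        return {
            "length": path_length(route),
            "closest_to_home": nearest_point(home, route),
        }

    print(summarise_route([(0, 0), (3, 4), (6, 8)], home=(1, 1)))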
5. Write Rereadable Code
Especially in the initial stages of writing some code, your priority is usually figuring out how to do something (or whether it can be done at all), in which case expressional immediacy matters most (i.e. we can use the dirtiest tricks for now rather than wasting time making something perfect, if the larger approach may be bound for failure anyway - see Brad Bird's clip on the Pixar website about directing). As long as it all makes sense to you for long enough that you can quickly figure out what you were thinking (if it works in the end), so that you can...
4. Keep Rewriting Your Program
Although not practical (or even necessary) in all situations, short "throwaway" sprints to just test out an idea, gain some experience, and learn the ins and outs of the problem are very valuable. Heck, with the Keying Sets stuff and many other things I've worked on, it's often been beneficial that I've gone through several iterations of trying to develop a good solution, as each attempt can build on the successes and flaws of the previous ones.
As Graham mentions, it also helps maintain or improve clarity of design, ensuring that you now understand most of the code yourself.
8. Start Small
In short, it's generally accepted these days that "Big-Bang" development should be avoided if at all possible. Apart from carrying a lot more risk of never reaching successful completion, it also tends to mean spending a lot longer on reintegration issues.
But perhaps the main advantages of this approach, as mentioned in the article, stem back to our friend: the power of abstractions. Incrementally growing a project means that you can safely test and ensure that various parts of the solution are working before trying to build other parts on top of them. This means that you can hopefully treat existing parts as black boxes (i.e. details abstracted away; they should be fine to just use, so any problem must be in the new code). However, this only really works if you've made sure to test your code well.
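A minimal sketch of what that tends to look like in practice, using Python's built-in unittest module (the function and its tests are made up for illustration): once a building block like this has a few tests around it, it can be treated as a black box, and later bugs can usually be assumed to live in the newer code sitting on top of it.

    import unittest

    def parse_frame_range(text):
        """Parse a "<start>-<end>" string into a (start, end) tuple of ints."""
        start, _, end = text.partition("-")
        return int(start), int(end)

    class TestParseFrameRange(unittest.TestCase):
        def test_simple_range(self):
            self.assertEqual(parse_frame_range("1-250"), (1, 250))

        def test_single_digit_bounds(self):
            self.assertEqual(parse_frame_range("5-9"), (5, 9))

    if __name__ == "__main__":
        unittest.main()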
It also happens that such an approach lends itself nicely to being able to perform slightly exploratory development, where you decide on the processing steps necessary based on the outcomes of the currently implemented steps. I've done this a few times in as many recent projects, allowing for quite a nice learning process. However, be warned that the resulting code can often end up with a nasty "branching-stick" morphology, which needs careful checking later to see if any design refactoring may be beneficial. In which case, see #4.
6/7. Many cooks spoil the broth...
Beware the explosion of communication channels required as you increase the number of developers involved with a particular piece of code (with n developers, that's up to n(n-1)/2 pairwise channels). In particular, don't forget that the code itself is a communication medium between programmers, as much as it is used to communicate with machines.
Specifically, I'd like to draw your attention to this point from the article:
You never understand other people's code as well as your own. No matter how thoroughly you've read it, you've only read it, not written it. So if a piece of code is written by multiple authors, none of them understand it as well as a single author would.
From my experience over the years, I ultimately have to agree with this. Sure, anyone may add an extra check somewhere to fix a bug. But will the original developer now fully understand the consequences of this new check and its effect on the code? Taking a step back: did the developer adding this new check even fully understand the original code that he was trying to fix?
Obviously we hope the answer in both cases is yes, though a number of factors may stand in our way in either case. Like all of the above!
------------------------------------------------
Comments:

Does coding need mathematical skill? I'm kinda a "right brain" kind of person :p
Certain kinds of programming do; you're not going to be using much mathematics in an Excel macro that shuffles some data around, while scientific computing may require in-depth mathematical study. Programming is a very broad subject.
The act of programming never requires complex math, per se. It definitely requires logic, intensely unforgiving logic, which is a way of thinking that not everyone is naturally equipped to do well. That isn't to say it can't be learned, but there is no question that "thinking like a computer" comes more naturally to some than others.
As for math, that really only comes into play in the context of particular domains. Often the math will be worked out by a domain expert and the developer's job is to encode that math into functional steps (an algorithm).
There is a specialized type of software engineering known as computer science, where software itself is the domain. Programming of that type typically involves developing new algorithms and data structures which are optimized around a particular set of constraints (for instance, devising an ordered sequence search that's as fast as a skip-list but requires less memory, solving a P versus NP problem, etc.) But that type of work is an outlier - the vast majority of programmers never touch such problems as there aren't many people willing to pay for it. Game engine development is another outlier in that developers themselves are generally expected to be the domain experts and thus, need to know the math involved in 3D space rendering, physics, lighting, etc.
Aside from the outliers, it is important for any high level developer to be familiar with algorithms and data structures and to be able to identify when it's appropriate to take one approach over another, just as it's important for a structural engineer to know how to perform stress analysis on a joint. Known values being plugged into known equations -- which is not really doing math any more than a cashier at a supermarket is "doing math", though the complexity of the known equations may be higher.