I wrote my first line of code in 1967, on a "Hollerith" card with perforations that let us make holes with a pencil. We'd send the cards up to London and a week later (yes, I am not making this up) we got the results back. My first program solved the equation x = sech(x). It probably took about 12 lines of code (that is to say, 12 cards), but I don't remember exactly. The only error I made was in the comment (see below), where I declared that the program was based on Newton's method of "apprnximation".
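The original 12-card program is long gone, but for the curious, here is roughly what solving x = sech(x) by Newton's method looks like today. Everything here (starting point, tolerances, function names) is my own modern reconstruction, not the 1967 code:

```python
import math

def solve_sech_fixed_point(x0=1.0, tol=1e-12, max_iter=50):
    """Newton's method applied to f(x) = x - sech(x) = 0,
    i.e. the equation x = sech(x)."""
    x = x0
    for _ in range(max_iter):
        sech = 1.0 / math.cosh(x)
        f = x - sech                         # residual
        fprime = 1.0 + sech * math.tanh(x)   # d/dx [x - sech(x)]
        step = f / fprime
        x -= step
        if abs(step) < tol:
            break
    return x
```

Newton's method converges quadratically here because the derivative, 1 + sech(x)tanh(x), is comfortably bounded away from zero near the root.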
But after programming more or less full-time for about nine months in 1969 (and getting paid for it), I took time out to get my undergraduate degree where I did almost no programming whatsoever.
So, I recognize Bob Martin as a fellow "old fart" who has been through the trenches, like me. He gives a damn good presentation and my hat is off to him!
I found myself agreeing with him so wholeheartedly that it made me look back with quite some frustration at all of the times that I worked so hard to advocate good coding practices, only to be fought tooth and nail.
I remember, for instance, that back in 1984 a certain programmer, whose name I will omit although it is burned into my memory, wrote a function (we didn't call them methods in those pre-OO days) that was more than 3,000 lines long! What was even sadder was that he didn't think anything was amiss, and neither did his manager (my peer).
Then there were all those arguments about comments in code. I adhered to the view that if the code needed to be commented there was something wrong with it. And, worse, the comments were likely to become out of date as programmers changed the design but neglected to make the corresponding changes in the comments. Others disagreed vehemently.
And then, do you remember all that stuff about Yourdon and "Structured Programming" (DeMarco et al.)? Those guys were living in cloud cuckoo land. But you couldn't say so in front of one of the managers who thought that such techniques were the proper way to write software.
Gee, I'm just getting started. I remember all those battles I had about "Q/A". The worst of these were not that long ago, in around 1992 and the years following. I wanted to concentrate on automated testing (we would call this unit testing nowadays), but that was met with huge skepticism. I had even developed a unit testing methodology of my own to support it. Not reliable enough, I was told: you needed real people to sit there and push buttons. Aaargh!
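For readers who came along after this battle was won: the point of automated testing is that checks like these run in seconds, identically, on every build, with no button-pushers required. A minimal sketch using Python's `unittest` (purely illustrative; my own methodology long predates this framework), reusing the sech function from the anecdote above as a convenient test subject:

```python
import math
import unittest

def sech(x):
    """Hyperbolic secant: 1 / cosh(x)."""
    return 1.0 / math.cosh(x)

class SechTests(unittest.TestCase):
    def test_even_symmetry(self):
        # sech is an even function: sech(-x) == sech(x)
        self.assertAlmostEqual(sech(1.5), sech(-1.5))

    def test_bounded_by_one(self):
        # sech(x) peaks at 1 when x = 0 and decays toward 0
        for x in (0.0, 0.5, 2.0, 10.0):
            self.assertLessEqual(sech(x), 1.0)

if __name__ == "__main__":
    unittest.main()
```

Run it on every build and a regression announces itself immediately, instead of surfacing weeks later in a Q/A cycle.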
What my opponents in this debate failed to realize is that the inevitable time lag between a software release and Q/A's testing of it means that, almost by definition, it is constantly in a broken state. As soon as you try to go back and fix any bugs, the underlying software has already changed and you are extremely likely to create new bugs.
The modern "agile" approach with continuous integration, scrums, etc. minimizes this latency effect by early detection of problems. It's the only sane way to go.
I could continue and maybe I will in another blog entry. Meanwhile,
OK, back to work!