Tech Trained Monkey

Everyday Problem Solvers

The “-2000 lines of code” report

First of all I’m sorry I’ve been so absent. I’m working on something of my own, and I hope I can get it stable and working properly so I can talk about it here.
Meanwhile I read a nice post and I’m reposting it here:

In early 1982, the Lisa software team was trying to buckle down for the big push to ship the software within the next six months. Some of the managers decided that it would be a good idea to track the progress of each individual engineer in terms of the amount of code that they wrote from week to week. They devised a form that each engineer was required to submit every Friday, which included a field for the number of lines of code that were written that week.
Bill Atkinson, the author of Quickdraw and the main user interface designer, who was by far the most important Lisa implementor, thought that lines of code was a silly measure of software productivity. He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code.
He recently was working on optimizing Quickdraw’s region calculation machinery, and had completely rewritten the region engine using a simpler, more general algorithm which, after some tweaking, made region operations almost six times faster. As a by-product, the rewrite also saved around 2,000 lines of code.
He was just putting the finishing touches on the optimization when it was time to fill out the management form for the first time. When he got to the lines of code part, he thought about it for a second, and then wrote in the number: -2000.
I’m not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied.

About “Who should learn programming”

As most of you already know, there’s a buzz about who should learn to program and who should not. I’ll say right now that I’m in favor of everyone learning, because as you learn to program you pick up a set of techniques that will most definitely help you for the rest of your life.

Most people who are discussing this matter are focusing on the question “Ok, I learned to program, now what do I do with it?”. My answer to this question is: “I don’t care!”

Phases of development

Late last week I wondered: where do the software terms alpha and beta come from? And why don’t we ever use gamma? And why not theta or epsilon or sigma?


Alpha and Beta are the first two characters of the Greek alphabet. Presumably these characters were chosen to refer to the first and second rounds of software testing, respectively.

But where did these terms originate?

Logging and software development maturity

According to Wikipedia, maturity is:

a psychological term used to indicate how a person responds to the circumstances or environment in an appropriate manner. This response is generally learned rather than instinctive, and is not determined by one’s age.

It is a widely known and accepted fact that writing logs is a very good thing, since logs help you find out what happened at a given moment in time. It would be rule number one in the universal developer handbook, if such a thing existed… But again and again I find programs that simply don’t do it.

I’m used to clients calling to say that something is wrong with the product, but it makes me cry when I ask for the logs and hear that there are none. Then I want to find the dev who didn’t write any. I always find the bastard. If by any chance it is not an intern, oh help me lord, some blood will be shed!

Logging came to me as a very instinctive thing. My first programs didn’t write any logs, but of course, I was the only one using them. As I got better and better I felt the need for more control over my “world”: I was scared that a problem would occur and I wouldn’t be able to know about it. To really evolve and grow as a developer one needs to build bigger and more complex programs and, more importantly, learn from one’s own mistakes. Logging is perfect for the learning part, because you don’t need to bother the users with questions and other stuff (provided you logged all the information you needed).

One might wonder what this has to do with maturity. Just logging is not enough; logging for the sake of it does not help. You need consistent, useful, categorized and, most of all (I think), easy-to-track log messages. It takes real maturity to know how to log. You should log everything that happens, but each event must be logged in its own particular way: logging user interactions the same way you log Null Reference Exceptions is not very helpful. Remember that logging provides information, and information is gold!
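As a sketch of what “categorized” logging can look like (the logger names and events here are invented for illustration), Python’s standard `logging` module already gives you severities and named categories:

```python
import logging

# A format that makes each event easy to categorize and track:
# timestamp, severity, logger name (the "category"), then the message.
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)-8s %(name)s: %(message)s",
)

ui_log = logging.getLogger("app.ui")         # user interactions
error_log = logging.getLogger("app.errors")  # exceptions and faults

# A user interaction is routine information, so it gets a low severity...
ui_log.info("user clicked 'Save' on the invoice screen")

# ...while a null-reference-style fault is logged as an error, with the
# stack trace attached, because it needs a developer's attention.
try:
    invoice = None
    invoice.total  # AttributeError: 'NoneType' has no attribute 'total'
except AttributeError:
    error_log.exception("failed to compute invoice total")
```

A reader (human or tool) can then filter by category and severity instead of wading through one undifferentiated stream.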

A lot of people do write a bunch of logs. They actually log the entire thing. Take World of Warcraft for example: everything you do in the entire game can be traced! They can trace every little critter you killed. They know which weapon you used. They log everything! But, as I said before, just writing messages is not enough. You have to write them in an understandable, easy-to-track way. If a developer on your team can’t trace back, for example, a user’s actions (buttons clicked, radio-button choices, on-screen info, check-boxes selected, etc.), your log is not good. It might help you find and solve a problem, but it could do a lot better. I recently learned an important lesson.

I must say that I had never had a problem with software in a production environment (yes, my programs have bugs, but they always surfaced very soon after deploy, so I was always close by to help). Thank god, none of the programs I wrote ever gave me serious headaches. Until a couple of weeks back. The deploy went fine and everything was peachy. Then one beautiful morning the client called and reported an issue. As always I asked for the logs, he sent them, and my world fell apart. I could not trace what was going on. Everything was there, but I could not put the events on a time-line. Due to multiple front-ends and multiple web-service servers it was very, VERY hard to track what had happened to a user. I had an especially hard time figuring out which messages in the web-service logs belonged to whom. It was a nightmare. I wanted to cry. But I managed to do it and I matured a bit… no, I matured a byte… sorry for the pun…

So now I’m developing a new logging lib and a log-reader tool (feel free to “borrow”) designed to handle multi-user/thread/server scenarios. The log lib we use today is perfect for development and for single-user/thread/server scenarios: you can easily follow what happened and when, what came after what, and so on. But when multiple threads, multiple users and async operations come out to play, things get ugly. Since the current lib only logs the time and severity of each event, it’s very hard to track the continuous actions of a user, or even the path of a particular thread.
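A minimal sketch of the idea behind such a lib, using Python’s standard `logging` module with an invented per-request correlation id, so that a reader tool could later reassemble each user’s timeline across threads and servers:

```python
import logging
import threading
import uuid

# Tag every record with the thread name and a correlation id, so that the
# scattered lines of one logical operation can be stitched back together.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(threadName)s] [%(correlation_id)s] %(message)s",
)
log = logging.getLogger("app")

def handle_request(user):
    # One id per logical operation; in a real system you would also pass it
    # along to downstream web-service calls so their logs match this one.
    cid = uuid.uuid4().hex[:8]
    ctx = {"correlation_id": cid}
    log.info("request started for %s", user, extra=ctx)
    log.info("request finished for %s", user, extra=ctx)

threads = [threading.Thread(target=handle_request, args=(u,))
           for u in ("alice", "bob")]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Even when the two users’ lines interleave in the output, grepping for one correlation id recovers that user’s complete path.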

The lib itself is not enough, though. The real magic is in the log reader. The reader creates a visual path of the messages: it literally draws a “fork” for multithreaded code, visually showing parallel operations and such! It’s getting very cool; when it’s ready I’ll post the source code… but now back to maturity…

I must disclose that I’m not the most seasoned developer out there… for Christ’s sake, I’m only 24. But this much I can say: write easy-to-read, easy-to-understand log messages. Your kids will be thankful! OK, maybe not, but you will be when you need to know what’s really going on!

When is it time to change/move/replan my database?

Quite a while ago now, a “Database and DBMS” teacher assigned a project where we were supposed to research something he would not cover in class… some people gathered in groups… I did mine alone… I don’t like doing this kind of thing with other people, because I hate it when the idiot in the group says something like “Oh, you’re wrong! I misunderstood the teacher, didn’t come to class, never worked with that and of course I didn’t study jack-shit about what you just said, but I know you’re wrong” (more about that in other posts)… I’d rather do it alone and sell the “extra spots” to people who didn’t have the time to do the paper… oh my god… back to the point… I did my paper on indexing and also prepared a killer presentation.

When I say that, some people gasp at the fact that we have two semesters (one year) of classes focused on databases and yet indexing was not even mentioned… luckily for me I don’t rely on college to teach me anything… I learn on my own and go there to get a degree (again, more on that later)…

In the paper I developed the idea of a “threshold line”: an imaginary point after which the time taken for a certain query to return is so great that the results no longer resolve the issue or demand they were meant for. To illustrate, imagine that you have to go to a party tonight and have to pick up your date first. You have two cars: one is a convertible, the other is not. What good is tomorrow morning’s weather report in helping you decide which car to use? Now try asking Google what good a search result is if it takes too long to arrive…

I also worked on the idea that when you re-index your database your “threshold points” move up, so you can’t wait until you’re at the limit to re-index, because you will burst your quota! The idea is to plan and determine the exact point at which you can re-index while still staying inside your “comfort zone”. One line of thought that I did not expose in my presentation is that eventually re-indexing won’t solve your performance problem at all.

A day will come when your single database server won’t be able, even with fully indexed files, to handle all the queries. At this point a natural answer comes to mind: add more servers. But here’s my point: with one server you can handle X requests; with two servers you can’t handle 2X requests. Even very well architected/engineered/implemented programs can’t scale like that. And most applications are not so well built, so when necessity comes, who has your back?

Let’s face it: planning might be fun for some, but it’s painful for most of us. We are simply unable to predict the future, and most of our managers (at least mine) think that gathering data today to generate a projection for anything other than sales is just wasted time. So when we get bottlenecked, we’re not even close to getting that new server… We have to prove that we have a problem today that will be solved by buying/renting/leasing a new server. The thing most managers don’t realize is that an expansion plan is like a disaster-recovery plan: “You will only use it if you need it. You might give it a try in a staging environment. But you won’t really use it unless necessary.”

An important part of the plan that most people forget to study is “When should I start working?”. If your plan says to call the maintenance guy once the pipes are broken, rather than when you realize the pressure is above normal, I must say your plan could be better. In project management we call this risk mitigation: identifying a possible problem and working on that item to avoid it. If you start thinking it might be time to add more servers only when your servers are at 90% and performance is long gone, I can tell you it could be worse; I’ve witnessed it… Recently a client asked me when he should add a third server to his farm and I said “when your average load is about 45%”. He actually said something like “I’m not paying your company for clowns”… Then I asked what he was expecting. He was expecting about 80%… so I replied, “If you operate two servers at 80% and one of them crashes, the other will crash too, because it would have to handle 160% load. If you operate two servers at 45% and one crashes, the other can absorb the hit and still have some margin for a spike.”
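The arithmetic behind that answer is easy to sketch:

```python
# Failover headroom: if one of N servers at average load `avg_load` fails,
# the survivors must absorb its share. Can they do it without exceeding 100%?
def survivor_load(n_servers: int, avg_load: float) -> float:
    """Average load on each remaining server after one server fails."""
    total = n_servers * avg_load      # total work the farm is doing
    return total / (n_servers - 1)    # redistributed over the survivors

# Two servers at 45%: the survivor runs at 90%. Tight, but it stays up.
assert survivor_load(2, 0.45) == 0.90
# Two servers at 80%: the survivor would need to handle 160%, so it crashes too.
assert survivor_load(2, 0.80) > 1.0
```

The same function answers the client’s original question: with three servers, an average load of 45% means a single failure pushes the survivors to about 67.5%, still leaving margin for a spike.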

Planning changes (and sticking with the plan) is an act of maturity. Not everyone can do it; I have some trouble with it myself. It is so important that I don’t recommend any system go into production without a “grown-up” expansion plan at least in an advanced stage of development. If you decide it’s time only when users start to complain, it’s already too late! Your product’s reputation is already stained!

How to Make a Killer Presentation

I know that the internet is full of posts about this, and trust me, I’ve read a bunch of them and noticed that they’re all basically the same. They all focus on how and what to put on the slides, compare PPT and Keynote, and ultimately try to teach how crowd concentration works and how to retain your audience’s attention.

The fact is that having perfectly built slides, beautiful pictures, nice videos and demos does not guarantee a killer presentation. A killer presentation is one in which you expose whatever you want to expose knowing that the viewers are listening to that info, will remember it, and will pass it around and work with it.

The first and most important topic on how to make a killer presentation is to KNOW whatever you’re talking about. Ask yourself: can I explain what I’m trying to say to a six-year-old? If the answer is no, then you don’t know it yourself. Einstein said that. Steve Jobs is widely known as a presentation master and he also shared this notion: PPT/Keynote were banned from internal meetings, and if you presented something using a PPT instead of a whiteboard you would be interrupted and deemed incompetent. Steve only used slides for public presentations. He was also a firm believer that if you know something you don’t need slides to explain it!

The second item on our list is “Be Confident”. Second-guessing yourself on stage is a fatal mistake. When you enter the stage you have 70% of the crowd’s “trust”. This means they trust you, but there’s a silent voice in their heads telling them that things are never as good as they sound. If you’re confident and show that you really know what you’re talking about, they won’t question you. If you second-guess yourself, the little voice starts to get stronger and eventually it will take over, and you will be talking to a full-yet-empty room.

The third order of the day is “Be In Love”. Being confident doesn’t get you the other 30% of the crowd’s attention; showing passion for the topic on stage does. When you show that you really care about whatever you’re presenting, that you worked your butt off to be there and tell them whatever you’re saying, the crowd’s subconscious will notice “the love in the air” and will silence the inner voice.

The fourth tip is very simple but very often overlooked: “You are presenting, not the slides”. In a presentation the slides are not the focal point. The presenter IS! Slides are helpful, but you, the presenter, are the core of the presentation. It doesn’t matter if you’re presenting a new presentation platform: slides are “support material”. Steve was a master on this point: his slides were always very basic, most often containing only one number, or three words, or a single photo. The focus was always on Steve.

The dessert is: “Don’t fall in love with your own voice”. When preparing the presentation ask yourself: “What if I don’t talk about X?”. Always question the “weight” of each part of your presentation. I usually say that if your slide has more than 10 words, a lot can be shrunk. Think clean, think efficiency!

Is “crashing” the worst thing that could happen?

Here’s an interesting thought question from Mike Stall: what’s worse than crashing?

Mike provides the following list of crash scenarios, in order from best to worst:

  1. Application works as expected and never crashes.
  2. Application crashes due to rare bugs that nobody notices or cares about.
  3. Application crashes due to a commonly encountered bug.
  4. Application deadlocks and stops responding due to a common bug.
  5. Application crashes long after the original bug.
  6. Application causes data loss and/or corruption.

Mike points out that there’s a natural tension between…

  • failing immediately when your program encounters a problem, e.g. “fail fast”
  • attempting to recover from the failure state and proceed normally

The philosophy behind “fail fast” is best explained in Jim Shore’s article (pdf).

Some people recommend making your software robust by working around problems automatically. This results in the software “failing slowly.” The program continues working right after an error but fails in strange ways later on. A system that fails fast does exactly the opposite: when a problem occurs, it fails immediately and visibly. Failing fast is a nonintuitive technique: “failing immediately and visibly” sounds like it would make your software more fragile, but it actually makes it more robust. Bugs are easier to find and fix, so fewer go into production.

Fail fast is reasonable advice– if you’re a developer. What could possibly be easier than calling everything to a screeching halt the minute you get a byte of data you don’t like? Computers are spectacularly unforgiving, so it’s only natural for developers to reflect that masochism directly back on users.
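To make the contrast concrete, here is a minimal sketch of failing fast (the function and its validation rule are invented for illustration):

```python
# Fail fast: reject bad input loudly at the boundary, instead of letting it
# propagate and corrupt state far away from the original mistake.
def set_discount(percent: float) -> float:
    if not 0 <= percent <= 100:
        # The "fail slow" alternative would be to silently clamp the value
        # and let a bogus 250% "discount" surface weeks later as negative
        # invoices, far from the code that caused it.
        raise ValueError(f"discount must be between 0 and 100, got {percent!r}")
    return percent / 100
```

The error surfaces at the exact call site that passed the bad value, which is what makes the bug cheap to find and fix.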

But from the user’s perspective, failing fast isn’t helpful. To them, it’s just another meaningless error dialog preventing them from getting their work done. The best software never pesters users with meaningless, trivial errors– it’s more considerate than that. Unfortunately, attempting to help the user by fixing the error could make things worse by leading to subtle and catastrophic failures down the road. As you work your way down Mike’s list, the pain grows exponentially. For both developers and users. Troubleshooting #5 is a brutal death march, and by the time you get to #6– you’ve lost or corrupted user data– you’ll be lucky to have any users left to fix bugs for.

What’s interesting to me is that despite causing more than my share of software crashes and hardware bluescreens, I’ve never lost data, or had my data corrupted. You’d figure Murphy’s Law would force the worst possible outcome at least once a year, but it’s exceedingly rare in my experience. Maybe this is an encouraging sign for the current state of software engineering. Or maybe I’ve just been lucky.

So what can we, as software developers, do about this? If we adopt a “fail as often and as obnoxiously as possible” strategy, we’ve clearly failed our users. But if we corrupt or lose our users’ data through misguided attempts to prevent error messages– if we fail to treat our users’ data as sacrosanct– we’ve also failed our users. You have to do both at once:

  1. If you can safely fix the problem, you should. Take responsibility for your program. Don’t slouch through the easy way out by placing the burden for dealing with every problem squarely on your users.
  2. If you can’t safely fix the problem, always err on the side of protecting the user’s data. Protecting the user’s data is a sacred trust. If you harm that basic contract of trust between the user and your program, you’re hurting not only your credibility– but the credibility of the entire software industry as a whole. Once they’ve been burned by data loss or corruption, users don’t soon forgive.
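As one concrete sketch of point 2, a file save can be written so that a crash mid-write can never destroy the user’s existing data (the helper below is my own illustration, not from any particular library):

```python
import os
import tempfile

def safe_save(path: str, data: bytes) -> None:
    """Write `data` to `path` without ever leaving a half-written file.

    The new content goes to a temporary file first; only after it is fully
    flushed to disk does an atomic rename replace the original. If anything
    fails mid-write, the user's old file is untouched.
    """
    dirname = os.path.dirname(os.path.abspath(path))
    fd, tmp_path = tempfile.mkstemp(dir=dirname)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())    # make sure the bytes hit the disk
        os.replace(tmp_path, path)  # atomic on both POSIX and Windows
    except BaseException:
        os.unlink(tmp_path)         # fail fast, but leave the old data intact
        raise
```

This is both halves of the advice at once: the error is still raised immediately and visibly, but the user’s data survives it.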

The guiding principle here, as always, should be to respect your users. Do the right thing.

Top Project Manager Practice: Project Postmortem

You may think you’ve completed a software project, but you aren’t truly finished until you’ve conducted a project postmortem. Mike Gunderloy calls the postmortem an essential tool for the savvy developer:

The difference between average programmers and excellent developers is not a matter of knowing the latest language or buzzword-laden technique. Rather, it can boil down to something as simple as not making the same mistakes over and over again. Fortunately, there’s a powerful tool that any developer can use to help learn from the past: the project postmortem.

There’s no shortage of checklists out there offering guidance on conducting your project postmortem. My advice is a bit more sanguine: I don’t think it matters how you conduct the postmortem, as long as you do it. Most shops are far too busy rushing ahead to the next project to spend any time thinking about how they could improve and refine their software development process. And then they wonder why their new project suffers from all the same problems as their previous project.

Steve Pavlina offers a developer’s perspective on postmortems:

The goal of a postmortem is to draw meaningful conclusions to help you learn from your past successes and failures. Despite its grim-sounding name, a postmortem can be an extremely productive method of improving your development practices.

Game development is some of the most difficult software development on the planet. It’s a veritable pressure cooker, which also makes it a gold mine of project postmortem knowledge. I’m fascinated with the Gamasutra postmortems, but I didn’t realize that all the Gamasutra postmortems had been consolidated into a book: Postmortems from Game Developer: Insights from the Developers of Unreal Tournament, Black and White, Age of Empires, and Other Top-Selling Games (Paperback). Ordered. Also, if you’re too lazy for all that pesky reading, Noel Llopis condensed all the commonalities from the Game Developer magazine postmortems.

Geoff Keighley’s Behind the Games series, while not quite postmortems, are in the same vein. The early entries in the series are amazing pieces of investigative reporting on some of the most notorious software development projects in the game industry. Here are a few of my favorites:

Most of the marquee games highlighted here suffered massive schedule slips and development delays. It’s a testament to the difficulty of writing A-list games. I can’t wait to read The Final Hours of Duke Nukem Forever, which was in development for over 15 years (so it must be a massive doc). Its vaporware status is legendary— here’s a list of notable world events that have occurred since DNF began development.

Don’t make the mistake of omitting the project postmortem from your project. If you don’t conduct project postmortems, then how can you possibly know what you’re doing right– and more importantly, how to avoid making the same exact mistakes on your next project?

Reinvention of the Real Programmer

In classical computer science the Real Programmer (a.k.a. Hardcore Programmer) is a mythical man who writes code almost on bare metal. The highest-level language he uses is ANSI C, and he debugs code in hexadecimal, and things like that…

I don’t know anyone who fits that description, but after investigating and (mostly) philosophizing I came up with a real-world definition of the Real Programmer:

  • The Real Programmer leaves no broken windows behind. Ever.
  • The Real Programmer works with his project manager, not for his project manager.
  • The Real Programmer does not complain about anything. Ever.
  • The Real Programmer knows that fixing bugs is more important than implementing new functionalities. (See first point.)
  • The Real Programmer works because he wants to, and when he wants to.
  • The Real Programmer uses an IDE and high-level languages, but he’s not afraid to go back to Assembly if necessary.
  • The Real Programmer does not re-invent the wheel. The Real Programmer re-engineers and re-implements the entire car.
  • The Real Programmer documents his code, in a way that a six-year-old can understand it.
  • The Real Programmer knows that the currency of computers is performance. He’s always striving for the best performance, so he can afford to pay for a beautiful interface and other items.
  • The Real Programmer does not TDD. The Real Programmer knows that only making the “bare minimum” is not enough to get to Carnegie Hall.
  • The Real Programmer delivers, but he’s not afraid to delay milestones if he feels that his work is not finished yet.
  • The Real Programmer does not fear tight schedules and rejoices at phrases like “The client changed his mind completely”.
  • The Real Programmer knows how to separate trendy frameworks from real groundbreaking frameworks.
  • The Real Programmer doesn’t wear suits.
  • The Real Programmer’s computer is not a desktop nor a server. It’s a workstation with more than 1 monitor.
  • The Real Programmer only uses headphones when he’s working in a noisy environment.
  • The Real Programmer is always on top of the situation. He’s calm and never loses his temper. Like a Zen Buddhist master.

Think I forgot something? Please feel free to comment!

Leading by Example

It takes discipline for development teams to benefit from modern software engineering conventions. If your team doesn’t have the right kind of engineering discipline, the tools and processes you use are almost irrelevant. I advocated as much in Discipline Makes Strong Developers.

But some commenters were understandably apprehensive about the idea of having a Senior Drill Instructor Gunnery Sergeant Hartman on their team, enforcing engineering discipline.

Scene from Full Metal Jacket, Gunnery Sergeant Hartman Pointing
You little scumbag! I’ve got your name! I’ve got your ass! You will not laugh. You will not cry. You will learn by the numbers. I will teach you.

Cajoling and berating your coworkers into compliance isn’t an effective motivational technique for software developers, at least not in my experience. If you want to pull your team up to a higher level of engineering, you need a leader, not an enforcer. The goal isn’t to brainwash everyone you work with, but to negotiate commonly acceptable standards with your peers.

I thought Dennis Forbes did an outstanding job of summarizing effective leadership strategies in his post effectively integrating into software development teams. He opens with a hypothetical (and if I know Dennis, probably autobiographical) email that describes the pitfalls of being perceived as an enforcer:

I was recently brought in to help a software team get a product out the door, with a mandate of helping with some web app code. I’ve been trying my best to integrate with the team, trying to earn some credibility and respect by making myself useful. I’ve been forwarding various Joel On Software essays to all, recommending that the office stock up on Code Complete, Peopleware, and The Mythical Man Month, and I make an effort to point out everything I believe could be done better. I regularly browse through the source repository to find ways that other members could be working better.

When other developers ask for my help, I try to maximize my input by broadening my assistance to cover the way they’re developing, how they could improve their typing form, what naming standard they use, to advocate a better code editing tool, and to give my educated final word regarding the whole stored procedure/dynamic SQL debate.

Despite all of this, I keep facing resistance, and I don’t think the team likes me very much. Many of my suggestions aren’t adopted, and several people have replied with what I suspect is thinly veiled sarcasm.

What’s going wrong?

I’m sure we’ve all worked with someone like this. Maybe we were even that person ourselves. Even with the best of intentions, and armed with the top books on the reading list, you’ll end up like Gunnery Sergeant Hartman ultimately did: gunned down by your own team.

At the end of his post, Dennis provides a thoughtful summary of how to avoid being shot by your own team:

Be humble. Always first presume that you’re wrong. While developers do make mistakes, and as a new hire you should certainly assist others in catching and correcting mistakes, you should try to ensure that you’re certain of your observation before proudly declaring your find. It is enormously damaging to your credibility when you cry wolf.

Be discreet with constructive criticism. A developer is much more likely to accept casual suggestions and quiet leading questions than they are if the same is emailed to the entire group. Widening the audience is more likely to yield defensiveness and retribution. The team is always considering what your motives are, and you will be called on it and exiled if you degrade the work of others for self-promotion.

The best way to earn credibility and respect is through hard work and real results. Cheap, superficial substitutes — like best practice emails sent to all, or passing comments about how great it would be to implement some silver bullet — won’t yield the same effect, and are more easily neutralized.

Actions speak louder than words. Simply talking about implementing a team blog, or a wiki, or a new source control mechanism, or a new technology, is cheap. Everyone knows that you’re just trying to claim ownership of the idea when someone eventually actually does the hard work of doing it, and they’ll detest you for it. If you want to propose something, put some elbow grease behind it. For instance, demonstrate the foundations of a team blog, including preliminary usage guidelines, and a demonstration of all of the supporting technologies. This doesn’t guarantee that the initiative will fly, and the effort might be for naught, but the team will identify that there’s actual motivation and effort behind it, rather than an attempt at some easy points.

There is no one-size-fits-all advice. Not every application is a high-volume e-commerce site. Just because that’s the most common best-practices subject doesn’t mean that it’s even remotely the best design philosophies for the group you’re joining.

What I like about Dennis’ advice is that it focuses squarely on action and results. It correlates very highly with what I’ve personally observed to work: the most effective kind of technical leadership is leading by example. All too often there are no development leads with the time and authority to enforce, even if they wanted to, so actions become the only currency.

But actions alone may not be enough. You can spend a lifetime learning how to lead and still not get it right. Gerald Weinberg’s book Becoming a Technical Leader: an Organic Problem-Solving Approach provides a much deeper analysis of leadership that’s specific to the profession of software engineering.

Within the first few chapters, Weinberg cuts to the very heart of the problem with both Gunnery Sergeant Hartman’s and Dennis Forbes’ hypothetical motivational techniques:

How do we want to be helped? I don’t want to be helped out of pity. I don’t want to be helped out of selfishness. These are situations in which the helper really cares nothing about me as a human being. What I would have others do unto me is to love me– not romantic love, of course, but true human caring. So, if you want to motivate people, either directly or by creating a helping environment, you must first convince them that you care about them, and the only sure way to convince them is by actually caring. People may be fooled about caring, but not for long. That’s why the second version of the Golden Rule says, “Love thy neighbor”, not “Pretend you love thy neighbor.” Don’t fool yourself. If you don’t really care about the people whom you lead, you’ll never succeed as their leader.

Weinberg’s Becoming a Technical Leader is truly a classic. It is, quite simply, the thinking geek’s How to Win Friends and Influence People. So much of leadership is learning to give a damn about other people, something that us programmers are notoriously bad at. We may love our machines and our code, but our teammates prove much more complicated.