Tech Trained Monkey

Everyday Problem Solvers

Monthly Archives: April 2012

Logging and software development maturity

According to Wikipedia maturity is:

a psychological term used to indicate how a person responds to the circumstances or environment in an appropriate manner. This response is generally learned rather than instinctive, and is not determined by one’s age.

It is a widely known and accepted fact that writing logs is a very good thing, since it helps you find out what happened at a given moment in time. It would be rule number one in the universal developer handbook, if such a thing existed… But again and again I find a lot of programs that just don’t do it.

I’m used to clients calling to say that something is not right with the product, but it makes me cry when I ask for the logs and they say there are none, and I want to kill the dev who didn’t write any. I always find the bastard. If by any chance it is not an intern, oh help me lord, some blood will be shed!

Logging came to me as a very instinctive thing. My first programs didn’t write any logs, but of course, I was the only one using them. But as I got better and better I felt the need to better control my “world”. I was scared that a problem would occur and I wouldn’t be able to know about it. In order to really evolve and grow as a developer one needs to build bigger, more complex programs and, more importantly, learn from one’s own mistakes. Logging is perfect for the learning part, because you don’t need to bother the users with questions and other stuff (if you logged all the information you needed).

One might start to wonder what this has to do with maturity. Just logging is not enough. Logging for the sake of it does not help. You need consistent, useful, categorized and, most of all (I think), easy-to-track log messages. It takes real maturity to know how to log. You should log everything that happens, but each event must be logged in a particular way. Logging user interactions the same way you log Null Reference Exceptions is not very helpful. Remember that logging provides information, and information is gold!

A lot of people do write a bunch of logs. They actually log the entire thing. Take World of Warcraft for example: everything you do in the entire game can be traced! They can trace every little critter that you killed. They know which weapon you used. They log everything! But, as I said before, just writing messages is not enough. You have to write them in an understandable, easy-to-track way. If a developer on your team can’t trace back, for example, a user’s actions, such as buttons clicked, radio button choices, on-screen info, check-boxes selected, etc., your log is not good. It might help you find and solve a problem, but it could do a lot better. I recently learned an important lesson.

I must say that I had never had a problem with software in a production environment (yes, my programs have bugs, but always very soon after deploy, so I was always close by to help). Thank god, none of the programs I wrote ever gave me serious headaches. Until a couple of weeks back. The deploy went fine and everything was peachy. Then one beautiful morning the client called me and reported an issue. As always I asked for the logs, he sent them, and then my world fell apart. I could not trace what was going on. Everything was there, but I could not set a time-line for the events. Due to multiple front-ends and multiple web service servers it was very, VERY hard to track what happened to a user. I had an especially hard time figuring out which messages in the web service logs belonged to whom. It was a nightmare. I wanted to cry. But I managed to do it and I matured a bit… no, I matured a byte… sorry for the pun…

So now I’m developing a new logging lib and a log-reader tool (feel free to “borrow”) that is planned to solve the multi-user/thread/server scenarios. The log lib we use today is perfect for development and for single-user/thread/server scenarios. You can easily “follow” what happened and when, what came after what and so on. But when multi-threading, multiple users and async operations come out to play, things get ugly. Since the current lib only logs the time and severity of the event, it’s very hard to track the continuous actions of a user, or even the path of a particular thread.

The lib itself is not enough though. The real magic is the log reader. The reader creates a visual path of the messages. It literally creates a “fork” for multithreading, visually showing parallel operations and stuff! It’s getting very cool; when it’s ready I’ll post the source code… but now back to maturity…

I must disclose that I’m not the most seasoned developer out there… for Christ’s sake, I’m only 24, but this much I can say: write easy-to-read, easy-to-understand log messages. Your kids will be thankful! Ok, maybe not, but you will be when you need to know what’s really going on!

When is it time to change/move/replan my database?

Quite a while ago now, a “Database and DBMS” teacher assigned a project in which we were supposed to research something that he would not talk about in class… some people gathered in groups… I did mine alone… I don’t like doing this kind of thing with other people because I hate it when the idiot in the group says something like “Oh you’re wrong! I misunderstood the teacher, didn’t come to class, never worked with that and of course I didn’t study jack-shit about what you just said, but I know you’re wrong” (more about that in other posts)… I’d rather do it alone, and sell the “extra spots” to people who didn’t have the time to do the paper… oh my god… back to the point… I did my paper about indexing and also prepared a killer presentation.

When I say that, some people gasp at the fact that we had 2 semesters (1 year) of classes focused on databases and indexing was not even mentioned… luckily for me I don’t rely on college to teach me anything… I learn on my own and go there to get a degree (again, more on that later)…

In the paper I developed the idea of a “threshold-line”: an imaginary point after which the time taken for a certain query to return is so great that the results no longer resolve the issue/demand. To illustrate, let’s imagine that you have to go to a party tonight and you have to pick up your date beforehand. You have 2 cars: one is a convertible, the other is not. What good is tomorrow morning’s weather report in helping you decide which car to use tonight? Now try asking Google what good a search result is when it takes 0.7 sec too long…

I also worked on the idea that when you re-index your database your “threshold-points” move up, so you can’t re-index when you’re already at the limit, because you will burst your quota! The idea is to plan and determine the exact point at which you can re-index while still staying inside your “comfort zone”. One line of thought that I did not expose in my presentation is that eventually re-indexing won’t solve your performance problem.
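As a toy sketch of the idea (the numbers and names below are invented for illustration, not taken from the paper): you decide to re-index while you still have headroom, not once the average query time has already crossed the threshold-line.

```python
THRESHOLD = 0.5     # seconds: a hypothetical "threshold-line" for this system
COMFORT_ZONE = 0.7  # plan the re-index once 70% of the headroom is used up

def time_to_reindex(avg_query_seconds: float) -> bool:
    """True once the average query time leaves the comfort zone.

    Waiting until avg_query_seconds >= THRESHOLD is too late: the
    re-index itself adds load, bursting the quota described above.
    """
    return avg_query_seconds >= THRESHOLD * COMFORT_ZONE
```

With these made-up numbers, 0.2 s average query time means plenty of headroom, while 0.4 s means you should already be re-indexing, even though the threshold-line itself has not been crossed yet.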

A day will come when your single database server won’t be able, even with a fully indexed file, to handle all the queries. At this point a natural answer comes to mind: add more servers. But my point is: with 1 server you can handle X requests. With 2 servers you can’t handle 2X requests. Even very well architected/engineered/implemented programs can’t scale like that. The main point is that some applications are not so well built (architected/engineered/implemented), so when necessity comes, who has your back?

Let’s face it: planning might be fun for some, but it’s painful for most of us. We are simply unable to predict the future, and most of our managers (at least mine) think that gathering data today to generate a projection for anything other than sales is just wasted time. So, when we get bottlenecked, we’re not even close to getting that new server… We have to prove that we have a problem today that will be solved by buying/renting/leasing a new server. The thing most managers don’t realize is that an expansion plan is like a disaster-recovery plan: “You will only use it if you need it. You might give it a try in a staging environment. But you won’t really use it unless necessary.”

An important part of the plan that most people forget to study is “When should I start working?”. If your plan says to call the maintenance guy once the pipes are broken, rather than when you realize that the pressure is above normal, I must say that your plan could be better. In project management we call this risk mitigation: identifying a possible problem and working on that item to avoid it. If you only start thinking that it might be time to add more servers when your servers are at 90% load and performance is long gone, I can tell you it could be worse; I’ve witnessed it… Recently a client asked me when he should add a third server to his farm and I said “When your average load is about 45%”. He really did say something like “I’m not paying your company for clowns”… Then I asked what he was expecting. He was expecting about 80%… so I replied “If you run 2 servers at 80% and 1 of them crashes, the other will crash too, because it will have to handle 160% load. If you operate 2 servers at 45% and one crashes, the other can absorb the hit and still have some margin for a spike.”
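The arithmetic behind that answer is worth making explicit. A minimal sketch (the function name is mine):

```python
def load_after_failure(num_servers: int, avg_load_pct: float) -> float:
    """Per-server load when one of num_servers crashes and the
    survivors absorb its traffic evenly."""
    total_work = num_servers * avg_load_pct
    return total_work / (num_servers - 1)

# Two servers at 80% average: the survivor must carry 160% -- it crashes too.
assert load_after_failure(2, 80.0) == 160.0
# Two servers at 45% average: the survivor runs at 90%, with a little
# margin left for a spike.
assert load_after_failure(2, 45.0) == 90.0
```

The same formula tells you that at three servers the picture relaxes: three machines at 60% leave each survivor at 90% after one failure, which is why the “right” average load depends on farm size, not on a fixed magic number.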

Planning changes (and sticking with the plan) is an act of maturity. Not everyone can do it. I have some problems with that myself. It is so important that I don’t recommend any system go into production without a “growth” expectation plan at least in an advanced stage of development. If you decide that it’s time only when users start to complain, it’s already too late! Your product’s reputation is already stained!

How to Make a Killer Presentation

I know that the internet is full of posts about this, and trust me, I’ve read a bunch of them, and noticed that they’re all basically the same. They all focus on how and what to put on the slides; they compare PPT and Keynote and ultimately try to teach how crowd concentration works and how to retain your audience’s attention.

The fact is that having perfectly built slides, beautiful pictures, nice videos and demos does not guarantee a killer presentation. A killer presentation is one where you expose whatever you want to expose knowing that the viewers are listening to that info, will remember it, and will pass it around.

The first and most important topic in how to make a killer presentation is to KNOW whatever you’re talking about. Ask yourself: can I explain what I’m trying to say to a 6-year-old? If the answer is no, then you don’t know it yourself; Einstein said that. Steve Jobs is widely known as a presentation master and he also shared this notion. PPT/Keynote were banned from internal meetings; if you presented something using a PPT instead of a whiteboard you would be interrupted and deemed incompetent. Steve only used slides for public presentations. He was also a firm believer that if you know something, you don’t need slides to explain it!

The second item on our list is “Be Confident”. Second-guessing yourself on stage is a fatal mistake. When you enter the stage you have 70% of the crowd’s “trust”. This means they trust you, but there’s a silent voice in their heads telling them that things are never as good as they sound. If you’re confident and show that you do know what you’re talking about, they won’t question you. If you second-guess yourself, the little voice starts to get stronger and eventually it will take over, and you will be talking to a full-yet-empty room.

The third order of the day is “Be In Love”. Being confident doesn’t get you the other 30% of the crowd’s attention. Showing passion for the topic on stage does. When you show that you really care about whatever you’re presenting, that you worked your butt off to be there and tell them whatever you have to say, the crowd’s subconscious will notice “the love in the air” and will silence the inner voice.

The fourth tip is very simple but very often overlooked: “You are presenting, not the slides”. In a presentation the slides are not the focal point. The presenter IS! Slides are helpful, but you, the presenter, are the core of the presentation. It doesn’t matter if you’re presenting a new presentation platform. Slides are “support material”. Steve was a master on this point: his slides were always very basic; most times they had only one number, or maybe three words, or even just a photo. The focus was always on Steve.

The dessert is: “Don’t fall in love with your own voice”. When preparing the presentation, ask yourself: “What if I don’t talk about X?”. Always question the “weight” of your presentation. I usually say that if your slide has more than 10 words, a lot can be shrunk. Think clean, think efficiency!

Is “crashing” the worst thing that could happen?

Here’s an interesting thought question from Mike Stall: what’s worse than crashing?

Mike provides the following list of crash scenarios, in order from best to worst:

  1. Application works as expected and never crashes.
  2. Application crashes due to rare bugs that nobody notices or cares about.
  3. Application crashes due to a commonly encountered bug.
  4. Application deadlocks and stops responding due to a common bug.
  5. Application crashes long after the original bug.
  6. Application causes data loss and/or corruption.

Mike points out that there’s a natural tension between…

  • failing immediately when your program encounters a problem, e.g. “fail fast”
  • attempting to recover from the failure state and proceed normally

The philosophy behind “fail fast” is best explained in Jim Shore’s article (pdf).

Some people recommend making your software robust by working around problems automatically. This results in the software “failing slowly.” The program continues working right after an error but fails in strange ways later on. A system that fails fast does exactly the opposite: when a problem occurs, it fails immediately and visibly. Failing fast is a nonintuitive technique: “failing immediately and visibly” sounds like it would make your software more fragile, but it actually makes it more robust. Bugs are easier to find and fix, so fewer go into production.
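A tiny contrast between the two philosophies, in Python (my own example, not from Shore’s article):

```python
def parse_age_slowly(value: str) -> int:
    """The "robust" version: swallow bad input and limp on with a default.
    The wrong value surfaces much later, far from its source."""
    try:
        return int(value)
    except ValueError:
        return 0  # silently wrong: a garbled age becomes a newborn

def parse_age_fast(value: str) -> int:
    """The fail-fast version: stop immediately and visibly at the bad input."""
    if not value.strip().isdigit():
        raise ValueError(f"expected a non-negative integer age, got {value!r}")
    return int(value)
```

The slow version never shows an error, but months later someone wonders why a report claims half the customers are infants. The fast version blows up at the exact line where the bad data entered the system.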

Fail fast is reasonable advice– if you’re a developer. What could possibly be easier than calling everything to a screeching halt the minute you get a byte of data you don’t like? Computers are spectacularly unforgiving, so it’s only natural for developers to reflect that masochism directly back on users.

But from the user’s perspective, failing fast isn’t helpful. To them, it’s just another meaningless error dialog preventing them from getting their work done. The best software never pesters users with meaningless, trivial errors– it’s more considerate than that. Unfortunately, attempting to help the user by fixing the error could make things worse by leading to subtle and catastrophic failures down the road. As you work your way down Mike’s list, the pain grows exponentially. For both developers and users. Troubleshooting #5 is a brutal death march, and by the time you get to #6– you’ve lost or corrupted user data– you’ll be lucky to have any users left to fix bugs for.

What’s interesting to me is that despite causing more than my share of software crashes and hardware bluescreens, I’ve never lost data, or had my data corrupted. You’d figure Murphy’s Law would force the worst possible outcome at least once a year, but it’s exceedingly rare in my experience. Maybe this is an encouraging sign for the current state of software engineering. Or maybe I’ve just been lucky.

So what can we, as software developers, do about this? If we adopt a “fail as often and as obnoxiously as possible” strategy, we’ve clearly failed our users. But if we corrupt or lose our users’ data through misguided attempts to prevent error messages– if we fail to treat our users’ data as sacrosanct– we’ve also failed our users. You have to do both at once:

  1. If you can safely fix the problem, you should. Take responsibility for your program. Don’t slouch through the easy way out by placing the burden for dealing with every problem squarely on your users.
  2. If you can’t safely fix the problem, always err on the side of protecting the user’s data. Protecting the user’s data is a sacred trust. If you harm that basic contract of trust between the user and your program, you’re hurting not only your credibility– but the credibility of the entire software industry as a whole. Once they’ve been burned by data loss or corruption, users don’t soon forgive.

The guiding principle here, as always, should be to respect your users. Do the right thing.

Top Project Manager Practice: Project Postmortem

You may think you’ve completed a software project, but you aren’t truly finished until you’ve conducted a project postmortem. Mike Gunderloy calls the postmortem an essential tool for the savvy developer:

The difference between average programmers and excellent developers is not a matter of knowing the latest language or buzzword-laden technique. Rather, it can boil down to something as simple as not making the same mistakes over and over again. Fortunately, there’s a powerful tool that any developer can use to help learn from the past: the project postmortem.

There’s no shortage of checklists out there offering guidance on conducting your project postmortem. My advice is a bit more sanguine: I don’t think it matters how you conduct the postmortem, as long as you do it. Most shops are far too busy rushing ahead to the next project to spend any time thinking about how they could improve and refine their software development process. And then they wonder why their new project suffers from all the same problems as their previous project.

Steve Pavlina offers a developer’s perspective on postmortems:

The goal of a postmortem is to draw meaningful conclusions to help you learn from your past successes and failures. Despite its grim-sounding name, a postmortem can be an extremely productive method of improving your development practices.

Game development is some of the most difficult software development on the planet. It’s a veritable pressure cooker, which also makes it a gold mine of project postmortem knowledge. I’m fascinated with the Gamasutra postmortems, but I didn’t realize that all the Gamasutra postmortems had been consolidated into a book: Postmortems from Game Developer: Insights from the Developers of Unreal Tournament, Black and White, Age of Empires, and Other Top-Selling Games (Paperback). Ordered. Also, if you’re too lazy for all that pesky reading, Noel Llopis condensed all the commonalities from the Game Developer magazine postmortems.

Geoff Keighley’s Behind the Games series, while not quite postmortems, are in the same vein. The early entries in the series are amazing pieces of investigative reporting on some of the most notorious software development projects in the game industry. Here are a few of my favorites:

Most of the marquee games highlighted here suffered massive schedule slips and development delays. It’s testament to the difficulty of writing A-list games. I can’t wait to read The Final Hours of Duke Nukem Forever, which was in development for over 15 years (so it must be a massive doc). Its vaporware status is legendary— here’s a list of notable world events that have occurred since DNF began development.

Don’t make the mistake of omitting the project postmortem from your project. If you don’t conduct project postmortems, then how can you possibly know what you’re doing right– and more importantly, how to avoid making the same exact mistakes on your next project?

Reinvention of the Real Programmer

In classical computer science lore the Real Programmer (a.k.a. Hardcore Programmer) is a mythical man who writes code almost on bare metal. The highest-level language he uses is ANSI C, and he debugs code in hexadecimal, and things like that…

I don’t know anyone who fits that description, but by investigating and (mostly) philosophizing I came up with a real-world definition of the Real Programmer:

  • The Real Programmer leaves no broken windows behind. Ever.
  • The Real Programmer works with his project manager, not for his project manager.
  • The Real Programmer does not complain about anything. Ever.
  • The Real Programmer knows that fixing bugs is more important than implementing new functionalities. (See first point.)
  • The Real Programmer works because he wants to, and when he wants to.
  • The Real Programmer uses an IDE and high-level languages, but he’s not afraid to go back to Assembly if necessary.
  • The Real Programmer does not re-invent the wheel. The Real Programmer re-engineers and re-implements the entire car.
  • The Real Programmer documents his code, in a way that a six-year-old can understand it.
  • The Real Programmer knows that the currency of computers is performance. He’s always striving for the best performance so he can pay for a beautiful interface and other items.
  • The Real Programmer does not TDD. The Real Programmer knows that only making the “bare minimum” is not enough to get to Carnegie Hall.
  • The Real Programmer delivers, but he’s not afraid to delay milestones if he feels that his work is not finished yet.
  • The Real Programmer does not fear tight schedules and rejoices at phrases like “The client changed his mind completely”.
  • The Real Programmer knows how to separate trendy frameworks from real groundbreaking frameworks.
  • The Real Programmer doesn’t wear suits.
  • The Real Programmer’s computer is not a desktop nor a server. It’s a workstation with more than 1 monitor.
  • The Real Programmer only uses headphones when he’s working in a noisy environment.
  • The Real Programmer is always on top of the situation. He’s calm and never loses his temper. Like a Zen Buddhist master.

Think I forgot something? Please feel free to comment!

Leading by Example

It takes discipline for development teams to benefit from modern software engineering conventions. If your team doesn’t have the right kind of engineering discipline, the tools and processes you use are almost irrelevant. I advocated as much in Discipline Makes Strong Developers.

But some commenters were understandably apprehensive about the idea of having a Senior Drill Instructor Gunnery Sergeant Hartman on their team, enforcing engineering discipline.

Scene from Full Metal Jacket, Gunnery Sergeant Hartman Pointing
You little scumbag! I’ve got your name! I’ve got your ass! You will not laugh. You will not cry. You will learn by the numbers. I will teach you.

Cajoling and berating your coworkers into compliance isn’t an effective motivational technique for software developers, at least not in my experience. If you want to pull your team up to a higher level of engineering, you need a leader, not an enforcer. The goal isn’t to brainwash everyone you work with, but to negotiate commonly acceptable standards with your peers.

I thought Dennis Forbes did an outstanding job of summarizing effective leadership strategies in his post “effectively integrating into software development teams”. He opens with a hypothetical (and if I know Dennis, probably autobiographical) email that describes the pitfalls of being perceived as an enforcer:

I was recently brought in to help a software team get a product out the door, with a mandate of helping with some web app code. I’ve been trying my best to integrate with the team, trying to earn some credibility and respect by making myself useful. I’ve been forwarding various Joel On Software essays to all, recommending that the office stock up on Code Complete, Peopleware, and The Mythical Man Month, and I make an effort to point out everything I believe could be done better. I regularly browse through the source repository to find ways that other members could be working better.

When other developers ask for my help, I try to maximize my input by broadening my assistance to cover the way they’re developing, how they could improve their typing form, what naming standard they use, to advocate a better code editing tool, and to give my educated final word regarding the whole stored procedure/dynamic SQL debate.

Despite all of this, I keep facing resistance, and I don’t think the team likes me very much. Many of my suggestions aren’t adopted, and several people have replied with what I suspect is thinly veiled sarcasm.

What’s going wrong?

I’m sure we’ve all worked with someone like this. Maybe we were even that person ourselves. Even with the best of intentions, and armed with the top books on the reading list, you’ll end up like Gunnery Sergeant Hartman ultimately did: gunned down by your own team.

At the end of his post, Dennis provides a thoughtful summary of how to avoid being shot by your own team:

Be humble. Always first presume that you’re wrong. While developers do make mistakes, and as a new hire you should certainly assist others in catching and correcting mistakes, you should try to ensure that you’re certain of your observation before proudly declaring your find. It is enormously damaging to your credibility when you cry wolf.

Be discreet with constructive criticism. A developer is much more likely to accept casual suggestions and quiet leading questions than they are if the same is emailed to the entire group. Widening the audience is more likely to yield defensiveness and retribution. The team is always considering what your motives are, and you will be called on it and exiled if you degrade the work of others for self-promotion.

The best way to earn credibility and respect is through hard work and real results. Cheap, superficial substitutes — like best practice emails sent to all, or passing comments about how great it would be to implement some silver bullet — won’t yield the same effect, and are more easily neutralized.

Actions speak louder than words. Simply talking about implementing a team blog, or a wiki, or a new source control mechanism, or a new technology, is cheap. Everyone knows that you’re just trying to claim ownership of the idea when someone eventually actually does the hard work of doing it, and they’ll detest you for it. If you want to propose something, put some elbow grease behind it. For instance, demonstrate the foundations of a team blog, including preliminary usage guidelines, and a demonstration of all of the supporting technologies. This doesn’t guarantee that the initiative will fly, and the effort might be for naught, but the team will see that there’s actual motivation and effort behind it, rather than an attempt at some easy points.

There is no one-size-fits-all advice. Not every application is a high-volume e-commerce site. Just because that’s the most common best-practices subject doesn’t mean that it’s even remotely the best design philosophy for the group you’re joining.

What I like about Dennis’ advice is that it focuses squarely on action and results. It correlates very highly with what I’ve personally observed to work: the most effective kind of technical leadership is leading by example. All too often there are no development leads with the time and authority to enforce, even if they wanted to, so actions become the only currency.

But actions alone may not be enough. You can spend a lifetime learning how to lead and still not get it right. Gerald Weinberg’s book Becoming a Technical Leader: an Organic Problem-Solving Approach provides a much deeper analysis of leadership that’s specific to the profession of software engineering.

Within the first few chapters, Weinberg cuts to the very heart of the problem with both Gunnery Sergeant Hartman’s and Dennis Forbes’ hypothetical motivational techniques:

How do we want to be helped? I don’t want to be helped out of pity. I don’t want to be helped out of selfishness. These are situations in which the helper really cares nothing about me as a human being. What I would have others do unto me is to love me– not romantic love, of course, but true human caring.

So, if you want to motivate people, either directly or by creating a helping environment, you must first convince them that you care about them, and the only sure way to convince them is by actually caring. People may be fooled about caring, but not for long. That’s why the second version of the Golden Rule says, “Love thy neighbor”, not “Pretend you love thy neighbor.” Don’t fool yourself. If you don’t really care about the people whom you lead, you’ll never succeed as their leader.

Weinberg’s Becoming a Technical Leader is truly a classic. It is, quite simply, the thinking geek’s How to Win Friends and Influence People. So much of leadership is learning to give a damn about other people, something that us programmers are notoriously bad at. We may love our machines and our code, but our teammates prove much more complicated.

Is the ability to use a search engine more valuable than memory itself?

Einstein still amazes me… Quite some time ago (I could not find a precise date) he said

Never memorize something that you can look up. – Einstein

and it got me thinking: although memory increases our productivity (asking Google everything is very time-consuming), is it really necessary? I know that this is a highly radical idea, but today you can easily “load” all the specific info you need to accomplish a task in a couple of minutes! How many times have you caught yourself checking Google for the semantics of a given programming language? I find myself googling LINQ semantics all the time…

There is no doubt that it’s extremely rare to come across a problem that has not yet been solved. Most already-solved problems have a How To posted on a blog or somewhere similar (ok, you may not find a how-to on building a nuke…). The difficulty lies in finding the place where the How To is.

It’s a fact that some people know how to make better Google searches than others. My mother, for example, usually queries Google with something like: “How can I make a delicious dish using X and fresh tomatoes?”… I would query “X tomatoes recipes” instead… and probably get “better” answers. But the point here is: she doesn’t query “How can I tell that the tomatoes are fresh?” or “How can I tell that the potatoes are ready?”… she has memorized that already…

I know what you’re thinking. You’re thinking that a lot of the time you can’t choose what to remember and what to forget, and sometimes, no matter how hard you try, you just can’t remember a few simple things. I’m not arguing about that; I know that if you search for something over and over again, it is very likely that at some point you will just memorize it, and there’s nothing you can do about it. I just find it very odd that some people are really focused on memorizing stuff. I know a couple of doctors who actually say that if you know where to look for information and how to cross-reference it, a 15-year-old could diagnose almost all diseases.

If information is available and easy to find, why do people try so hard to memorize it? Learn how to make it more accessible instead! Maybe it’s a cultural thing… my mother’s education, and probably yours too, was based on memorization! Subjects such as history and geography can be really tricky… Few schools notice that understanding why something happened is more important than knowing when it did! To hell with schools, very few people get it! It’s like math formulas: if you don’t know when and how to use them, it doesn’t matter how many you know.

Google can surely help us remember formulas in a few seconds. But it takes a lot more time to teach us how to apply and use them… if it ever manages to do so…

Internet and Intelligence

Intelligence is a very controversial topic. There is no official definition that is accepted worldwide, but among all the definitions, scholars usually adopt one of these two:

A very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience. It is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings—”catching on,” “making sense” of things, or “figuring out” what to do. – Mainstream Science on Intelligence


Individuals differ from one another in their ability to understand complex ideas, to adapt effectively to the environment, to learn from experience, to engage in various forms of reasoning, to overcome obstacles by taking thought. Although these individual differences can be substantial, they are never entirely consistent: a given person’s intellectual performance will vary on different occasions, in different domains, as judged by different criteria. Concepts of “intelligence” are attempts to clarify and organize this complex set of phenomena. Although considerable clarity has been achieved in some areas, no such conceptualization has yet answered all the important questions, and none commands universal assent. Indeed, when two dozen prominent theorists were recently asked to define intelligence, they gave two dozen, somewhat different, definitions. – American Psychological Association

Some think that this is an “I say tomato, you say tomahto” kind of discussion; for the purposes of this post you can choose either of them.

Scientists have demonstrated that intensive use of the internet does decrease our capacity to concentrate. Some people, in possession of that fact, concluded (wrongly, in my opinion and rationalization) that because we can’t concentrate, we are getting dumber… In my opinion, lack of concentration simply does not allow us to show how intelligent we really are; it’s like a dirty window: it’s hard to see through it, but the inside of the room is still there!

The ability to concentrate is very powerful, but it’s not everything. Have you ever concentrated so hard on something that you actually got stuck? And then stepped away, watched the birds while drinking some coffee, and returned to the problem with a fresh perspective and a rush of creativity?

Common sense states that if you know something, you don’t need Google for it (ok, common sense doesn’t mention Google), and that the more you know, the more intelligent you are.

At first glance you might think that common sense is right again, but let’s not confuse intelligence with the ability to hold information! Yes, intelligence is commonly accompanied by good memory, but not the other way around. Rain Man had an extraordinary memory but performed very poorly in all areas of intelligence! Einstein himself stated, “Never memorize something that you can look up.” Google does help us a lot on this point!

In the business world, intelligence is highly associated with the ability to solve problems. The internet and its vast amount of knowledge did not make those who can use it properly more intelligent! Notice that these people did not solve the problems themselves! Other people did! They were intelligent enough to think that maybe someone else had already faced the problem they’re trying to solve, solved it, and posted a How To about it! But that’s it, no real creation. R&D people only use Google to find out how far ahead they are compared to the rest of the world… they don’t expect to find a “how to solve my problem”, because if they did, what would be the point of reinventing the wheel?

I would like to conclude by quoting Voltaire:

Judge a man by his questions rather than by his answers. – Voltaire

Aggressive thinking

I base all the work I do on three quotes:

It’s really hard to design products by focus groups. A lot of times, people don’t know what they want until you show it to them. – Steve Jobs

If I’d asked my customers what they wanted, they’d have said a faster horse. – Henry Ford

If you talk to God, you’re religious. If God talks to you, you’re psychotic. – Gregory House

Whenever I bring them up, there’s always a *GASP!* and a little flame war starts. Whenever I meet a client, I assume that he doesn’t know what he wants… he has a clue about it, but that’s it, a clue. People who don’t know what they want are the most common type of people on earth (and you know you agree with me on this one), so just because these people became executives/consumers, they suddenly just “know”?!? Sorry, if God talked to you, you’re psychotic…

Agile people and sales guys (people who focus on sales and profiting… admit it, in the end it’s all you care about…) always try to hang me. On the other side of the fence, product guys (people who focus on making perfect products) give me a standing ovation. Steve Jobs knew that people very often don’t even know what they need. He knew that most people are unable to tell that what they are doing is “stupid” until someone else comes along and shows them a better way to do it, or shows that it doesn’t need to be done at all (sometimes this guy is called crazy, irrational, idiotic and is often burned at the stake). In simple words: if the iPhone had not been invented, then today, April 4, 2012, we would be using BlackBerrys with uncomfortable, tiny-lettered keyboards and so on… I even dare say that you would still be hanging out with one of these:

[Image: Philips MP3 CD player, 2007]

The world is not perfect… 90% of the time I can’t talk to the end users of my solutions… I develop customizations for portals and Project Management Solutions… I always get requirements from the client’s Project Manager or a coordinator or someone like that… In a good world, the client would call me and say something like “Hey Leo, come over here, I think my people can do better and I want to find out what we can improve”… Real consulting! In a perfect one, my boss would walk up to me and proactively say that we are going to build a revolutionary project management tool, not because someone said so, but because, being an ex-project manager, he noticed that the field is ruled by complex, ugly tools that are not user-friendly. Work, most of the time, comes to me as an order: “The client wants this, implement it.” Classic.

When I get a “develop me a W that does X and Y” request, I always think about what motivated the client to believe he needs W and to design, in the end, this “solution”. I know that understanding the disease is more important and effective than treating symptoms, and X and Y are usually a way to patch symptoms. Example: “Oh, my department is not producing as it should because people are not communicating well! I want you to develop me an online MSN-like conference/chat app so people can communicate better!” Maybe… just maybe… the reason people are not communicating is not that they can’t get in touch with each other… Maybe you hired lone wolves who can’t work in teams… Maybe they are producing as they should, just not as you expected…

To assume that your client is a very smart guy, that he knows what he’s talking about, that he has thought the scenario through, that he’s planning for the future, that he’s attacking the disease and that he’s using the right tools to do it, is a very risky assumption. Yes, you might be right… I’d rather assume the opposite and eventually be surprised by brilliance than by stupidity…