Most software development companies measure the productivity of teams and individuals. Those measurements are then used to rate individual or group performance. Numbers are so nice, cozy and familiar. They make things simpler; and if someone’s productivity can be objectively rated with numbers, lucky is this person and lucky are this person’s managers. The person is lucky because the clarity of numbers backs the clarity of expectations: if someone knows they may get a raise for hitting a certain number of whatever, that’s great. Managers are lucky because they are spared the need to figure out how the heck to rate people, so they can be given or refused a raise, a promotion, or a reward. However, in some cases mapping the actual value of an individual’s productivity and contribution to numbers might be challenging, if not altogether unattainable.
Let’s look into the reasons why individual productivity is measured by counting things. This habit can be traced back to material production, or to any activity that produces tangible things. If a farmer picks 100 vs. 50 cabbage heads per day (just an abstract example), this is surely good. One cannot let a cabbage that is ready to be harvested sit for too long out in the field; it may fall prey to pests, etc. With cabbages it surely makes sense to move fast, if we’re concerned with harvesting solely. By the same token, a baker who runs a bakery on a busy street is more productive if she bakes more croissants. The logic is flawless: more croissants, more customers served, more profit.
With this measurement model looking so clear and simple, it’s very tempting to copy-paste this “more is better” practice to knowledge work, the non-material production. Developer productivity used to be measured by lines of source code produced per certain amount of time. I wonder if someone still uses this metric. One smart person had something to say about it:
Measuring programming progress by lines of code is like measuring aircraft building progress by weight.
Other equally poor attempts to measure productivity include the count of bugs discovered by a QA specialist (what if this person tests the heck out of a feature, making sure it’s clean, and finds no bugs?), the count of words in a written piece, or the count of graphic icons designed per day. These are abstract examples, and, thank God, it looks like most software development companies have moved away from such naive metrics. The “less is more” adage is grasped better now, when we seem to live in the age of super-abundance of everything (which doesn’t save us from the chronic shortage of value).
That’s the word. Value. How much shippable, valuable, finished work has this person done? Working many hours is far from equal to super productivity and, after a certain point, indicates inefficiency. What I call “productive” is when one uses time in the office wisely, rather than works around the clock. Then, what contribution is this person making to the group? What does he or she do to improve the workflow, or to keep the integrity of the team? Naturally, being a group contributor means that this person is carving those contributions out of their individual output. What if this person contributes at a larger scope, beyond their core skill? Then, how do we factor in the subtracted individual performance when measuring productivity?
With these intricate nuances, I wonder if anyone is ever able to quantify them and use the result as a numerical measure of productivity. Surely, the kingdom of tests and grades keeps its doors always open, as it attends to the needs of busy managers looking for fast and clear ways to rate a person’s performance. But, as is often the case, the flip side of fast is slow. Individuals concerned with the team’s success are the keepers, and if a numerical grade fails to capture the value of such a person correctly, they might be demotivated. We all are human, and managers are human as well (in case someone ever doubted that). They want to rate the performance of teams and individuals faster, especially if a company is large. Better safe than sorry: stakeholders had better make sure they can trust their scoring methods. Otherwise, it would make more sense to stick to the old-school ways: observe people and what they do, and see if it brings value to the company. We know that it sometimes takes years for judges to be ready with their rulings. It might take what appears to be an eternity for a snail to figure out what’s inside this bubble. A rainbow or gas stains? But the time spent on deciding is well worth it.
Image credits: Vyacheslav Mischenko
Today is a great day to share some killer tips on how to get the best out of one’s creative potential. These tips would be of special help to digital creatives, that is, to anyone who thinks for a living as they look at a screen. So, whether you are a graphic artist struggling for that elusive touch that would make a corporate identity unique, a UX designer who wants to put together an intuitive interface, a product owner looking to figure out what goes next in a product, a project manager looking to facilitate a team’s performance, or a software developer crafting a piece of code, look no further. This article is your philosopher’s stone for achieving top results.
So, friends, lend me your ears. To turn on this magical power of brilliant insights, one just needs to do these three simple things day in, day out.
1. Wherever possible, spend the bulk of your most productive time, preferably in the morning, when your brain is fresh, doing online research on how others have done this thing that you’re working on at the moment. If you’re a graphic artist, make sure you not only dig out all possible images or ideas that can be replicated, but also remember to throw all those links with images at the other fellow designers in your team. Not only will this help ~~strangle their creative edge~~ ensure that all the industry-accepted standards are followed, but they won’t need to spend any more effort on inventing original concepts. Leave no stone unturned. You need to chase each and every clue. For strategic decisions, make a list of step-by-step routines copied from how others addressed the same challenge. You will never do anything valuable if you fail to follow the proven routines that other people have followed many times before.
2. The second magic success ingredient is to expose the drafts of your work, or your rationale for strategic decisions, to be ~~bullied~~ discussed by as many people as one can possibly get. Facebook is an ideal space for that. Remember to be consistent in sharing the in-progress sketches or ideas with strangers who don’t know you personally and who are completely unaware of the particular context you’re working in. They’d shoot their comments, ~~wasting your time~~ making their invaluable contribution to shaping up this great idea, or graphic, or piece of code you’re currently working on. Consistency is the key here. The more exhausted you get filtering out the rare grains of sensibility from the avalanche of clueless comments, the closer you are to what you’re looking for. The logic here is the same as in the picture below: one is more likely to build a snowman with plenty of snow, picking out the usable pieces with care.
3. Finally, here comes the trickiest part. Once you’ve let your finished and polished brainchild out into the world, work to secure the right attitude to external criticism within yourself. You absolutely need to master the skill of proving your worth based on each and every comment received from your network of personal and professional contacts. The smartest way to accomplish this would be to build a model that transforms the bites of criticism into a numeric value. You’d need to set a certain plank, or bar, for yourself with this model. Once this value drops below the plank, you need to work harder on steps 1 and 2.
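Half-jokingly, such a “criticism-to-number” model could be sketched in a few lines of Python. Everything here is invented for illustration only: the sentiment word lists, the scoring scheme, and the self-imposed “plank”.

```python
# A tongue-in-cheek "approval score": turn a pile of comments into one number.
# Word lists, scoring, and threshold are all made up for the sake of the joke.
POSITIVE = {"great", "love", "awesome", "interesting"}
NEGATIVE = {"bad", "boring", "wrong", "ugly"}

def approval_score(comments):
    """Average sentiment across comments: positive words add, negative subtract."""
    if not comments:
        return 0.0
    total = 0
    for comment in comments:
        words = comment.lower().split()
        total += sum(w in POSITIVE for w in words)
        total -= sum(w in NEGATIVE for w in words)
    return total / len(comments)

PLANK = 0.5  # the self-imposed bar from step 3

comments = ["Great idea, love it", "This is bad and boring", "Interesting approach"]
score = approval_score(comments)
print(round(score, 2))   # 0.33
print(score >= PLANK)    # False: back to steps 1 and 2!
```

Needless to say, reducing human feedback to a single number is exactly the kind of “success recipe” this post is poking fun at.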
Repeat this cycle forever, and you will sleep serenely, like a baby, enjoying the bliss of reaping harvest from all your hard work.
Work in software development is made up of two essential parts: planning and doing. This holds true for any methodology, be it agile, lean, or any other variety thereof. The theory of project management has it that a project can accommodate 10-20% of time spent on planning, tracking, adjusting trajectory, wiping the binoculars, etc. At least, we used this approximation when I worked in an outsourcing company many years ago, and it varied depending on the challenges a project was facing.
With projects, this world has only two options for anything other than doing:
Option #1: the bulky “talk-plan-remake-do it again” cycle is a given.
Option #2: over-indulgence in all things talking and planning is a waste.
The hardest job is to distinguish whether a project falls under option #1 or option #2. If option #2 is disguised as a given, developers and other performers will feel troubled about whatever they’re working on. When someone has a job to do, all they need is to be left tete-a-tete with their work, with as few contacts with the outer world as possible. If a stretch of work (be it during the day, or over longer periods of time) is interrupted by previously overlooked hurdles, the drive to ship is replaced with something like this.
The unwelcome “talk-plan-remake-do it again” cycle usually happens when planners are not directly involved in the ship-and-deliver activities and lack the foresight to trace hurdles in advance. For the actual performers, being subjected to all kinds of catch-up moves feels like a huge annoyance. Performing is a raw and healthy thing. Some software developers naturally feel more inclined to performance-based activities, and their philosophy is: “I care only for what I have to ship, and I don’t want to mess with the waste of talking.” If we draw a parallel with performance on stage, the feeling after having shipped a piece of work would be akin to this: I’ve worked my a.. off, I’m sweating and exhausted, but I’m happy because I performed and delivered a show that people liked!
There’s nothing as fulfilling as the happy experience of a well-accomplished performance, and performers find delight as they ship something on any given day. All the tiny dwarfs in the world give a round of applause at that. The deliverable can be a piece of code, or those few milliseconds of faster load times, or a finished design element, or an HTML-coded web page. If it feels like too much unwanted planning and discussing is undermining this desire to perform, your team is in trouble. This feeling signals that the whole act of planning is misinterpreted, seen as a thing-in-itself that lives in an isolated reality, unrelated to a shippable outcome. Besides, it’s proof that the keepers of the “talk-plan-remake-do it again” routines lack the competence to nail down optimal solutions at the right time. If that’s what’s happening in your organization, remember that talkers and planners are not the rainmakers. The performer is the rainmaker, and planners would be better off if they let performers perform, or better yet, got to performing and doing rather than talking and planning. The catch here is that with time some sort of organizational blindness might develop, as talkers and planners manage to come across as rainmakers. Beware of that optical illusion.
… and why it implies a lot more than what we see on the surface.
I guess it wouldn’t be an exaggeration to say that software development is the most knowledge-intensive field of production there is. No other industry relies so much on the balanced influences of society and technology. The ever-changing ecosystem of software development thrives on knowledge… or is it wisdom… or information… or Big Data? There might be some confusion here, because “Big Data” is a far trendier term these days than “knowledge” or “information”, not to mention “wisdom”. With so much emphasis on data, the other three options look greyed out. Little is said about why organizations might or might not need it, and where this whole movement is rooted. Curiosity may have killed the cat, but it can for sure save many humans (and bucks, too), so I took a deeper look into all things Big Data and came to some interesting conclusions. It makes sense to note the points of departure and destination before carelessly catching this train.
Previously I explored how human needs and reactions triggered the rise of agile and then Kanban. The Big Data trend is tied to human-related incentives as well: to the shortage of certain skills required to fulfill certain needs, to be exact. Let’s look into what those needs and skills are. Fast-rewind to the early ’80s, or even the ’70s. Not to go into too much detail, the trends that started prevailing in education around that time were shaped by the philosophy of pragmatism, and still are. I’m talking mostly about the United States. The summary of this philosophy is: business only. When young people make their educational choices based on this thinking, they want to pick a profession that will pay off their college loans fastest. Computer science and finance seem best suited for that. This narrow specialization looks like a reasonable way to start earning money fast. This “business only” trend in education delusively looked (and still looks) like an excellent option. Why would someone need more skills and more knowledge than exactly what’s required for doing their job with computers or finances? That’s where the trap is. With several generations of engineers and tech business executives who majored in their narrow specializations, the community of tech professionals has encased itself in one-sided thinking and experience. As these people live in their tech-only universe and at some point feel the lack of skills required for wise decision-making and business leadership, they have only their limited domain to look to for clues, and hence resort to the Big Data or artificial intelligence panacea. Obviously, IT businesses are in permanent shortage of efficient leaders who can steer the wheel sensitively and wisely. The current setup of mass education, however, be it high school, college, or university, is not suited for raising such individuals.
The point is that a proficient tech leader has to be knowledgeable in the humanities, in addition to computer science. Someone with a technical background will probably want scientific proof of that argument. I have that proof, and I hope to share it in an article one day, although it might be quite hard to prove technically. A technical background endows people with the ability to keep mental focus, no doubt. But the ability to focus alone is not enough. What if the focus is on the wrong target? From what I’ve observed, techies mostly fail at seeing how nuances and small things can make a big impact. Tech business leaders have to wear many hats, see the big picture, stick to common sense and mix it with foresight and intuition. That’s what the humanities bring to the table. People of such mixed breed have become a scarce treasure these days.
Back to Big Data. This trend is there for a reason. Technical folks genuinely want to squeeze all they can out of their limited technical universe. That’s about the same as Newtonian physics vs. quantum physics. It’s neither good nor bad, just too narrow, and techies are taking the course of action they believe to be best. There are exceptions, of course, but exceptions only confirm the rule. This guy delivered a presentation called The Ephemeral Role of Data in Decision-Making. I’m sure he has some sort of humanities background behind him, if not through formal schooling then in some other way. Big Data is touted with many trumpets, and in some organizations it would be professional suicide to stand up and question the common-sense side of this “golden” rush. Techies eagerly measure all they can, assuming that the count of molecules in water will dictate the shape of ocean waves, figuratively speaking. By the way, the original concept of Big Data stood rather far from where it is now: in the ’80s and early ’90s it was about the physical space needed to store all the data. With storage costs declining, the Big Data trend found an outlet in another barn and took its current shape. The overvalued significance of Big Data reminds me of the days when lavish praises were sung to Facebook, until with time it became clear that it is harmful to people’s health.
Summing up, the Big Data trend and its misuse are rooted in a short-sighted approach to education. We are facing the consequences. Too few technical leaders can rely on their own guts and a broad background in making smart business choices. As if enchanted, they see nothing except the very questionable maxim that past trends will predict future trends and hence provide some safe ground for their decisions. Hell, no. But someone will reap the produce, as there’s a whole tribe of consultants and companies waving the banner that reads “leverage your Big Data”.
As you read this article, you’ve probably noticed that my conclusions come from various sources: trends in education, philosophy, the social sciences. Thinking is the hardest job. Joining the flock is much easier. A mule tempted by a carrot will sheepishly follow the carrot anywhere without thinking. It takes more than being a mule to stop, look around, contemplate the hidden driving forces of things happening around us, and make a sober, independent decision as to what your organization needs. Big Data has to be approached with reason and caution. There’s no point in collecting data just for the sake of measuring. There are many more interesting things to do in the world than that. Speaking of interesting things: humanities combined with computer science education is what we need to grow smart tech leaders. Yes, there are costs involved. But where will the cost be higher, and for whom: letting it all go on with the same impaired narrow education, or finding a way to nurture thinking individuals who are indispensable in any position of governance, not only in IT?
A couple of months ago I started a series of posts about communication (see Non-Violent Communication for Agile Teams). The concept of non-violent communication was introduced and championed by Marshall Rosenberg in his notable book. As I received feedback from readers on that post, some reactions could be summarized as follows: “What are you talking about? Which violence? Do you think we behave outright violently when we communicate at work?” I pondered that, and came up with a somewhat reframed concept. While Marshall Rosenberg dealt with people from various spheres of life, e.g. prison inmates, domestic violence, and other cases of outright aggressive behavior, we have quite a different situation as IT professionals. We cannot be violent per se; this is an office, this is work, and yelling at someone or using physical force (which would be perceived as the ultimate violence) is out of the question, of course.
So, I’d like to give a slightly different highlight to the subject of communication in agile teams, calling it “non-judgmental communication”. If we think about it, in its verbal form, in a well-behaved social environment, what would the utter form of this “violence” be? To me, it is accusation. For example, when a person in a team publicly blames another person for a failed release, saying: “It’s all your fault. You’ve missed this thing in the code. You overlooked this bug. You’re the one to blame for this late release.” It’s an utterly simplistic example, just for the sake of illustration. From what I see, the culture in most software development teams does not allow people to be that bluntly accusing. We are all human, and we know that people must have their reasons for the delays, or the problems with code, etc.
The next gradation of violence in agile teams would be judgment. Someone might ask: why on Earth can judgment be a form of violence? Let’s look deeper into that. The most common example is calling a thing someone else has done “good” or “bad”, especially when this judgment comes from someone in a position of formal or informal authority. Like: your code is “good”, your design is “bad”, your article is “good”. Think about it: what value would such an evaluative adjective bring to what the team is trying to do? It’s only a judgment, and it suggests no way out. Doomed. However, what are we looking for when we work in a team? We look for ways to improve. To keep our colleagues encouraged to perform at their top ability. To me, the “good-bad” judgment ultimately kills this spirit of friendly feedback and improvement. Okay, you say this design is bad. But have you noticed that this person is truly searching for a sweet spot? That this person genuinely cares? That he or she wants to come up with a workable solution? Why not reach out and help? Same with written pieces and presentations. The best thing one can do is express their perception as feedback. So, instead of saying “this presentation was the best”, we need to learn to say “this presentation was of most interest to me; I find it well prepared”. If we want a more fitting word than “good” to express our attitude, I have this friendly word ready: interesting.
Now, someone might ask why it can be so upsetting to other people when someone else gets the “this one is the best” score. First of all, everyone invests their best effort in what they do. A judgment given to only one out of many might be especially painful to the others if they feel underappreciated for their input. They then project their feeling of being underappreciated onto this “good” praise that goes to another bird in the common flock. Someone might call this nonsense, but it’s as serious as it can get. It’s all in the culture. If the feeling of underappreciation is mixed with the judgmental “good” that goes to another peer, this is a flammable mixture. It’s a far graver cultural flaw than one might see on the surface.
This whole subject of judgments can acquire another perspective. Some of you might have heard of the Dunning-Kruger effect. In short, this is when the incompetent rate their skills high simply for their inability to recognize their mistakes, and the competent are too harsh on themselves. For agile teams, a consequence might be that competent professionals who suffer from this bias get unenthusiastic about their abilities altogether, which in the end means the whole team loses out, as it can’t capitalize on the skills of the competent ones. On the other hand, we tend to be very condescending to the “incompetent”. If someone does an utterly improper thing, and then goes on about how great he is, we somehow rather laugh at this person, whereas a harsh judgment could be the proper response in this particular case. It’s a very fine line, and certain psychological skills are required to keep it. Remember the Emperor’s New Clothes tale? No one dared to say that the emperor simply had no clothes on, until a young boy voiced the truth. Actually, there’s another human reason for this loose attitude. Subconsciously, we might feel that we are superior to the incompetent ones; that’s why we “let them live”, just to make fun of them, keeping up their heightened false beliefs about their abilities. In such cases a shock treatment might be needed, and something like a judgment really needs to be expressed. Like: dude, can’t you see that what you say is absolutely clueless? This would be like a sobering shower for this incompetent person, and it would finally help them form a realistic understanding of their abilities, and improve, if not in the area where they’re clueless, then in something else.
The benefits of non-judgmental communication are enormous. Organizational culture does not emerge overnight; it builds up with time. The first step one might want to take when looking to introduce a non-judgmental culture could be this: watch yourself, and notice whether you use judgment when giving feedback on someone’s performance. Note if you’re inclined to say “good” or “bad”, and consciously replace it with “interesting”, or “smart thinking behind this code”, or “I can see that you’ve put much effort into this design”. If you need this person to improve their work (that is, to make the design more compliant with the objectives, etc.), give them some advice instead. Creative people are usually very sensitive. They need to be treated like fragile antique vases. *almost no kidding* To get the best of their abilities, one has to learn to communicate with such people. Acknowledge their input. These are all very subtle things. But the devil is in the details, and software development is more about people than anything else, and I humbly hope that my posts contribute to this shared awareness.
I’ve been contemplating recently how the agile movement rose from software developers who felt that the challenges of their work were not addressed by the waterfall approaches of industrial production, and I shared my thoughts in the Back to the Future of Agile Development essay. Then, as another layer of this thinking paradigm, I saw that Kanban as a method in agile software development stems from the human need to get rid of deadlines (check out the article Kanban as Multiban?). Now, there’s another concept in project management that falls into the same pattern. I’m talking about what is known as “project portfolio management” in the enterprise lingo, and what we call “multi-project prioritization” in smaller agile companies. So, this time I want to look deeper into prioritization, and give one more example of how a promising new trend stems from nowhere else than human nature.
On to project portfolio management. Just as waterfall was copy-pasted from industrial production and turned into the Rational Unified Process for the sake of software development, in the same fashion project portfolio management has its roots in the financial investment industry. As always, there’s a human need behind it. Someone must have been tired of looking for ways to manage risks in software development projects, and resorted to what seemed the closest available counterpart: financial investment. But if we look deeper, is there any difference between financial investment and following many projects through to completion in an organization? For sure, yes. First of all, projects are not shares or stocks. In investment management one is looking to lower the risks, to compile the investment portfolio in such a way as to mitigate them and optimize the financial gain. Only that and nothing else. It’s not as if this person is concerned with being strategically involved with the public company whose stocks or shares they’re buying.
It’s far more complicated when we deal with projects, and especially with software development projects. One key difference is that projects are meant to be followed through to completion. Of course, a lot depends on the organization. I figure that IT and software development projects could be shuffled like investment stocks (the closest match) in budgeted research fields, like military or government projects. Most IT companies, though, have nothing to do with picking up or giving up on projects. While in the finance industry the only value indicator is financial gain, there are many more value indicators in software development. Usually, the question is not whether the project should be picked up or given up. The questions are:
1. Are we fitting in the budget? Do we need to secure a leeway to complete our projects?
2. Are we slipping in terms of time? Do we need to sacrifice some parts of the project, and skip them, in order to ship some workable software in time?
3. How about people? Are they all balanced well in the projects? Have we made sure that the team’s collective energy is sent in the right direction?
4. This one is the closest to where portfolio management might come in. Let’s say you work in a large organization and, as an IT director, have many projects to supervise and report on to someone standing higher. Or, in a smaller product dev company, one might have this multitude not of projects per se, but of what we call “product features”. Ironically, for a smaller organization, this would be the closest approximation to portfolio management. We have to prioritize and decide whether we skip this feature or follow through.
On all of those four levels, it’s about prioritization. That’s what it’s all about. Prioritization is the toughest job in the world, be it in personal life or at work. Sacrificing is the most daunting challenge, and it imposes a huge load on the person or group of people who are supposed to decide and prioritize. By now, the buzz in the industry says that the concept of “project portfolio management” has something missing in it. The tools for multi-project prioritization are not universal, and they don’t do the magic for this tired human being. Either the tools have to be customized (at big cost), or they miss some instrument that is crucial for this particular organization. In a nutshell, the project portfolio management concept has outlived itself for effective prioritization, just as RUP outlived itself previously and was replaced by agile as a methodology in software development. But still, as a product owner or a project manager you need to have a bird’s-eye view of all the risks. Still, you want to get this burden off your shoulders, and finally get a tool to do the bulk of the prioritization for you, as this is the hardest job in the world. What usually happens when some methodology is not working out as expected for those human beings? Right. They’re on the lookout for new, better and, most importantly, easier ways to prioritize.
Voila: enter Big Data. There’s much talk going on about it now. This is the next big thing, as it is supposed to make a productivity breakthrough in prioritization and decision-making. If we draw a parallel with previous occurrences of groundbreaking phenomena (the way agile appeared in software development, how people resorted to Kanban within the agile paradigm, or how they looked to use investment portfolio methods for project management), prioritization seems to be the hardest job that IT professionals intrinsically want to make easier for themselves.
Big Data is a trend that can be briefly described as follows: huge volumes of data about past performance and work are stored, and can then be retrieved to see how past trends might recur in current ones, thus helping to decide and prioritize. Considering the meta-law of things developing in cycles, this might work, to some extent. There is a certain probability that past trends would help one prioritize efficiently in the present. There’s financial software that calculates those trends. But, from what I’ve been able to see, the big fish and the big gains in stocks usually come at random. It all boils down to intuitive feeling, something outside data and calculations. This is not to diminish the importance of data analysis. Any data is a huge asset, and even more so as we live in the age of information; we need to learn to get the best out of those assets. But just as the term “project portfolio” carries a trace of hope, instilled by copying the practice from financial investment to software development, that it would set us free from prioritization problems, there’s something to watch out for with the Big Data trend. Yes, as Homo sapiens we always strive to take burdens off our shoulders; that’s why we started with sticks as tools, then shovels, and so on. Same with heavy-duty prioritization and Big Data. To a certain extent, we can be sure that it will help us move forward, and give us more sophisticated tooling for effective prioritization. But ultimately, there’s something even beyond Big Data. Some other infotech miracle. All in all, it’s not that we should lose hope in freeing ourselves from prioritization work altogether, but I don’t think even Big Data will become the silver bullet for IT professionals in prioritizing their project choices. We will have to carry personal responsibility all the same.
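Stripped of the glamour, the “past trends recur in current ones” idea boils down to something as humble as this sketch: a naive moving-average forecast. The throughput numbers are invented; real pipelines are bigger, but the underlying bet on the past is the same.

```python
# Naive "the past predicts the future" forecasting: a simple moving average.
# Sample data is invented for illustration.

def moving_average_forecast(history, window=3):
    """Predict the next value as the mean of the last `window` observations."""
    if len(history) < window:
        raise ValueError("not enough history to forecast")
    recent = history[-window:]
    return sum(recent) / window

# Say, story points delivered per sprint over the last six sprints:
throughput = [21, 25, 19, 24, 26, 23]
prediction = moving_average_forecast(throughput)
print(round(prediction, 2))  # 24.33, until the trend breaks
```

The sketch works exactly as long as the cycle holds, which is the author’s point: the model can only ever replay the past, not sense the random big-fish events.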
But this would definitely be a step ahead in the evolution of methods and tools for effective prioritization across many projects and initiatives.
Back to the Future of Agile Software Development
Kanban as Multiban?
Software penetrates every pore of human existence. We look up the weather on the web, giving up on outdoor thermometers. We drive to destinations with a GPS navigator (forget paper maps with their G7 sections on page 59). We turn on RunKeeper when riding a bike to calculate our average speed, and then boast about the run on Twitter. We use software every single day of our lives. It seems we hug our dear gadgets a lot more than our loved ones.
No one knows the exact how-to of writing great software fast; that's the problem. Waterfall passed away at the turn of the century, while the new software development methodologies (agile) have so far failed to solve the fundamental problems. We live in very interesting times. The software development industry is growing fast right here, right now, and the foundation for a quantum leap is building up.
1. You don’t waste time on estimation
Estimation takes time. Even if you do planning poker and use story points, it still takes time. What do you do to improve estimation accuracy? You gather some data, analyze it, and discuss the results. You spend time on all of that. But are you sure that estimates really provide any value? Most likely not. It is waste. You'd better spend this time doing something important for the product.
2. You don't have to explain to upper managers why it took soooo loooooooong
If you don’t have estimates, you can speak without fear and explain things clearly. You can enumerate problems and explain how they were resolved. You can show that you work really hard and take all the necessary actions to release this story as soon as possible with great quality.
If you have estimates, be ready to hear something like "you screwed up, maaan! You had to deliver this feature by the end of the month! What the f%$k is going on?" The team is immediately put in a weak position and has to care more about deadlines than about quality or a better solution. Is that something you really need?
3. You don’t give promises that are hard to keep
You stop relying on people's built-in optimism. People (almost all of them) are optimists and inevitably give optimistic estimates. You can use complex formulas or very simple rules to get better estimates, but is it really worth it? You can spend many hours defining the correct formula for this particular development team. People will not trust you, since you (rightfully) don't believe their estimates. The numbers will look made up, and you will still end up with incorrect estimates.
4. You don’t put additional pressure on development team
In some teams, Estimate equals Deadline. That's bad. What do you do to meet the estimate? Compromise on quality? Tolerate an average architectural solution? Abandon polishing? All of that leads to poor solutions that end users will not like. It is (almost always) better to spend more time than planned and release something really usable and useful.
5. You focus on really important things
Imagine you have a planning poker session and estimate all the stories inside an epic. You sum up all the stories and arrive at a 1000-point epic. You've just stuck a "fear" label on the epic. People do fear large problems. A 1000-point problem is huge, and psychologically it is harder to decide "let's start it right now". The Product Owner will be tempted to implement 100 smaller user stories instead of this huge epic. However, that may be a bad decision. This epic may be really important, the most important thing for the product.
If you don’t estimate it and are not completely sure about its size, you have a better chance to actually start doing it.
Today I've read two interesting posts: The Cost of Code by @unclebobmartin and Code as a Cause of Project Failure by @DocOnDev. They discuss various arguments to prove that all projects fail because of code. The main argument is that if code were free and lightning-fast to change, a project couldn't fail. OK. But this case is extreme and obviously impossible. We don't live in a world with hyperspace jumps, teleportation, and free medicine (unfortunately). In real life code costs a lot, and that will surely remain true for decades to come, so this argumentation proves nothing. In an ideal world there is no such thing as code. In an ideal world you have a solution in seconds, without computers, software, and other complicated things. So I don't buy this idea. In reality, code is not the main reason projects fail.
Code is not free. Code is expensive. We do not sell source code, though; we sell solutions. If it were possible to create a solution without source code, that would be fantastic. Let's take an industry that handles tangible objects. The automobile industry does not sell carbon and metal, it sells cars. It sells a solution to the transportation problem. Teleportation would be an ideal solution, but sadly we can't teleport anything but electrons. We buy cars to get from point A to point B. We buy a solution.
Code != Solution
I think the main problem with projects is that they provide either a bad solution or no solution at all. Nobody buys stagecoaches these days, since there are more efficient solutions. If a project does not solve anything, it will fail. If a project solves something, but in a nasty, unusable way, it will fail. You can create an absolutely beautiful architecture with the cleanest code in the world. You may have 100% test coverage, complete separation of concerns, flat hierarchies, and methods without boolean arguments. You may have all that beauty and still fail miserably if the program does not solve users' problems efficiently.
You may argue that with clean code you can refactor it fast and change everything. C’mon, it will be a full re-write if it solves the wrong problem. Can you fix a stagecoach to make it a true competitor of a car?
On the other hand, if a project solves the right problem, but with some issues, clean code is very important. You can't adapt fast without it. You can't change things and react to people's demands.
Don’t get me wrong, I believe that clean code is an important thing, but it is not the most important asset in software development.
This week's hot topic is the Scrum Alliance. Its dysfunctions were revealed by Tobias Mayer in an interview and in the blog post State of Agile.
Scrum should not be codified in any way: there is no authoritative Scrum, there is just what we do. Any attempt to nail Scrum down to one definition will be a precursor to its death. The Scrum Guide comes very close to taking the life out of Scrum. The Scrum Alliance-threatened Scrum BOK will kill Scrum, for sure.
I personally see the danger as well. While I understand the desire to protect Scrum from a blurred focus, it is also obvious that Scrum should evolve and improve in a very flexible way. A BOK is too heavy and slow.
I spent several hours this week improving my hiring skills: an interesting article and an even more interesting discussion. I especially liked two questions and will most likely include them in future interviews:
Give me a normalized database structure that you'd implement if you were to build gmail: incorporate conversations, messages, multiple message participants and labels.
Then, depending on the candidate, I build upon the question and go into the various optimizations possible: how caching would be implemented, how sharding/splitting/de-normalization would be done under load, etc. With good candidates, it's always a very interesting discussion.
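For the curious, here is one possible shape of an answer, a minimal sketch in SQLite via Python. All table and column names are my own assumptions, not part of the original question; the point is the normalization: messages belong to conversations, participants and labels are many-to-many link tables.

```python
import sqlite3

# Hypothetical normalized schema for a Gmail-like system (names are illustrative).
SCHEMA = """
CREATE TABLE users (
    id    INTEGER PRIMARY KEY,
    email TEXT NOT NULL UNIQUE
);
CREATE TABLE conversations (
    id      INTEGER PRIMARY KEY,
    subject TEXT
);
CREATE TABLE messages (
    id              INTEGER PRIMARY KEY,
    conversation_id INTEGER NOT NULL REFERENCES conversations(id),
    sender_id       INTEGER NOT NULL REFERENCES users(id),
    body            TEXT,
    sent_at         TEXT
);
-- many-to-many: a message can have several recipients, each with a role
CREATE TABLE message_participants (
    message_id INTEGER NOT NULL REFERENCES messages(id),
    user_id    INTEGER NOT NULL REFERENCES users(id),
    role       TEXT CHECK (role IN ('to', 'cc', 'bcc')),
    PRIMARY KEY (message_id, user_id, role)
);
-- labels are per user, applied to conversations
CREATE TABLE labels (
    id       INTEGER PRIMARY KEY,
    owner_id INTEGER NOT NULL REFERENCES users(id),
    name     TEXT NOT NULL,
    UNIQUE (owner_id, name)
);
CREATE TABLE conversation_labels (
    conversation_id INTEGER NOT NULL REFERENCES conversations(id),
    label_id        INTEGER NOT NULL REFERENCES labels(id),
    PRIMARY KEY (conversation_id, label_id)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(SCHEMA)
```

From a base like this, the follow-up discussion naturally goes where the interviewer suggests: which tables get denormalized first under load, and what keys you'd shard on.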
This one is a true gem:
My favorite: “Write a script that will save you one minute of time every day”
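As one illustration of what a candidate might answer, here is a small Python sketch that generates today's standup note from a template instead of retyping the boilerplate every morning. The file layout and headings are entirely made up for the example.

```python
from datetime import date
from pathlib import Path

# Hypothetical note template; headings are illustrative, adjust to taste.
TEMPLATE = "# Standup {day}\n\n## Yesterday\n- \n\n## Today\n- \n\n## Blockers\n- \n"

def create_daily_note(notes_dir: Path, today: date) -> Path:
    """Create today's standup note if it doesn't exist yet; return its path."""
    notes_dir.mkdir(parents=True, exist_ok=True)
    note = notes_dir / f"standup-{today.isoformat()}.md"
    if not note.exists():
        note.write_text(TEMPLATE.format(day=today.isoformat()))
    return note
```

Wired to a shell alias that opens the returned path in an editor, something like this pays back its minute a day, which is exactly the spirit of the question.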