Three years ago I wrote an article describing the changes in our Agile software development processes from 2008 to 2012. Three more years have passed, and our processes have kept evolving. Here I want to share 90 months of changes in our product development practices, company culture, structure and engineering practices. I hope you find it interesting and can learn from our mistakes.
We are still working on the Agile project management software Targetprocess. While we started three small side products last year, 90% of the company is focused on Targetprocess.
| | 2008 | 2009 | 2011 | 2012 | 2013 | 2014 | 2015 |
|---|---|---|---|---|---|---|---|
| Company size, people | 15 | 22 | 30 | 48 | 60 | 72 | 80 |
| Company structure | One cross-functional development team | Two development teams: Core (5 Developers, Scrum Master, 3 Testers) Integration (3 Developers, Team Lead, Tester) | Several Mini-teams. We form a mini-team for every Epic. It is fully responsible for Epic implementation. | Several Mini-teams (quite clear backend/frontend separation) | 4 larger cross-functional feature teams | 5 cross-functional teams | 5 cross-functional teams, 2 of them fully cross-functional |
| New Technologies | C#, ASP.NET, ASP.NET Ajax, NHibernate | ExtJS | LINQ (own powerful data framework), jQuery, NServiceBus | Single-Page JavaScript, REST, Python | SignalR, Rx | ReactJS, CasperJS BuildBoard: Scala, Angular.js | Redis, webpack, ElasticSearch, Kibana |
Team Structure
There is a clear pattern of more and more cross-functional teams. We started with just developers and testers in 2009, added feature owners in 2011, added designers in 2014, and now we have even added product specialists to development teams to close the gap between customers and developers. Maybe in the future these cross-functional teams will transform into independent business units.

Emergency Team
This is a special team we formed in 2013 to fix bugs and address minor improvements. Initially it consisted of newcomers. The idea was that newcomers would spend 2-3 months in the emergency team, learn the codebase by fixing bugs and then join a development team. It didn't work well. A lack of experienced team members with good knowledge of the codebase actually impedes learning.
In April 2014 we decided to disband the emergency team and instead rotate this role between all the other development teams. In August 2014 we paused the rotation to focus on new functionality. As a result, it became much harder to get small fixes and improvements into the roadmap. In a retrospective we admitted that this role is required; I don't think our company can go without it for more than 6-8 months before overall product quality degrades.
Technology
On the UI side we have completely abandoned ASP.NET and transformed Targetprocess into a single-page application. The new UI is extremely interactive and novel, but a bit more complex than we desired. We struggled with the UI architecture for a while, but now we have a decent component-based architecture with React.js on top.
We have a somewhat pluggable architecture that allows the customer to extend the product according to their project management needs and business domain, with plugins on the server-side and mashups on the client-side.
We feel that the server-side decisions we made 8 years ago are outdated now, so we have to change significant parts of the product core to utilize new approaches and meet new market requirements.

Process
It is hard to fit all the data into a single table, so I have split the practices we use into several areas.
Planning practices
This table shows the evolution of practices related to planning.
Orange background - practice change caused significant problems.
Green background - practice change led to nice improvements.
| | 2008 | 2009 | 2011 | 2012 | 2013 | 2014 | 2015 |
|---|---|---|---|---|---|---|---|
| Iterations | weekly | none, we switched to Kanban | some development teams tried to use iterations again | back to Kanban | |||
| Release planning | Formal release planning meeting. Release has ~2 months duration | We do not plan releases | We create a long-term roadmap that shows the future product vision on a very high level (Epics). The roadmap is updated every 3 months. | ||||
| Features prioritization | ad-hoc, by Product Owner | ad-hoc, by Product Board | by Product Board using a formal ranking model | ||||
| User Stories estimation | Planning poker. Estimate user stories in points | don't estimate | Things got worse | quickly estimate in points without a formal approach | |||
| User Stories split | User Story should fit 1 week iteration. | We split stories, but sometimes not as aggressively as required. | It is still our weak point. Sometimes user stories are too big. | Some improvements reached, but it's still a very problematic practice. | Still hard | ||
Iterations
We switched to Kanban 6 years ago and in general haven't missed iterations. It is interesting to note that after several years without iterations some teams decided to try them again. The argument was: "Well, maybe we can increase our development speed with iterations, since they will act as a pacemaker and set the tempo." However, iterations didn't stick and the teams went back to Kanban.
Features prioritization
For years it was the responsibility of a single person to create the product backlog and prioritize high-level features. In 2013 we decided to form a cross-functional committee (we call it the Product Board) to replace the Product Owner. It consists of several people representing almost all roles: support, design, sales, marketing, development and QA. This solves many problems: more people in the company understand how product decisions are made, what the market situation is, and what problems customers face. We removed this single point of responsibility, and now we feel more confident about mid-term planning (1 year). However, there is one thing the board can't replace so far: the product vision. It appears that the Product Board handles tactics nicely, but innovation is still out of its scope.
Last year we introduced a formal model to prioritize features. We estimate several parameters and calculate a score for each feature. Interestingly, the model actually works and we rely on it. We certainly don't take features blindly one by one from the top of the list, but at least we have a clear separation between the top 20 problems we should attack soon and all the other problems that can be postponed.
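The details of our ranking model are internal, but a weighted-score approach like the one described can be sketched as follows. Note that the parameter names, weights and feature names here are purely illustrative, not our real model:

```python
# Illustrative weighted scoring model for feature prioritization.
# The parameters, weights and features below are hypothetical examples.

WEIGHTS = {
    "customer_requests": 3,   # how often customers ask for it
    "strategic_fit": 2,       # alignment with the long-term roadmap
    "effort": -2,             # larger features score lower
}

def feature_score(params):
    """Weighted sum of estimated parameters (each rated 1-5)."""
    return sum(WEIGHTS[name] * value for name, value in params.items())

features = {
    "Custom reports": {"customer_requests": 5, "strategic_fit": 4, "effort": 3},
    "Dark theme":     {"customer_requests": 2, "strategic_fit": 1, "effort": 2},
}

# Rank features by descending score to separate the "attack soon" group
# from the "can be postponed" group.
ranked = sorted(features, key=lambda f: feature_score(features[f]), reverse=True)
```

The value of such a model is less in the exact numbers and more in forcing the board to rate every feature on the same scale.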
Estimates → No Estimates → Estimates
We didn't estimate user stories until 2013, but the real problem with that approach was forecasting. We couldn't find a way to forecast feature duration without estimates, so we went back to user story estimation with a very basic and quick process. People often say there is no correlation between estimate and cycle time, and that is indeed quite true in our case as well.
Here is the chart that shows the correlation between Cycle Time and estimated Effort for a single development team:

So maybe our hope to forecast better with estimates is an illusion, but estimation doesn't take much time and helps us notice and split large stories, so we keep this practice for now.
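Checking the "no correlation" claim on your own data is straightforward: compute the Pearson correlation between estimates and cycle times. A minimal sketch, with made-up data points rather than our real team's numbers:

```python
# Pearson correlation between story estimates and cycle times.
# The data points below are invented for illustration.
import math

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

estimates = [1, 2, 3, 5, 8, 13]     # story points
cycle_days = [4, 12, 3, 9, 6, 11]   # days from "in progress" to "done"

r = pearson(estimates, cycle_days)  # a value near 0 means no linear relationship
```

A coefficient close to 0 on a reasonably large sample is what a chart like the one above tends to show.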
Smaller user stories
Splitting stories is hard. Period.
Tracking
| | 2008 | 2009 | 2011 | 2012 | 2013 | 2014 | 2015 |
|---|---|---|---|---|---|---|---|
| New Tracking and reporting tools | Task Board, Iteration burn down, release burn down, Targetprocess | Kanban Board, Cycle time | Builds board | live roadmap on the wall | Interactive roadmap, various new reports in Targetprocess | ||
| Time tracking | We track spent and remaining time on tasks and user stories. | We don't track time | |||||
| WIP Limits | Time-boxing | We have a limit of 3 user stories or bugs in progress. | Flexible limits, not defined clearly | 2 user stories in WIP per developer (1 is recommended). In general a mini-team decides for itself. | No clear WIP definition, people decide how to work | ||
| Retrospectives | every 2 weeks | We run Just In Time meetings instead of periodic meetings. We have an Issue Board, limited to 3 items. When the limit is hit, we run a meeting. | We have stop-the-line meetings with people related to the issue. They are quite rare now. | No retrospectives | regular on a team level, teams decide | company wide retrospectives | |
| Daily meetings | Yes, at 10 am, 10 minutes on the average | Yes, at 11 am, 15 minutes on the average | Yes, at 11 am, 7 minutes on average | ||||
WIP limits
We’ve discovered that it is extremely hard to apply WIP limits to development teams. We tried several times, but they just didn’t stick. In general, a WIP limit should affect cycle time. Here is the average cycle time by quarter for all user stories. You can see a clear positive trend: cycle time is getting shorter. Most likely other practices helped, but further cycle time reduction may be problematic without good WIP limits.

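The link between WIP and cycle time is captured by Little's Law: for a stable system, average cycle time equals average WIP divided by throughput. The numbers below are illustrative, not our team's real metrics:

```python
# Little's Law: average cycle time = average WIP / throughput.
# All numbers here are hypothetical examples.

def avg_cycle_time(avg_wip, throughput_per_week):
    """Expected cycle time in weeks for a stable system (Little's Law)."""
    return avg_wip / throughput_per_week

# A team finishing 4 stories per week with 12 stories in progress:
before = avg_cycle_time(avg_wip=12, throughput_per_week=4)  # 3.0 weeks
# Capping WIP at 6 halves the cycle time at the same throughput:
after = avg_cycle_time(avg_wip=6, throughput_per_week=4)    # 1.5 weeks
```

This is why missing WIP limits put a floor under cycle time improvements: without reducing work in progress, the only other lever is raising throughput.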
Metrics
We like physical things, and in the kitchen we created a new roadmap that shows major features, releases, new hires, sales indicators and "fuckups". Some people considered the focus on fuckups destructive, so we are thinking of replacing it with "wins".

We now have custom reports in Targetprocess and can track more interesting trends. For example, we see that we complete more user stories every quarter, so the company's velocity is increasing.

Retrospectives
In 2013 we were so focused on pushing a new major version of Targetprocess to the market that we abandoned retrospectives. When you work without retrospectives for more than a year, something bad happens: you start to feel exhausted and productivity drops. I really don't recommend ignoring this practice; it affects everything, including team morale, productivity and quality. In 2014 we decided to do retrospectives regularly, with each team deciding how often to have them. Interestingly, after this huge pause retrospectives in some teams were quite long, while in a few teams they were short, with a very brief list of action items. Teams are different, and it seems some really jelled teams do not need retrospectives, since they solve problems on the fly.
In 2014/2015 we tried all-hands meetings to brainstorm ideas for various problems, like productivity and development process improvements. It was a very successful practice, and in general it is not that hard to run a full-day meeting of 60 people.
Daily meetings
This practice has lived on for years, and we have never neglected it (although some discussions about terminating it did happen). In 2013 we had more than 20 people at a single meeting and it became long and boring. So we introduced a rule that one person from every team speaks about the team's accomplishments and plans; this shrank the meeting to 5-7 minutes, and we keep the practice to this day.
Meetings
| | 2008 | 2009 | 2011 | 2013 | 2014 | 2015 |
|---|---|---|---|---|---|---|
| Local / Team level | Release planning (team) Iteration planning (team) Iteration demo (team) Retrospective (team) Daily (team) | User Story kick-start (3-4 people) User Story demo (4+ people) Retrospective (team) Daily | User Story kick-start User Story demo Stop-the-line (team) Daily | User Story kick-start | User Story kick-start Retrospectives (every 2-3 months) | User Story kick-start Retrospectives (every 2-3 months) Product Specialist + Development Team (weekly) |
| Global / Cross-team level | Product board (weekly) Development board (weekly) | Product board (monthly) Global Retrospectives (1-2 per year) Development board (monthly) Features demo (monthly) | Features demo (ad-hoc) Global Retrospectives (every 2-3 months) Development board (weekly) |
Just two meetings have survived over the years: the Daily and User Story kick-starts. It is really important to get the whole team together and discuss a user story before implementation. With the company's growth we now have more global meetings that include people from several teams or even the whole company. We do company-wide feature demos and infrequent retrospectives. We have Development and Product Boards that solve cross-team problems and set the product vision; these committees meet regularly as well.
Our teams are mature enough that there is no need for frequent retrospective meetings. One new meeting we have introduced brings the team together with a product specialist to discuss customers' requests and bring development teams closer to real users.
UX and Craftsmanship
| | 2009 | 2011 | 2012 | 2013 | 2014 | 2015 |
|---|---|---|---|---|---|---|
| UX | Wireframes | Sketches (many ideas) Live prototypes live usability tests on prototypes design studio | Sketches (many ideas) Live prototypes Live usability tests on a real system | Cross-functional UX teams Sketches (many ideas) Live prototypes | Cross-functional UX teams Sketches (many ideas) Huge gap between UX and Development. | Cross-functional UX teams Sketches (many ideas) Live prototypes design studio |
| Craftsmanship | Workshops | Salary depends on learning Mini-conferences A paid major conference visit for every developer Workshops Friday shows 5 hours per week to personal education | Salary depends on learning Mini-conferences A paid major conference visit for every developer 5 hours per week to personal education or project | Internal conferences 8 hours per week to personal projects (Orange Friday) | Focus on getting things done | 20% time on own projects (team orange months) Internal conferences Developers’ backlog to handle the technical debt |
UX
We started forming cross-functional UX teams a few years ago, and in general it works pretty well. There are certainly gaps in knowledge, but people provide different perspectives and understand problems better. A UX team usually consists of 4-6 people, including a product specialist, a developer, a tester, a designer, a support engineer and the feature owner.
We used to have a more formal approach to UX with a single UX team; now things are quite unstructured, which sometimes slows us down. We are trying to bring a bit more formality back into our UX process. The main problem is that feature owners need more knowledge about UX processes, so we are starting formal education to close this gap.
Craftsmanship
In 2010 we experienced a huge shift in craftsmanship practices, but in 2014 we decided to focus on the development of new features and stopped all educational programs, like conferences and 20% time on personal projects. The effect was mixed. While we improved short-term productivity and completed more features, the price was burnout and an overall morale drop. In retrospect I think it is OK to focus, but one year of focus is the limit; after that productivity drops and you might lose some key developers. It seems better to alternate focus and relaxation periods to create a healthy environment and teams that can run a 5+ year product development marathon.
Here are the company’s focus changes by year:

Development practices
| | 2012 | 2013 | 2014 | 2015 |
|---|---|---|---|---|
| Source control and branching | Git. We are trying a single-branch development again to enable continuous delivery. It is impossible to do that with the feature-branches. | Back to feature branches. Gitflow with a mandatory Code Review | ||
| Pair programming | Pair programming is completely optional. The mini-team decides for itself. | We use pair programming less and less. This practice is fading away. | ||
| TDD/BDD | Clear focus on BDD. We've customized NBehave for our needs and implemented a VS add-in to write BDD scenarios. | Stopped BDD usage. We discovered that BDD tests are hard to navigate and maintain. Even custom-developed tools didn't help. | Stopped TDD. Now we write unit tests after the code is created. | |
| Automated Functional Tests | We are migrating to Selenium WebDriver. | Custom JS test framework. | CasperJS tests | CasperJS tests are deprecated. Use Custom JS test framework |
| Continuous Integration | Still use Jenkins. The goal is to have a Continuous Delivery process eventually. | Continuous delivery is a mirage… | We created Build Board. Deployment is fully automated. | We use staging servers and roll out new builds gradually with instant roll-back if required. |
| Feature toggling | We apply feature toggling and are heading to continuous delivery process. | Per account feature toggling. Fast feature disabling. | ||
| Code duplication tracking | — | We started to track and fix code duplication | ||
| Public Builds per Year | 26 | 24 | 34 | 34* (forecast) |
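The per-account feature toggling mentioned in the table can be sketched as below. This is a hypothetical illustration (class and feature names invented); our real implementation lives in the C# server code:

```python
# Sketch of per-account feature toggling with fast global disabling.
# All names here are hypothetical, not our production API.

class FeatureToggles:
    def __init__(self):
        self._defaults = {}    # feature -> enabled by default?
        self._overrides = {}   # (feature, account) -> enabled?

    def set_default(self, feature, enabled):
        self._defaults[feature] = enabled

    def override(self, feature, account, enabled):
        self._overrides[(feature, account)] = enabled

    def is_enabled(self, feature, account):
        # An account-level override wins; otherwise fall back to the
        # global default; unknown features are off.
        return self._overrides.get((feature, account),
                                   self._defaults.get(feature, False))

toggles = FeatureToggles()
toggles.set_default("new-reports", False)       # dark launch: off for everyone
toggles.override("new-reports", "acme", True)   # enable for a single account
# Fast disabling: flip the default (or remove overrides) without a redeploy.
```

The key design point is that a toggle check is a cheap lookup at request time, so a misbehaving feature can be switched off instantly for one account or for all.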
Build Board
We created an internal tool called Build Board. It integrates Targetprocess, GitHub and Jenkins to automate the Definition of Done check for every User Story or Bug before merging a feature branch into the trunk.
Developers have to keep various pieces of information in mind for the current task. Is it already tested? Have all the tests passed? Is the integration branch open for commits? And so on. All this information is spread among different tools that are usually not integrated. Build Board provides a single place to see all of it.
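The kind of Definition of Done gate that Build Board automates can be sketched like this. The check names and story fields are hypothetical, not Build Board's actual data model:

```python
# Minimal Definition-of-Done gate in the spirit of Build Board.
# Check names and story fields below are illustrative assumptions.

DOD_CHECKS = [
    ("tested",        lambda s: s["state"] == "Tested"),
    ("tests_green",   lambda s: s["build_status"] == "passed"),
    ("code_reviewed", lambda s: s["review_approved"]),
]

def failed_checks(story):
    """Return the names of failed checks; an empty list allows the merge."""
    return [name for name, check in DOD_CHECKS if not check(story)]

story = {"state": "Tested", "build_status": "passed", "review_approved": False}
blockers = failed_checks(story)   # the review is still missing
```

In practice each check pulls its answer from a different system (Targetprocess for the story state, Jenkins for the build, GitHub for the review), which is exactly the aggregation Build Board does for us.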

Code Review and Green Develop Branch
We introduced code review in 2013 and it was a success. In 2014 we introduced a special rule, "green trunk": any commit to the develop branch must contain at least one tested User Story or Bug. The person making the commit must ensure that the tests pass on the trunk, and revert the last commit if they fail. It was hard to enforce this practice, but with time it did help to keep the trunk green.
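The rule itself is simple enough to express as a small helper. In this sketch the `run` callable is injected and would in reality wrap git and the CI test suite; the command strings are illustrative, not our actual scripts:

```python
# Sketch of the "green trunk" rule: merge, run the full suite on develop,
# revert the merge commit if the suite goes red.
# run(cmd) -> bool is a hypothetical wrapper around git/CI.

def merge_with_green_trunk(branch, run):
    """Return 'merged', 'reverted' or 'conflict' depending on the outcome."""
    if not run(f"git merge --no-ff {branch}"):
        return "conflict"                 # merge failed, resolve manually
    if not run("run-full-test-suite"):
        run("git reset --hard HEAD~1")    # revert the red merge commit
        return "reverted"
    return "merged"

# With a fake runner where only the test suite fails, the merge is reverted:
result = merge_with_green_trunk("feature-x", lambda cmd: "test" not in cmd)
```

The hard part is not the logic but the discipline: someone has to watch the suite and actually perform the revert, which is why the practice took time to stick.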
Continuous Deployment and Branches
Our branching path has been Single → Feature → Single → Feature. The last change is interesting. We hoped to introduce continuous deployment and migrated to a single branch, but it turned out our build/test infrastructure was just not ready. The two main problems were slow tests and unstable tests. It takes years to stabilize tests and create a decent build process. Our functional tests are quite slow and unstable, so we re-run them several times to ensure correctness. We have more than 60 virtual servers, but that is still not enough to run the tests in under an hour. Single-branch development with such slow tests is just impossible: master was seldom green, and after struggling for several months we reverted to feature branches.
Stats
Some real numbers that help to understand the size and complexity of the product.
| | 2012 | 2015 |
|---|---|---|
| Unit Tests | 5,200 | 8,540 |
| Functional Tests | 2,500 (1,700 Selenium and 800 JavaScript Tests) | 5,200 (2,700 Selenium and 2,500 JavaScript) |
| Virtual machines to run tests | 38 | 62 |
| Short cycle build time (without package) | 40 min | 50 min |
| Full cycle build time (with package) | 1 hour | 2 hours |
| Git commits | 8,400 | 14,090 (in 2014) |
| LOC | | 441,000 total: Client – 145,000 (JS), Server – 296,000 (C#) |
Note that we have many more tests in 2015 and almost doubled the number of virtual machines that run them, yet the build time has still increased. We have fought unstable tests, but we still don't have a fast and reliable environment to enable Continuous Delivery. The main lesson learned is that it is extremely hard to do CD for a monolithic application. We are considering moving to more microservices and deploying each service independently, thus decoupling the system and reducing test/build time for every module.
Wrap Up
In the conclusion of that article three years ago I wrote, "Still our development process is far from perfect". I can only repeat this again. We are slowly getting better, while some areas even degrade slightly and demand renewed attention to shine again, like User Experience. In general our UX processes are OK, but not as exceptional as we want them to be. A software development process has no destination; it is just a journey.
P.S. Reddit discussion
