“What do you think about overtime working?”

In one form or another, that question has been put to me in 90% of the job interviews I’ve ever had, and my answer has probably changed every time I’ve given it.

Overtime working.

At the time I was first asked the question, I had probably never stopped to fully comprehend the nature of the 40 hour week. In primary school I’d been cursorily told about the Magna Carta, English Feudalism and the Lord’s day, and in my teenage years I’d probably watched Fight Club, Easy Rider and Jerry Maguire, while quietly thinking to myself, “I’m never gonna be a wage-slave to the Man, man!”. I’d certainly never heard of the 8-8-8 movement, the theories of Henry Ford or Karl Marx on the matter, or the origins of the perfect bureaucracy…

What a question to ask in an interview! I thought this was a tech job interview, not a philosophy jam. Where are the questions on how many tennis balls can fit in a school bus? Overtime working is such a complex issue, and one that games and other tech industries have so endemically failed to get to grips with, that the culture of persistently working beyond contracted hours has spawned its own term – Crunch.

// The #RyseFacts Twitter Storm

(Oh what, you thought this was going to be about R*? Nah, I started writing this article 5 years ago.)

#RyseFacts was an interesting moment in games development. To recap: the #RyseFacts campaign formed part of the pre-release publicity for the Xbox One launch title Ryse: Son of Rome, revealing behind-the-scenes snippets about the game’s development. One of the campaign’s first tweets was as follows:

The resulting outpouring of commentary formed a Twitter storm far beyond anything the game’s community team might have expected from such a simple factoid. The debate quickly became heated, instantly focussing on the divisive subjects of deadlines, ambitious specifications, overtime and developers’ health, all of which was widely reported within the gaming press [1][2][3]. The outcome for the developer was a toxic hashtag and a hastily dropped PR campaign.

Some industry members came out in condemnation of Crunch. Cliff Bleszinski, former Design Director at Epic Games on Gears of War, and Simon Unger, Animation Director on IO Interactive’s Hitman: Absolution, tweeted:

Cliff Bleszinski @therealcliffyb
 "Crunch time" = bad managment. #RyseFacts gamasutra.com/view/news/2024…
 03:18 PM - 16 Oct 13
Cliff Bleszinski @therealcliffyb
 This just in: Next gen AAA console launch game with many scripted sequences required lots of crunch. #noshit #RyseFacts #unsustainable
 03:30 PM - 16 Oct 13
Simon Unger (@Simonunger)
 Bragging about how many overtime meals your team has had is like bragging about how many times you've crashed your car.
 11:05 PM - 15 Oct 13

However, the condemnation was not unanimous; there were plenty of people willing to declare their support for overtime working:

John @JonVoy
 @therealcliffyb no dishonor in working hard and having passion about what you do ,CRYTEK is a amazing company #RyseFacts
 03:49 PM - 16 Oct 13

I reflected previously that one of the aspects that made the #RyseFacts tweet so unusual was that a developer itself had revealed such detailed information about the management of a project under active development and the long hours being worked by its team. While previous public discussions of long working hours in the games industry have raised the physical and mental toll on employees and their support networks – notably the EA Spouses and Rockstar Wives (oh wait, maybe this article *is* about R*…) open letters – official comment on overtime working is highly unusual.

Usually the debate is conducted from individual to individual. In June 2014, Corrinne Yu, Vice President of Engineering at General Motors, a former graphics programmer at Naughty Dog and former Principal Engine Programmer at 343 Industries on Halo 4, caused a stir with her comments on Twitter.

On occasion games company directors have spoken about overtime working. Cliff Bleszinski’s former colleague Mark Rein, Vice President at Fortnite developer Epic, said in July 2012:

‘I’m sure we crunch just like everyone else. But passionate people will want to ship the best product they can.’

– Mark Rein

// Does Crunch Make Things Better?

It is clear that the subject of Crunch is a deeply personal issue on both sides of the debate, and that the culture of Crunch would never be sustainable without the support of the developers who undertake it.

However, the question remains: does working longer hours demonstrate passion and commitment and result in an overall improvement in quality, or is it passion theatre that hides underlying failings in management? To remove the emotive language: is there a causal link between high quality output and high intensity working, or is it a matter of personal preference? Should a conscientious manager encourage developers who show extra commitment to working hours, or insist they stop work at 5 o’clock?

Well actually, for all the debate on the subject, it’s quite hard to say for sure. The anecdotal evidence is insufficient to make a solid judgement and measurable data is quite hard to come by. Indeed, when I first began writing this article in October 2013, the most striking aspect of working hours was the prevalence of political motivations in forming the 40 hour work week.

“Eight hours’ labour, Eight hours’ recreation, Eight hours’ rest”.
– Robert Owen, 1817

In Das Kapital, 1867, Karl Marx wrote, “By extending the working day, therefore, capitalist production…not only produces a deterioration of human labour power by robbing it of its normal moral and physical conditions of development and activity, but also produces the premature exhaustion and death of this labour power itself.”

By the beginning of the 20th century, the 40 hour work week was championed by revolutionary socialists and free market capitalists alike. Henry Ford was one of the first major industrialists to embrace the 8-hour day, 5-day week. In an interview with his biographer in 1922 he stated, “The country is ready for the five day week. It is bound to come through all industry. In adopting it ourselves, we are putting it into effect in about fifty industries, for we are coal miners, iron miners, lumbermen, and so on. The short week is bound to come, because without it the country will not be able to absorb its production and stay prosperous.

“Just as the eight hour day opened our way to prosperity, so the five day week will open our way to a still greater prosperity.”

In modern times, the UK Department of Trade and Industry commissioned a 2003 report to review the research literature and secondary analysis on the effectiveness of long hours working (defined as more than 48 hours a week), particularly with respect to organisational performance and productivity.

The review looked at working time patterns in the UK and made comparisons with the EU and other developed countries, with a view to explaining why the UK workforce had some of the longest working hours in Europe.

“There are considerable theoretical and methodological difficulties in measuring the impact of long hours working on organisational performance. Overall, however, on the basis of the current evidence, it is not possible to establish conclusively whether long hours working has beneficial, detrimental or neutral overall effects.”

While generally inconclusive, the review did highlight specific concerns with shift working and health and safety issues caused by fatigue.

“The review of the research literature shows clear grounds for concern about the adverse effect of long hours working and (the frequency of) health and safety incidents. However, most of this research focuses upon specific occupations (e.g. long distance lorry drivers, the medical professions), which precludes more general conclusions being drawn.”

While many of the hazards in heavy industry that may cause severe injury or loss of life are not present in the digital workplace, the concept of industrial accidents may be repurposed for the modern creative industries in the form of digital accidents that cause loss of earnings or endanger employment. Such accidents could take the form of irretrievable data loss, security vulnerabilities that disclose personal private information or, more simply, a collection of bugs and poor user experiences that cause users to avoid your game.

This overall lack of data led me to shelve this article. However, in December 2014, the Game Outcomes Project published their research on the factors that make or break game development projects. It was the fourth part of this summary, “Crunch Makes Games Worse”, that really piqued my interest.

The summary focuses on two hypotheses relating to the effectiveness of Crunch:

  • The “Extraordinary Effort” Argument – “extraordinary results require extraordinary effort, and extraordinary effort demands long hours.”
  • The “Crunch Salvage” Hypothesis – the idea that crunch is more likely to be used on projects in trouble, that this “trouble” is itself the cause of the poorer project outcomes, and that when crunch is used in this way, it leads to outcomes that are less poor than would otherwise be the case.

In both cases the authors found strong correlations refuting the hypotheses: far from being beneficial to projects, Crunch was clearly detrimental to their economic and critical outcomes.

// So Why Crunch?

There are only three things that you are working with as a development manager: resources (people and money), features and the schedule. Changing one has an impact on at least one other axis, usually two.

– Jim McCarthy, 21 Rules of Thumb for Shipping Great Software on Time

In traditional software development, the launch day is the single most expensive day in the project. All the development costs have been spent and marketing committed, yet no revenues have been generated. There is only so much capacity in the supply chain: manufacturing, warehousing, distribution, retail space, billboards, advert breaks, website banner ads, feature articles, endorsements and digital store front page promotions must all be booked in and paid for in advance. To add to those pressures, gaming is a highly competitive sector.

In a normal sales period, timing is a major factor: if Grand Theft Auto shipped last month, Batman this month and Call of Duty the month after, the developer must hit the deadline or face the very real possibility that their game just won’t sell enough to recoup the development costs. Worse still, if that game is an exclusive, launch day title for a brand new console in a multi-billion dollar industry, then the deadline is everything; it will not move.

If the timing cannot be changed, another option is to try and reduce the amount of work required by cutting features from the specification. However, if the goal is to make an astounding, new gameplay experience to sell an expensive, new entertainment device to consumers, it is highly unlikely that the stakeholders will accept reducing the feature set around which all the marketing and sales pitches have already been made.

So if time & features are intractable and the project is behind, the only option is to look at resources. How about scaling up?

There are only so many developers in the world, and integrating new people into a team is extremely difficult: new recruits take time to bed in, and adding more people increases the complexity of communication. Once you’ve scaled up to ship the project, what do you do with all those extra people when the project is over? Have people sitting around doing hobby projects? Roll the entire team straight into full development of another major project? Let people go again?

Contractors are a potential solution, but due to the high demand for such highly skilled individuals and the time constraints of the project deadline, they can come with a high price tag.

So the only option left to managers appears to be to increase the output of existing resources, and if or when that proves insufficient, to add more overtime. A former colleague of mine who’d worked in production once described AAA games as ‘crunch driven, miracle based software development’ and, for the most part, I wouldn’t disagree with that.

When the schedule slippage is recognized, the natural (and traditional) response is to add manpower. Like dousing a fire with gasoline, this makes matters worse, much worse. More fire requires more gasoline, and thus begins a regenerative cycle which ends in disaster.

– Frederick P. Brooks, Jr. The Mythical Man-Month

// How to Avoid Crunch

Firstly, a large amount of emphasis should be put on the organisational effectiveness of the development team. In their analysis of the 2014 survey results, the Game Outcomes Project identified three factors positively correlated with crunch and one negatively correlated with it:

  • +0.51: “There was a lot of turnover on this project.”
  • +0.50: “Team members would often work for weeks at a time without receiving feedback from project leads or managers.”
  • +0.49: “The team’s leads and managers did not have a respectful relationship with the team’s developers.”
  • -0.49: “The development plan for the game was clear and well-communicated to the team.”

As such, it is the duty of managers to take hold of their projects and ensure good communication within their teams and with external stakeholders, whether they are corporate clients or end consumers.

It is fundamental, too, that the risks in a project are well understood and mitigated, while the scope of work is kept in check (see Jim McCarthy’s 21 Rules of Thumb for Shipping Great Software on Time for more on this topic). Shifting the risk out of manufacturing, distribution & warehousing can give a team a huge advantage here. The advent of cloud computing and digital distribution has revolutionised the iteration times and the confidence with which a team can deliver a new version of a digital product to the end user.

The incredible success of Minecraft demonstrated the value of generating income early and often by releasing Alpha builds to the paying public. It also maintained user interest by keeping the ideas simple and the production values down through systemic, procedural content, and by allowing users the freedom to customise and modify the product to do more of the things they want over time.

The lessons of Minecraft’s early access model have fed back into the games industry far and wide. When Valve introduced Greenlight into their long established Steam digital distribution platform, it allowed smaller developers to reach an audience early without the expense of traditional marketing channels – a model that has since evolved into Steam Direct.

Valve have been masters of this new development model ever since the release of Half-Life in 1998, taking focused core gaming experiences and pairing them with community features that provide great player retention over time. With the release of Steam in 2003 and Half-Life 2 in 2004, Valve began the radical shift to a fully digital distribution model.

With Valve’s more recent games, their digital model has evolved yet again to take on aspects of the hugely successful and far-reaching Free-to-Play market, as typified by Zynga, while trying not to alienate their core audience.

The latest iteration, as seen in games such as Dota 2 and Counter-Strike: Global Offensive, focuses on a well defined core game experience before adding player value incrementally over time. The success of this model can be seen in the analysis of Steve Gaffney, a former Director at Splash Damage and now Program Manager at Google DeepMind:

Is it any wonder, having seen the incredible success of Valve’s transition, that Mark Rein and Epic should follow that trajectory – utilising their Unreal Engine licensing as a platform, moving away from deadline fuelled, crunch heavy development, and releasing Fortnite as a service? Not really.

// Passion Propaganda

(Okay I’m going to have to talk about R* now aren’t I?)

Yet here we see Rockstar, a world spanning, mega-developer – with a successful games as a service business already in operation in the form of GTA Online – bragging about how much crunch their team are doing, when:

  1. They have seen others getting into hot water over it
  2. They have a record of getting into hot water over it themselves
  3. They have a history of utilising hype to gain front page media coverage

Do they have a game out next week by any chance?

// So… What *Do* I Think About Overtime Working?

I guess I can’t avoid it any longer; I’m going to have to answer the overtime question once and for all.

The Graduate me answer:

“Whatever overtime is required to get the job done!”

The Experienced me answer:

“It’s best not to work overtime but sometimes there are exceptional circumstances where some extra effort is required.”

The Director me answer:

“Development is always a trade off between Time, Resources and Features. The requirement for overtime is indicative of a failure of management. Tired people do bad work, while being horrible to each other and I have high quality standards in both code and people. Actually, you know, this is a complicated issue. Here, I wrote a blog about it…”


So what is the best technical interview in my view? That honour goes to two game developers, Avalanche Studios and Dovetail Games (others may use similar tests, but these are the two I know of), whose tests go something along the lines of:

  1. Send the candidate code for a retro game clone, e.g. Pacman or Asteroids.
  2. Tell them to get it running on their machine and fix any bugs encountered.
  3. a) If interested in creative coding: improve the game in any way they see fit, e.g. new features, enhanced movement, etc.
    b) If more interested in core tech: profile the code, then make it run 20% faster (10% is fairly easy, 20% shows skill, more than 20% shows real mastery).
  4. Send the result back.

The reasons I find these tests superior to all the other technical tests I’ve heard of are as follows:

  • The tests are pragmatic.
  • The tests are real.

Does it matter if a programmer has memorised the technical standards for every programming language invented? Does it matter if a programmer can recite the Gang of Four or every bit trick under the sun? How is that programmer actually going to work with your code base? What will they add? Will they make a mess of things? Will they have the ability to work unguided? Will they respect the architecture and coding standard? Will they know when the right time is to throw the coding standard out of the window?

There is a place for developers who can fathom ideas never undertaken before, but that is not the norm of development. Innovation rarely comes in bounding leaps; it usually comes from careful iteration on a theme. For those reasons, I find this type of pragmatic, real world approach by far the most useful way of determining whether someone will be up to the technical challenge of working in an existing team.


In my previous post I talked about how there are still some very bad interview processes being used by top firms when hiring developers. In the last two weeks I have seen many of my former colleagues going for interviews with a wide array of firms, including some of the best technology and financial companies in the world. This has given me a great opportunity to compare and contrast the hiring process in its current state and has led me to two further observations.

The first is quite obvious – hiring processes are as varied as the firms themselves. Some are very old fashioned, some are incredibly fastidious, while occasionally a firm will surprise me by creating a programming test that is both novel and directly attributable to the kind of real world work one of their developers might be expected to do on a daily basis. Congratulations to those companies for taking the initiative – everyone else, you’ve got some competitor research to do if you don’t want to lose out in the hiring game. Remember: the candidate is interviewing you as much as you are interviewing them.

The second observation is one that starts to show my age. Some of the tests I’m seeing over my colleagues’ shoulders are very familiar indeed. In fact, one was almost an exact facsimile of a programming test I sat over 10 years ago. Another of the tests – one that stood out from the crowd for its pragmatic approach – was almost identical to a test a friend once described to me that he’d sat in Sweden. A quick dig around the wonders of social networks informed me that in both cases I could directly trace someone I’ve previously worked with to these firms at some point in the last 3 years. I can only assume they took their coding tests with them when they moved, ensuring that the lineage lives on. Just please, don’t take the bad tests with you when you move on and, whatever you do, don’t let them get into the cloud or we’ll never be rid of them!


I’ve been doing a round of interviews lately, trying to replace a contract that has recently come to an end, and I’ve come to a conclusion – I’m really good at my job but I’m terrible at interviews!

Well, not all interviews. There are some interviews I’m great at. The ones where a real conversation can be had about a diverse range of topics and in which the answers can have some nuance to them. The kind of interviews where I feel I have come out knowing a bit more than I went in with regardless of which side of the table I’m sat. I like those interviews.

I hear you walk good. Let me tie your shoelaces together, blindfold you and spin you round a few times. Now show me your walk.

The kind of interviews I’m terrible at all have a very particular rhythm to them. The meeting starts, pleasantries are exchanged (mostly – although some technical interviewers I’ve met don’t even give their names) and everyone settles into their chairs. A sheet of paper and a pen are slid across the table. My heart sinks: this is going to be one of *those* interviews, the ones with very black and white questions.

Am I perhaps putting pride before a fall? Shouldn’t a candidate expect to jump through some hoops to get the job? Isn’t this all just good due diligence? Don’t most programming problems come down to boolean logic, meaning the answers can be deemed correct or not? Well, no. I suspect that the only interview questions with objectively correct answers are the trivial cases. Those questions annoy me.

Question – “In C++, what is wrong with this reverse function?”

Answer – “It’s not std::reverse?”


Answer – “It’s in the global namespace?”

Correct Answer – “No! There is no test associated with it and there are loads of bugs!”

More Correct Answer – “Why did someone write all this code and all these tests to bloat our code base with a worse version of the C++ Standard Library? Who do we employ who is better than Howard Hinnant and why aren’t they contributing pull requests to llvm?”

Question – “Write an algorithm to count all the pairs in an array, matching a predicate, the 2 test sets are 5 elements long… with permanent pen and a single side of A4!”

Answer – “Er… vaguely this? I usually hit F7 about now and run some tests while I have a little think about the wider structure (or it’s already been gulped).”

Correct Answer – “No! There’s a bug! And it has O(n²) complexity!”

Question – “How would you optimise it?”

Answer – “Er… does this code really need optimising? What’s the real world usage of this algorithm? How often does it get run? Is this ever going to have more than 5 values passed into it? If this is all we’re doing can’t we just go perfectly wide over all cores for every combination and accumulate to an atomic variable? Can I just Map-Reduce it? Use a GPGPU?”

Correct Answer – “No! This was a set of unique numbers! You could have optimised for that!”

“Er… nothing in the brief said this was guaranteed to be a set… why isn’t it sorted? Is this really all you ask people to find out if they’re worth giving a job to? Where are the architectural system design questions? How much did you say the hourly rate for this position was?”

Don’t you have infinite monkeys somewhere that can do this?

The older I get, the more these trivial pen and paper questions irritate me. I don’t go so far as to say all these things in the interview, but I probably don’t hide my inner monologue very well. I have had a fair number of jobs now and hired a fair few people myself, yet not a single one of those jobs started with someone giving me a permanent pen and paper coding test.

My Dad used to write code on punch cards. To compile it he had to catch a train to London. I write more lines of code per day than my Dad did.

We are all standing on the shoulders of giants. Alan Turing gave the world its greatest tool since the wheel and it is the height of arrogance to consider that any code is written in a bubble. If you are not assessing the capability of your developers to write code in a meaningful environment, in context to other code and the data it transforms, then you are failing to do your due diligence and opening yourself up to immense risk.

Hiring is a two way process with the candidate trying to assess the quality of the development team just as much as the interviewer is trying to assess the quality of the candidate. The goal of an interview is to assess the depth and breadth of a candidate’s understanding and ability to apply principles in practice, not to find out how much someone enjoys toy puzzles.

So, hiring managers, I have one thing to ask. Next time you take the effort to look at a CV, check the references and talk to a candidate on the phone, don’t then call them in for a pen and paper test of the most trivial nature, wasting their time and yours. And even if you change nothing else about your tests, at the very least give the candidate a text editor and a delete key and assess whether they can even type!

OnRef – Game Outcomes Project

Finding publicly available data on the factors that make or break development projects is hard. As such, the Game Outcomes Project should be congratulated for bringing a large amount of data on a wide range of topics to the fore.

The survey results can be found in Paul Tozour’s series of excellent blog posts at:



There is one simple fact that, while often hard to accept, is true of every single person who ever lived: we are all ignorant. For everything we think we know, there are almost infinite aspects of understanding that are and will always be beyond us. But that doesn’t necessarily have to be a bad thing. The key is to realise what form your ignorance takes and take action to try and steer it in the right direction.

// Blissful Ignorance

Blissful ignorance is an easy trap to fall into: everything seems to be going great, until it isn’t and you realise that your project is at risk of missing the deadline or even of total failure. Every effort should be made to avoid becoming complacent and failing to recognise faults and their potential mitigation within your organisation.

// Willful Ignorance

A far greater crime than blissful ignorance is to notice the warning signs, the cracks and creaks of a failing system, and to turn a blind eye to them, or declare that there is little that can be done. Something can always be done and if it isn’t, the trouble is just stored up for the future.

// Lucid Ignorance

If blissful and willful ignorance are the two negative aspects of ignorance, the positive counterpoint is lucid ignorance. While at first, lucid ignorance (a term I first came across in Jim McCarthy’s 21 Rules of Thumb) seems an unusual concept, its symptoms are very commonly understood: a daunting feeling at a great challenge or the realisation that you are out of your depth.

The natural reaction to such trepidation is often to try and suppress it. However, in the first of his rules, Jim asserts the importance of embracing lucid ignorance and not accepting a group belief in pseudo-order or magical conversions of ignorance into knowledge: question and demand acknowledgement of exactly where the risks – the lack of prior experience, lack of resources and lack of understanding – lie in the project, and work as a team to get from that state of unknowing to a well understood vision, goals and structure.

‘It is essential not to profess to know, or seem to know, or accept that someone else knows, that which is unknown.’

– Jim McCarthy, 21 Rules of Thumb for Shipping Great Software on Time

As such, it is important never to accept shaky justification or perceived wisdom as reason enough for project decisions. If no-one can justify why you’re doing something, that should be a major warning sign that it might be the wrong thing to do. If you are doing something purely because someone else does it, then you may be missing the whole point, or just driving your product into a crowded marketplace, against established competitors, while missing the opportunity to find your own unique selling point.

// Mitigations for Ignorance

So what can be done to try and improve the amount of lucid ignorance within a team?

  • Don’t delay or defer the difficult aspects of your design. Attempt to establish, within well understood parameters, exactly what is and isn’t possible – both theoretically and from a practical standpoint.
  • Don’t assume that something you are told is true. A concept that appears easy may be unfeasible, while another that seems impossible at first may only require a little lateral thinking. This is particularly important if it involves the collaboration of multiple parties: check with all of them that they concur.
  • Establish good review procedures, welcome the input of even the most junior members of your team and frequently re-examine your group assumptions to ensure they still hold.
  • Don’t skimp on project maintenance or dismiss warnings as ‘minor’. While working on the next big feature can seem more appealing than fixing existing ones, it is easy for major issues – the kind that put projects at risk of failure or damage relationships with clients – to be obscured or overlooked.


// Response Windows

When talking about bugs and issues, we look at the relation between opened and resolved issues to give us general trends in the overall quality of the project and the group effort being employed in the pursuit of shippable quality. The key is the overall number of issues, classified by urgency & severity from critical to minor. This gives a good overall impression of where the project currently stands.

However, in bug count trending there is no inspection of the particular issues, how long they may have been in existence or how badly they have affected the ability of the team to undertake their daily work. Triage lists are often created to target issues but the overall shape of the data may allow certain issues to lurk for long periods without being addressed.

What if the QA of games was approached from the perspective of security vulnerabilities in software and what different information would that give us about the state of a project?

‘The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software, to when access was removed, a security fix was available/deployed, or the attacker was disabled.

Vulnerability management is the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities. This practice generally refers to software vulnerabilities in computing systems.’


Here the key issue is not how many bugs exist but how long it takes for a bug to be addressed, giving an overall impression of the stability of the product and the responsiveness of the team. This may be a useful metric for the day-to-day usability of the product and the immediate quality of its features.
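As a minimal sketch of the idea (the issue records, field layout and priority names below are invented for illustration, not taken from any particular bug tracker), the Response Time for each issue is simply its resolved timestamp minus its opened timestamp, which can then be summarised per priority:

```python
from datetime import datetime
from statistics import median

# Hypothetical issue records: (priority, opened, resolved).
issues = [
    ("critical", datetime(2023, 5, 1, 9, 0),  datetime(2023, 5, 1, 13, 0)),
    ("critical", datetime(2023, 5, 2, 10, 0), datetime(2023, 5, 2, 18, 0)),
    ("minor",    datetime(2023, 5, 1, 9, 0),  datetime(2023, 5, 8, 9, 0)),
    ("minor",    datetime(2023, 5, 3, 9, 0),  datetime(2023, 5, 20, 9, 0)),
]

def response_times_by_priority(issues):
    """Response Time = time from an issue being opened to being resolved."""
    times = {}
    for priority, opened, resolved in issues:
        hours = (resolved - opened).total_seconds() / 3600
        times.setdefault(priority, []).append(hours)
    return times

times = response_times_by_priority(issues)
for priority, hours in sorted(times.items()):
    print(f"{priority}: median response {median(hours):.1f}h over {len(hours)} issues")
```

In a healthy project, the critical median should sit orders of magnitude below the minor one, as it does in this invented data set.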

Response windows may also form a useful guide as to the relative efficiency of the production pipeline. How long does it take the team to get a feature turned around? How long does a build or feature go without being in a workable state? How long do niggling issues persist without being dealt with? How good are individual teams at maintaining existing features? How quickly can the team get a build together?

If teams were to focus on how responsively they can iterate content and address issues, how would that affect the perceived quality of the product, and the developer’s perceived willingness to change for the better, in the opinion of their peers and customers?

Q – What might the data look like?
  • Modelled as a distribution, the data should form a roughly bell-shaped curve.
  • The Response Time is the total time from the issue being introduced to being resolved.
  • The Response Window is the area underneath the curve defining the time in which issues are addressed.
  • Urgent bugs would hopefully trend to the left-hand side of the distribution, with a sharp peak and decay.
  • The lower the priority of an issue, the further to the right and the flatter the expected peak.

Example Response Window graph for bugs (x = response time, y = number of issues):
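A graph of that shape can be sketched by bucketing response times into bands; the sample hours and the 24-hour bucket size below are invented purely to illustrate the expected shapes:

```python
from collections import Counter

# Hypothetical response times in hours for two priority bands.
urgent_hours = [2, 3, 3, 4, 6, 8, 12]
minor_hours = [24, 48, 72, 72, 120, 240]

def response_histogram(hours, bucket=24):
    """Bucket response times so the counts trace the Response Window curve."""
    return Counter(int(h // bucket) * bucket for h in hours)

# Urgent issues should peak in the first bucket (the left of the graph);
# minor issues spread further right with a flatter peak.
print("urgent:", sorted(response_histogram(urgent_hours).items()))
print("minor:", sorted(response_histogram(minor_hours).items()))
```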

Q – What would good news look like?
  • Really urgent issues would be dealt with in a very short number of hours with very little tail to the distribution curve.
  • Higher priority issues would be dealt with before lower priority issues.
  • Not all bugs are urgent.
  • Not all bugs are minor.
Q – What would bad news look like?
  • Urgent issues being further to the right of the graph than lower priority issues.
Q – What further data might be extracted?
  • The time to triage, fix, test, & resolve issues
  • Per component, strike team, feature or build data
Q – What actions could be taken to improve the response?
  • Prioritise the most urgent issues to reduce the Response Window.
  • Try to make response windows as short as possible.
  • Try to improve workflows or ramp up resourcing for the stages of the pipeline identified as the biggest contributors to the Response Time.

OnRef – The Mythical Man-Month

One of the first books I was ever recommended in my first job was Frederick P Brooks’ The Mythical Man-Month. In this book, the former development manager for IBM’s System/360 mainframe offers a series of essays on software development.

Though some of the technology and processes in these essays may seem odd to modern developers – dating between 1975 & 1995 as they do – many of their concepts are still insightful and refreshing. Indeed, as my understanding and experience of software development grows, it is a book I find myself returning to again and again.

Good cooking takes time. If you are made to wait, it is to serve you better, and to please you.

– Menu of Restaurant Antoine, New Orleans

More software projects have gone awry for lack of calendar time than for all other causes combined. Why is this cause of disaster so common?

First our techniques of estimating are poorly developed. More seriously, they reflect an unvoiced assumption which is quite untrue, i.e., that all will go well.

Second, our estimating techniques fallaciously confuse effort with progress, hiding the assumption that men and months are interchangeable.

Third, because we are uncertain of our estimates, software managers often lack the courteous stubbornness of Antoine’s chef.

Fourth, schedule progress is poorly monitored. Techniques proven and routine in other engineering disciplines are considered radical innovations in software engineering.

Fifth, when schedule slippage is recognized, the natural (and traditional) response is to add manpower. Like dousing a fire with gasoline, this makes matters worse, much worse. More fire requires more gasoline, and thus begins a regenerative cycle which ends in disaster.

– Frederick P Brooks, Jr. The Mythical Man-Month

Read more about Fred Brooks here.

OnRef – 21 Rules of Thumb

I cannot recommend this blog post, 21 Rules of Thumb – How Microsoft develops its Software by David Gristwood, highly enough. It republishes Jim McCarthy’s original article encompassing 21 simple concepts on shipping great software on time.

We’ll be revisiting some of these rules from time to time in relation to various subjects but, in the meantime, if you want more from Jim McCarthy then he has a couple of books too: