OnTheBestInterviews

So what is the best technical interview in my view? That honour goes to two game developers, Avalanche Studios and Dovetail Games (others may use similar tests, but these are the two I know of), whose tests go something along the lines of:

  1. Send the candidate the code for a retro game clone, e.g. Pacman or Asteroids.
  2. Tell them to get it running on their machine and fix any bugs encountered.
  3. a) If interested in creative coding: improve the game in any way they see fit, e.g. new features, enhanced movement, etc.
    b) If more interested in core tech: profile the code, then make it run 20% faster (10% is fairly easy, 20% shows skill, more than 20% shows real mastery) – a sketch of the kind of timer I’d start with follows this list.
  4. Send the result back.
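As promised in 3b, here’s a minimal sketch of the kind of scoped timer I’d drop around suspect code before reaching for a real profiler. The names are mine and no part of either studio’s actual test:

    #include <chrono>
    #include <cstdio>

    // A minimal RAII timer: construct it at the top of a scope and it prints
    // the elapsed time when the scope exits. Crude, but enough to find the
    // first hot spot in an unfamiliar code base.
    struct ScopedTimer
    {
        explicit ScopedTimer(const char* label)
            : label_(label), start_(std::chrono::steady_clock::now()) {}

        ~ScopedTimer()
        {
            const auto elapsed = std::chrono::steady_clock::now() - start_;
            const auto us =
                std::chrono::duration_cast<std::chrono::microseconds>(elapsed).count();
            std::printf("%s: %lld us\n", label_, static_cast<long long>(us));
        }

        const char* label_;
        std::chrono::steady_clock::time_point start_;
    };

    // Usage, e.g. around a hypothetical per-frame update:
    // {
    //     ScopedTimer t("update_enemies");
    //     update_enemies(dt);
    // }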

The reasons I find these tests superior to all the other technical tests I’ve heard of are as follows:

  • The tests are pragmatic.
  • The tests are real.

Does it matter if a programmer has memorised the technical standards for every programming language invented? Does it matter if a programmer can recite the Gang of Four or every bit trick under the sun? How is that programmer actually going to work with your code base? What will they add? Will they make a mess of things? Will they have the ability to work unguided? Will they respect the architecture and coding standard? Will they know when is the right time to throw the coding standard out of the window?

There is a place for developers who can fathom ideas never undertaken before, but that is not the norm of development. Innovation rarely comes in bounding leaps; it usually comes from careful iteration on a theme. For those reasons, I find this type of pragmatic, real world approach to testing by far the most useful way of determining whether someone will be up to the technical challenge of working in an existing team.

OnInterviewCycles

In my previous post I talked about how there are still some very bad interview processes being used by top firms when hiring developers. In the last two weeks I have seen many of my former colleagues going for interviews with a wide array of firms, including some of the best technology and financial companies in the world. This has given me a great opportunity to compare and contrast the hiring process in its current state and has led me to two further observations.

The first is quite obvious – the range and quality of hiring processes are as varied as the firms themselves. Some are very old fashioned, some are incredibly fastidious, while occasionally a firm will surprise me by creating a programming test that is both novel and directly relevant to the kind of real world work one of their developers might be expected to do on a daily basis. Congratulations to those companies for taking the initiative – everyone else, you’ve got some competitor research to do if you don’t want to lose out in the hiring game. Remember the candidate is interviewing you as much as you are interviewing them.

The second observation is one that starts to show my age. Some of the tests I’m seeing over my colleagues’ shoulders are very familiar indeed. In fact, one was almost an exact facsimile of a programming test I sat over 10 years ago. Another of the tests – one that stood out from the crowd for its pragmatic approach – was almost identical to a test a friend once described to me, one he’d sat in Sweden. A quick dig around the wonders of social networks informed me that in both of these cases I could directly trace someone I’ve previously worked with to these firms at some point in the last 3 years. I can only assume they took their coding tests with them when they moved, ensuring that the lineage lives on. Just please, don’t take the bad tests with you when you move on and, whatever you do, don’t let them get into the cloud or else we’ll never be rid of them!

OnInterviews

I’ve been doing a round of interviews recently, trying to replace a contract that has come to an end, and I’ve reached a conclusion – I’m really good at my job but I’m terrible at interviews!

Well, not all interviews. There are some interviews I’m great at. The ones where a real conversation can be had about a diverse range of topics and in which the answers can have some nuance to them. The kind of interviews where I feel I have come out knowing a bit more than I went in with regardless of which side of the table I’m sat. I like those interviews.

I hear you walk good. Let me tie your shoelaces together, blindfold you and spin you round a few times. Now show me your walk.

The kind of interviews I’m terrible at all have a very particular rhythm to them. The meeting starts, pleasantries are exchanged (mostly, although some technical interviewers I’ve met don’t even give their names) and everyone settles into their chairs. A sheet of paper and a pen are slid across the table. My heart sinks: this is going to be one of *those* interviews, the ones with the very black and white questions.

Am I perhaps putting pride before a fall? Shouldn’t a candidate expect to jump through some hoops to get the job? Isn’t this all just good due diligence? Don’t most programming problems come down to boolean logic, so the answers can be deemed correct or not? Well no, I suspect that the only interview questions with objectively correct answers are the trivial ones. Those questions annoy me.

Question – “In C++, what is wrong with this reverse function?”

Answer – “It’s not std::reverse?”

“No!”

Answer – “It’s in the global namespace?”

Correct Answer – “No! There is no test associated with it and there are loads of bugs!”

More Correct Answer – “Why did someone write all this code and all these tests to bloat our code base with a worse version of the C++ Standard Library? Who do we employ who is better than Howard Hinnant and why aren’t they contributing pull requests to llvm?”
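I can’t reproduce the exact function here, but a sketch of the shape these questions usually take might look something like this – a hypothetical hand-rolled reverse with the classic bug, next to the one-liner the Standard Library already provides:

    #include <algorithm>
    #include <cstddef>
    #include <utility>
    #include <vector>

    // A hypothetical hand-rolled reverse of the kind these questions present.
    // The classic bug: the loop walks the whole range, so every element is
    // swapped twice and the vector ends up exactly where it started.
    void reverse_buggy(std::vector<int>& v)
    {
        for (std::size_t i = 0; i < v.size(); ++i)   // should stop at v.size() / 2
            std::swap(v[i], v[v.size() - 1 - i]);
    }

    // The answer someone else already wrote, tested and optimised:
    void reverse_fixed(std::vector<int>& v)
    {
        std::reverse(v.begin(), v.end());
    }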

Question – “Write an algorithm to count all the pairs in an array, matching a predicate, the 2 test sets are 5 elements long… with permanent pen and a single side of A4!”

Answer – “Er… vaguely this? I usually hit F7 about now and run some tests while I have a little think about the wider structure (or it’s already been gulped).”

Correct Answer – “No! There’s a bug! And it has O(n^2) complexity!”

Question – “How would you optimise it?”

Answer – “Er… does this code really need optimising? What’s the real world usage of this algorithm? How often does it get run? Is this ever going to have more than 5 values passed into it? If this is all we’re doing can’t we just go perfectly wide over all cores for every combination and accumulate to an atomic variable? Can I just Map-Reduce it? Use a GPGPU?”

Correct Answer – “No! This was a set of unique numbers! You could have optimised for that!”

“Er… nothing in the brief said this was guaranteed to be a set… why isn’t it sorted? Is this really all you ask people to find out if they’re worth giving a job to? Where are the architectural system design questions? How much did you say the hourly rate for this position was?”
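For the record, here’s roughly what my side of A4 would hold, plus the O(n) version being fished for – assuming, and nothing in the brief said so, that the hidden constraint really was unique numbers with a ‘sums to k’ style predicate (the names are mine):

    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    // The honest answer: with an arbitrary predicate there is little choice
    // but to test every combination, hence the O(n^2).
    template <typename T, typename Pred>
    std::size_t count_pairs(const std::vector<T>& values, Pred matches)
    {
        std::size_t count = 0;
        for (std::size_t i = 0; i < values.size(); ++i)
            for (std::size_t j = i + 1; j < values.size(); ++j)
                if (matches(values[i], values[j]))
                    ++count;
        return count;
    }

    // The answer being fished for: if the values really are a set of unique
    // numbers and the predicate is something like a + b == k, a hash set
    // brings it down to O(n). Each pair is counted exactly once, when its
    // second element arrives.
    std::size_t count_pairs_summing_to(const std::vector<int>& values, int k)
    {
        std::unordered_set<int> seen;
        std::size_t count = 0;
        for (int v : values)
        {
            if (seen.count(k - v) != 0)
                ++count;
            seen.insert(v);
        }
        return count;
    }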

Don’t you have infinite monkeys somewhere that can do this?

The older I get, the more these trivial pen and paper questions irritate me. I don’t go as far as to say all these things in the interview but I probably don’t hide my inner monologue very well. I have had a fair number of jobs now and hired a fair few people myself, yet not a single one of those jobs started with someone giving me a permanent pen and paper coding test.

My Dad used to write code on punch cards. To compile it he had to catch a train to London. I write more lines of code per day than my Dad did.

We are all standing on the shoulders of giants. Alan Turing gave the world its greatest tool since the wheel and it is the height of arrogance to consider that any code is written in a bubble. If you are not assessing the capability of your developers to write code in a meaningful environment, in context to other code and the data it transforms, then you are failing to do your due diligence and opening yourself up to immense risk.

Hiring is a two way process with the candidate trying to assess the quality of the development team just as much as the interviewer is trying to assess the quality of the candidate. The goal of an interview is to assess the depth and breadth of a candidate’s understanding and ability to apply principles in practice, not to find out how much someone enjoys toy puzzles.

So, hiring managers, I have one thing to ask. Next time you take the effort to look at a CV, check the references and talk to a candidate on the phone, don’t then call them in for a pen and paper test of the most trivial nature and waste their time and yours. And even if you change nothing else about your tests, at the very least give the candidate a text editor and a delete key and assess whether they can even type!

OnRef – Game Outcomes Project

Finding publicly available data on the factors that make or break development projects is hard. As such, the Game Outcomes Project should be congratulated for bringing a large amount of data on a wide range of topics to the fore.

The survey results can be found in Paul Tozour’s series of excellent blogs.

OnIgnorance

There is one simple fact that, while often hard to accept, is true of every single person who ever lived: we are all ignorant. For everything we think we know, there are almost infinite aspects of understanding that are and will always be beyond us. But that doesn’t necessarily have to be a bad thing. The key is to realise what form your ignorance takes and take action to try and steer it in the right direction.

// Blissful Ignorance

Blissful ignorance is an easy trap to fall into: everything seems to be going great, until it isn’t and you realise that your project is at risk of missing the deadline or even of total failure. Every effort should be made to avoid becoming complacent and failing to recognise faults and their potential mitigation within your organisation.

// Willful Ignorance

A far greater crime than blissful ignorance is to notice the warning signs, the cracks and creaks of a failing system, and to turn a blind eye to them, or declare that there is little that can be done. Something can always be done and if it isn’t, the trouble is just stored up for the future.

// Lucid Ignorance

If blissful and willful ignorance are the two negative aspects of ignorance, the positive counterpoint is lucid ignorance. While at first lucid ignorance (a term I first came across in Jim McCarthy’s 21 Rules of Thumb) seems an unusual concept, its symptoms are very commonly understood: that daunting feeling in the face of a great challenge, or the realisation that you are out of your depth.

The natural reaction to such trepidation is often to try and suppress it. However, in the first of his rules, Jim asserts the importance of embracing lucid ignorance and not accepting a group belief in pseudo-order or magical conversions of ignorance into knowledge. He urges teams to question and demand acknowledgement of exactly where the risks lie in the project – the lack of prior experience, lack of resources and lack of understanding – and to work together to get from that state of unknowing to a well understood vision, goals and structure.

‘It is essential not to profess to know, or seem to know, or accept that someone else knows, that which is unknown.’

– Jim McCarthy, 21 Rules of Thumb for Shipping Great Software on Time

As such it is important never to accept shaky justification or perceived wisdom as reason enough for project decisions. If no-one can justify why you’re doing something then that should be a major warning sign that it might be the wrong thing to do. If you are doing something purely because someone else does it then you may be missing the whole point, or just driving your product into a crowded marketplace, against established competitors, while missing the opportunity to find your own unique selling point.

// Mitigations for Ignorance

So what can be done to try and improve the amount of lucid ignorance within a team?

  • Don’t delay or defer the difficult aspects of your design. Attempt to establish, within well understood parameters, exactly what is and isn’t possible – both theoretically and from a practical standpoint.
  • Don’t assume that something you are told is true. It may be that a concept that appears easy is infeasible, while another that at first seems impossible only requires a little lateral thinking. This is particularly important if it involves the collaboration of multiple parties: check with all of them that they concur.
  • Establish good review procedures, welcome the input of even the most junior members of your team and frequently re-examine your group assumptions to ensure they still hold.
  • Don’t skimp on project maintenance or ignore warnings as ‘minor’. While working on the next big feature can seem more appealing than fixing existing ones, it is easy for major issues – the kind that put projects at risk of failure or damage relationships with clients – to be obscured or overlooked.

OnResponsiveness

// Response Windows

When talking about bugs and issues, we tend to look at the relation between opened and resolved issues to give us general trends in the overall quality of the project and the group effort being employed in the pursuit of shippable quality. The key metric is the overall number of issues, classified by urgency and severity from critical to minor. This gives a good overall impression of where the project currently stands.

However, in bug count trending there is no inspection of the particular issues, how long they may have been in existence or how badly they have affected the ability of the team to undertake their daily work. Triage lists are often created to target issues but the overall shape of the data may allow certain issues to lurk for long periods without being addressed.

What if the QA of games was approached from the perspective of security vulnerabilities in software and what different information would that give us about the state of a project?

‘The window of vulnerability is the time from when the security hole was introduced or manifested in deployed software, to when access was removed, a security fix was available/deployed, or the attacker was disabled.

Vulnerability management is the cyclical practice of identifying, classifying, remediating, and mitigating vulnerabilities. This practice generally refers to software vulnerabilities in computing systems.’

http://en.wikipedia.org/wiki/Vulnerability_%28computing%29

Here the key issue is not how many bugs exist but how long it takes for a bug to be addressed, giving an overall impression of the stability of the product and the responsiveness of the team in addressing issues. This may be a useful metric for the day-to-day usability of the product and the immediate quality of features.

Response windows may also form a useful guide as to the relative efficiency of the production pipeline. How long does it take the team to get a feature turned around? How long does a build or feature go without being in a workable state? How long do niggling issues persist without being dealt with? How good are individual teams at maintaining existing features? How quickly can the team get a build together?
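As a rough sketch of what measuring this might look like in practice – assuming each issue record carries a priority band and opened/resolved timestamps (the field names here are mine, not from any particular tracker):

    #include <chrono>
    #include <map>
    #include <string>
    #include <vector>

    struct Issue
    {
        std::string priority;  // e.g. "critical", "urgent", "minor"
        std::chrono::system_clock::time_point opened;
        std::chrono::system_clock::time_point resolved;
    };

    // Mean Response Time, in hours, per priority band: a first pass at the
    // "how long did it take us to address it?" question.
    std::map<std::string, double> mean_response_hours(const std::vector<Issue>& issues)
    {
        std::map<std::string, double> total_hours;
        std::map<std::string, int> counts;
        for (const Issue& issue : issues)
        {
            const double hours =
                std::chrono::duration<double, std::ratio<3600>>(
                    issue.resolved - issue.opened).count();
            total_hours[issue.priority] += hours;
            ++counts[issue.priority];
        }
        for (auto& [priority, sum] : total_hours)
            sum /= counts[priority];  // totals become means
        return total_hours;
    }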

If teams were to focus on the responsiveness with which they iterate content and address issues, how would that affect the perceived quality of the product, and the perception among peers and customers of the developer’s willingness to change for the better?

Q – What might the data look like?
  • Modelled as a distribution, the data might be expected to approximate a bell curve.
  • The Response Time is the total time from an issue being introduced to it being resolved.
  • The Response Window is the area underneath the curve, defining the time in which issues are addressed.
  • Urgent bugs would hopefully trend to the left hand side of the distribution, with a sharp peak and decay.
  • The lower the priority of the issues, the further to the right and the flatter the expected peak.

Example Response Window graph for bugs (x = response time, y = number of issues).
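The bucketing behind such a graph is simple enough. A sketch, assuming the per-issue response times have already been computed in hours as above:

    #include <cstddef>
    #include <vector>

    // Histogram for a Response Window graph: bucket the response times
    // (x axis, one bucket per day here) and count issues per bucket (y axis).
    std::vector<int> response_histogram(const std::vector<double>& response_hours,
                                        double bucket_hours = 24.0)
    {
        std::vector<int> buckets;
        for (double hours : response_hours)
        {
            const std::size_t b = static_cast<std::size_t>(hours / bucket_hours);
            if (b >= buckets.size())
                buckets.resize(b + 1, 0);
            ++buckets[b];
        }
        return buckets;
    }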

Q – What would good news look like?
  • Really urgent issues would be dealt with in a very short number of hours with very little tail to the distribution curve.
  • Higher priority issues would be dealt with before lower priority issues.
  • Not all bugs are urgent.
  • Not all bugs are minor.
Q – What would bad news look like?
  • Urgent issues being further to the right of the graph than lower priority issues.
Q – What further data might be extracted?
  • The time to triage, fix, test, & resolve issues
  • Per component, strike team, feature or build data
Q – What actions could be taken to improve the response?
  • Try to focus on more urgent issues to reduce the Response Window.
  • Try and make the response windows as short as possible.
  • Try to improve workflows or ramp up resourcing for the stages of the pipeline identified as the biggest contributors to the Response Time.

OnRef – The Mythical Man-Month

One of the first books I was ever recommended in my first job was Frederick P Brooks’ The Mythical Man-Month, in which the former development manager for IBM’s System/360 mainframe offers a series of essays on software development.

Though some of the technology and processes in these essays may seem odd to modern developers – dating between 1975 & 1995 as they do – many of their concepts are still insightful and refreshing. Indeed, as my understanding and experience of software development grows, it is a book I find myself returning to again and again.

Good cooking takes time. If you are made to wait, it is to serve you better, and to please you.

– Menu of Restaurant Antoine, New Orleans

More software projects have gone awry for lack of calendar time than for all other causes combined. Why is this cause of disaster so common?

First our techniques of estimating are poorly developed. More seriously, they reflect an unvoiced assumption which is quite untrue, i.e., that all will go well.

Second, our estimating techniques fallaciously confuse effort with progress, hiding the assumption that men and months are interchangeable.

Third, because we are uncertain of our estimates, software managers often lack the courteous stubbornness of Antoine’s chef.

Fourth, schedule progress is poorly monitored. Techniques proven and routine in other engineering disciplines are considered radical innovations in software engineering.

Fifth, when schedule slippage is recognized, the natural (and traditional) response is to add manpower. Like dousing a fire with gasoline, this makes matters worse, much worse. More fire requires more gasoline, and thus begins a regenerative cycle which ends in disaster.

– Frederick P Brooks, Jr. The Mythical Man-Month

Read more about Fred Brooks here.

OnRef – 21 Rules of Thumb

I cannot recommend this blog post, 21 Rules of Thumb – How Microsoft develops its Software by David Gristwood, highly enough. It republishes Jim McCarthy’s original article encompassing 21 simple concepts on shipping great software on time.

We’ll be revisiting some of these rules from time to time in relation to various subjects but, in the meantime, if you want more from Jim McCarthy then he has a couple of books too.

OnRyseFacts

So two interesting things just happened:

1) A games company decided to advertise exactly how much crunch one of their current projects required before release.

2) With the wall of silence truly blown, a stream of public condemnation then followed on Twitter under the #RyseFacts tag.

Why these are interesting I’ll reflect on in another post; in the meantime, follow the story on Twitter & Gamasutra.