Thursday, June 25, 2015

Why the Shenmue 3 Kickstarter campaign is a smart move

About a week ago, during Sony's press conference at E3, the Kickstarter campaign for Shenmue 3 was revealed. There has been a lot of criticism ever since, mostly claiming that this is the gaming industry's new kind of "pre-order bonus". Being a gamer myself, I am not really fond of all the pre-order bonuses, as well as the DLC strategies applied to many games. Let me tell you why I think the Shenmue 3 Kickstarter campaign was actually quite a smart move.

The risk of Shenmue 3

Despite high critical acclaim, Shenmue 1 and 2 were not too successful in terms of sales, according to various statistics (see here, here and here). The first installment is even considered a commercial failure and had a total cost of about 70 million US$. Remember that this was back in 1999; Final Fantasy XIII, which came out 10 years later, had a total cost of about 65 million US$. Furthermore, it is now already 14 years since Shenmue 2 came out for the Dreamcast and Xbox. In the meantime we've seen the rise of Call of Duty, Grand Theft Auto III, IV and V, Dota 2, League of Legends, World of Warcraft and Final Fantasy XI - XIV, meaning there are a lot of new games and game series competing for gamers' attention. These factors add up to a high risk for a possible third installment of the series. Former Sega producer Stephen Forst confirmed this in a few tweets from 2013, stating that the risk was too high and that brand awareness was probably rather low.

The Kickstarter campaign as MVP

So when they (meaning Sony and Suzuki Yu) started the Kickstarter campaign, they actually created a minimum viable product. Rather than running a simple community vote, they found actual customers who are willing to pay money in order to be able to play Shenmue 3. For Sony, who were in from the start (although they stated otherwise in the beginning), the successful funding of the 2 million US$ Kickstarter campaign within the first 2 days was reason enough to count the MVP as successful and to officially support the game as publisher, partner and stakeholder. Given the "official" definition of an MVP, they have done everything right:
"Once the MVP is established, a startup can work on tuning the engine. This will involve measurement and learning and must include actionable metrics that can demonstrate cause and effect."

The pros for the gamers

Even though there are voices saying that the campaign is just another kind of pre-order madness in the gaming industry, I think this is not the case. First, unlike when pre-ordering a game, I don't have to pay the full price. The lowest pledge that contains a digital copy of the game is $29:


Furthermore, not only will gamers get frequent updates on the game's progress, but even at the lowest pledge level supporters already have the possibility to influence the direction of the game:


Conclusion

I understand that gamers are fed up with all the pre-order madness and the DLC strategies of today's publishers. Nevertheless, I think the Kickstarter campaign is a good thing. For Sony it is definitive proof that, given the risks, there are enough gamers willing to pay. For us gamers it is a chance to get a game we want and even influence parts of it.

Friday, May 29, 2015

How to measure productivity

A few days ago I was having a discussion on how to measure productivity (I will not elaborate on whether you should measure it at all, that is for another blogpost). We came up with a few metrics that might be useful indicators.

Hit rate

That is the actual story points achieved in the sprint divided by the committed/forecasted story points.
Example: You committed 52 story points, but you only achieved 37. Your hit rate is 71%.
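
If you want to compute this, a minimal sketch could look like this (in Python, using the numbers from the example above):

    def hit_rate(achieved_points: int, committed_points: int) -> float:
        """Share of committed story points actually achieved in the sprint."""
        if committed_points <= 0:
            raise ValueError("committed_points must be positive")
        return achieved_points / committed_points

    print(f"{hit_rate(37, 52):.0%}")  # -> 71%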

Bugs per sprint + their average lead time

Track how many bugs are opened for your team/product per sprint. The idea is: the fewer bugs arise, the higher the quality of your software and the less you are sidetracked by bugfixing. Naturally this indicator works best if you fix bugs instantly and don't collect them in a bugtracker.
Also: track the average lead time it takes to fix bugs. The less time it consumes, the less you are sidetracked. Try adding a policy for this (for example: "We try to fix every bug within 24 hours").
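
Here is a rough sketch of how both numbers could be calculated, assuming you can export one (opened, closed) timestamp pair per bug from wherever you track them (the data below is made up):

    from datetime import datetime

    # Hypothetical sprint data: one (opened, closed) timestamp pair per bug.
    bugs = [
        (datetime(2015, 5, 18, 9, 0), datetime(2015, 5, 18, 16, 30)),
        (datetime(2015, 5, 20, 11, 0), datetime(2015, 5, 21, 10, 0)),
        (datetime(2015, 5, 26, 14, 0), datetime(2015, 5, 27, 9, 0)),
    ]

    bugs_per_sprint = len(bugs)
    avg_lead_time_h = sum(
        (closed - opened).total_seconds() / 3600 for opened, closed in bugs
    ) / bugs_per_sprint

    # Check the example "every bug fixed within 24 hours" policy.
    violations = [(o, c) for o, c in bugs if (c - o).total_seconds() > 24 * 3600]

    print(f"{bugs_per_sprint} bugs, average lead time {avg_lead_time_h:.1f}h, "
          f"{len(violations)} policy violation(s)")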

Impediments per sprint + their average lead time

Track how many impediments arise per sprint and their average lead time. The fewer things impede you, the more productive you should be. Also: the faster you can remove these impediments, the higher your productivity should be. (The calculation works the same way as for bugs above, just with impediments instead of bug reports.)

Amount of overtime / crunch time

We were a bit unsure about this one. How much does it really say about productivity? In my opinion you should only do overtime in absolutely exceptional situations; if you need overtime (or crunch time), something is fundamentally flawed in the way you work. My theory is that overtime happens when planning goes wrong: either you (constantly) plan too much and/or there are far too few people for the work you want done. If you want to track this, make sure that people are able to report their overtime hours anonymously.

Reuse rate

One thing we were completely unsure about is the reuse rate (how much of your code is getting reused?). The idea was that the less you reinvent the wheel over and over again, the more productive you should be. But how do you track this? The only thing we came up with was to run copy&paste/duplicate-code detection. Is this a valid metric in this case? What if you have multiple projects? If you have any ideas for this one, please let me know in the comments.
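
For what it's worth, here is a very rough sketch of what such a duplicate-code detection could look like: it hashes sliding windows of stripped lines and reports how many windows occur more than once. Note that this measures duplication (the inverse of reuse), and the window size and the "src" directory are arbitrary assumptions:

    from collections import defaultdict
    from pathlib import Path

    WINDOW = 6  # lines per window; an arbitrary choice

    def duplication_rate(root, pattern="**/*.py"):
        """Fraction of WINDOW-line chunks of code that occur more than once."""
        counts = defaultdict(int)
        total = 0
        for path in Path(root).glob(pattern):
            lines = [l.strip() for l in path.read_text().splitlines() if l.strip()]
            for i in range(len(lines) - WINDOW + 1):
                counts[hash(tuple(lines[i:i + WINDOW]))] += 1
                total += 1
        if total == 0:
            return 0.0
        duplicated = sum(n for n in counts.values() if n > 1)
        return duplicated / total

    # "src" is a placeholder for wherever your code lives.
    print(f"{duplication_rate('src'):.1%} of line windows occur more than once")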

Don't: Velocity

Don't use your velocity as an indicator for productivity. First, it is very easy to manipulate, and chances are about 99% that it will be. Second, every team has its own velocity, meaning there is no comparable information about productivity to be found here.


So far we haven't tested these metrics as indicators for productivity. If we start to do so, I will gladly let you know about the outcomes. If you have any more ideas on how to measure productivity, please let me know in the comments below.

Thursday, July 18, 2013

Active learning cycle

Many teams seem to struggle with keeping track of their improvements from the retrospective. One really useful tool for that is the active learning cycle.

Take a sheet of flipchart paper and divide it into 4 areas: Keep, Try, Breaks and Accelerators. The most common form looks like this, but you can always use a different form if it suits you better:
Active Learning Cycle
At the end of the retrospective you put the actions/improvements you decided on into "Try". Those are things that you want to try out. Remember to put the active learning cycle in a place where everybody can see it afterwards; near the team board would be a good place.

No later than in the next retrospective, you use the active learning cycle to decide what you want to do with the actions that are on it.

  • Did you like it and do you want to continue doing it? Put it in "Keep" and keep on doing it.
  • Do you think it rather impeded you and you want to stop doing it? Put it in "Breaks". This could be things like "Standup at 2pm", "Digital team board", etc. And, more importantly: stop doing it ;-)
  • Was it something that helped you, but which is nothing you can really keep on doing all the time? Put it in "Accelerators". This could be things like a "2-day team offsite" (it was an accelerator for the team, but you can't do a 2-day offsite every week).
You don't have to wait though; the active learning cycle is supposed to be a "living" artifact, so you can always move post-its around when you feel it's time to do so. Of course you can also move things from "Keep" to "Breaks" or "Accelerators" if at some point they aren't helping you anymore. Since your active learning cycle will be very full at some point, you might have to remove post-its someday. When exactly you remove something is totally up to you, but from my experience it's best to only remove post-its once they've become second nature to the team.

Wednesday, July 3, 2013

Why is the 4 week sprint still the default in Scrum literature?

I’ve been wondering for a long time why the 4 week sprint still seems to be the default in Scrum literature. Even the State of Scrum Report states that a 38% majority uses 2 week sprints while 29% use 3-4 week sprints (page 25). Given that 3 and 4 week sprints have been merged in the statistics, the actual percentage of teams using 4 week sprints must be even lower than 29%. Yet in the same report, insight #2 states that "a Sprint is one iteration of a month or less that is of consistent length throughout a development effort", completely ignoring the report's own results (page 38). Also, why isn't the book "Software in 30 days", released in 2012, called "Software in 14 days"?

Part of Scrum, and agile in general, is to generate feedback as quickly and as often as possible; with 30 day sprints you put a whole lot of time between two feedback cycles. In addition, 4 weeks is so long that it's really hard to look back over them when sitting in a retrospective. Can Scrum literature please inspect & adapt and use the 2 week sprint as the new default?

Friday, June 21, 2013

How to improve your retrospective

Marc Löffler did a session about "How to improve your retrospective" last weekend at agile coach camp 2013. The result was a list of possibilities for improving your retrospective and occasionally varying the usual format as described in "Agile Retrospectives".

Since the results speak for themselves, I will just post the photos here:


Wednesday, June 19, 2013

Coaching Dojo

You probably already know Coding Dojos (I'm not going to go into detail here, so if you don't know Coding Dojos yet, you can find all the information at codingdojo.org). At agile coach camp 2013 (accde13) I heard for the first time that there is something similar for coaching, called a Coaching Dojo.

The goal of a coaching dojo is to improve your skills through practice and by being exposed to various coaching styles. In Martin's session at accde13 about Coaching Dojos we used the following setup:

Coaching Dojo setup

Split up into groups of 4-6 people. One person in the group will be your seeker. This should be someone who has a real-life problem/question they need solved or answered. Keep in mind, though, that the goal of the coaching dojo is NOT to find a solution for the seeker but to train your coaching skills (although it might happen that the seeker's problem gets solved). Next you need 2 people from the group who are the first to coach the seeker.

Do the coaching in timeboxes (we used 10 minutes). During this time the spectators watch and take notes (and, most importantly, do not take part in the coaching!). When time is up, give feedback to the coaches. Usually most of the feedback will come from the spectators, as they are the ones watching from the outside. Then rotate the coaches and continue coaching; the seeker stays the same. Usually you do 4 rounds of coaching.

I found it exhausting to talk about my problem several times (in our group I was the seeker), so you can consider switching the seeker after a while (and with them the topic). I would leave this decision to the group.


Tuesday, June 18, 2013

Sprint Burndown Chart: Yes or no?

I don't believe in sprint burndown charts. So far, in every team I've been Scrum Master for, not one developer really wanted to update the burndown himself, meaning I was either the burndown-monkey or I found myself asking the team to update the burndown regularly. And since I don't see the real advantages of burndown charts, I struggle to explain to the team why it's important to maintain one. I guess that's what you call a chicken/egg problem.

The burndown chart is supposed to show the current status of the team and indicate whether the team is likely to get everything done in the sprint. But in my opinion you don't need a burndown for that, because all the information can be read from the sprintboard. Nowadays sprints are mostly 2 weeks long (I haven't heard of anyone using longer sprints in a long time), so it's relatively easy to keep an overview of that time span. While I think that a burndown can be useful with 4 week sprints, in sprints of up to 2 weeks it basically just duplicates information that is already on the sprintboard.
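
To illustrate what a burndown actually encodes, here is a minimal sketch with a made-up 10-day sprint and invented daily "remaining story points" figures:

    # A 10-day sprint with made-up daily "remaining story points" figures.
    committed = 52
    sprint_days = 10
    remaining = [52, 48, 45, 45, 39, 33, 30, 24, 15, 6]

    for day, rem in enumerate(remaining, start=1):
        ideal = committed * (1 - day / sprint_days)
        status = "behind" if rem > ideal else "on track"
        print(f"day {day:2d}: {rem:2d} pts remaining (ideal {ideal:4.1f}) -> {status}")

Every number this prints can be read off a well-maintained sprintboard just as well, which is exactly my point.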

I've been trying to solve this dilemma for quite some time now but haven't found a real solution yet. First I tried to find the reasons for having a burndown chart. Having found none that satisfied me, I thought: well, maybe there's another good, intuitive way to visualize the current status, one that would integrate directly into the sprintboard. So far, I haven't found one.

At agile coach camp 2013 last weekend, I took the chance and conducted a session called "Burndown Chart 2.0", hoping to find this intuitive, sprintboard-integrated way.

Although we didn't find one, the session helped me a lot to remind myself what the burndown chart is about, whom it is for and whether it is useful or not. First of all, the burndown chart is a tool by the team for the team: no manager, no stakeholder, no one else. Second, its main purpose is transparency: transparency about where we are and about what has happened.

Deriving from the fact that it's a tool by the team for the team, the burndown chart should be used when the team wants to use it. If you feel that a burndown chart could help you in your current situation, then use it; otherwise there's probably no real reason to. Using a burndown chart is not essential.
Apart from that, it doesn't always have to be a chart. Depending on what you want to visualize, it can also be a traffic light (showing whether the team thinks the sprint will be successful) or any other visualization you can think of.