Learning by Shipping

products, development, management…


Combining guessing and planning in product development


At the extremes of product development methodology, characterized by waterfall and agile approaches, are different views about planning. Many today would say that planning a product “up front” is nothing more than locking yourself into a guess that will be wrong. At the other end, the conventional wisdom is that you should get something to market soon, as a best guess for testing, and then iterate and learn to further develop the product. Since no team really practices a method precisely, let’s look at how to get the best out of the planning aspects of your own product development process.

Challenges

Developing a product, whether new or an update, always faces the same challenge at the start. Using software as an example, folks want to get coding as soon as possible, since not coding is clearly wasting time, but the team (and management) want to know that the code will yield the right product (useful, cool, innovative, profitable, whatever). There are usually those on the team who claim instinct tells them what to go build. There are those who are certain they can work out an answer to the right product with enough up-front ideation.

To compound these broad challenges, different disciplines have different perspectives. Testers will likely want to spend more time up front building a baseline and foundation for the work, but a baseline relative to what? Ops needs answers to questions in order to scale and provide the required infrastructure, but those answers require a lot of information you probably don’t know yet. Designers usually want time to iterate in lower-fidelity media in order to home in on the design language and overall approach. Business folks want to know that their key problems (onboarding, churn, referrals, etc.) will be addressed.

All of these lead to the natural tension between a desire to get started and a desire to pause and make sure the start heads in the right direction.

Waterfall (using this as a positive description) methods argue that an effort should begin with work to identify the problem being solved, available technologies, and proposals for how to go forward. The basic notion is that you should spend the energy considering a wide range of alternatives before you just start down a path that might be fruitless. These days you will be hard-pressed to find proponents of waterfall approaches because of the downsides often associated with waterfall execution.

The downside of this approach, as critics claim, is that you waste a lot of time building up robust plans that are “going to be wrong (or outdated) anyway”. This criticism essentially states the reality that most plans are still guesses. This is particularly true in a fast-changing, multi-player marketplace where you can wake up one morning to a competitive product or a dramatically new approach to solving the problem you identified many moons ago. The ultimate downside of the waterfall approach is that it is viewed as stubborn—so stubborn that even in the face of awareness about a changing market, the team has no choice but to move forward. Proponents will offer tools to mitigate any of these risks and say none of the risks are intrinsic to the methods.

Agile development methods respond to these criticisms by creating an approach where the primary focus is to get enough product work done so you can learn from real customers and then iterate. At the extreme, efforts should actively strip away all non-essential elements of the product development process and focus on the essence of the idea. Agile methods focus on constant iteration, learning, and responding to usage of the product (or lack thereof). Teams focused on iteration work in a tight loop to quickly adapt to the changing landscape and competitive dynamics, in addition to learning.

The downsides of agile methods are actually trickier to pin down right now. These days being critical of agile methods labels one as somewhat of a Luddite in the world of product development. That said, there are understood challenges with agile methods. The pressure to release can often result in a quality bar lower than customers (or your own testers) would appreciate. A focus on learning might cause you to learn that your service does not scale, or scales in a cost-ineffective manner. A large project (people, code, partnerships) is challenging enough to keep coherent, and multiple efforts executing in different iterative loops pose a significant architectural and communication challenge. Proponents will offer tools to mitigate any of these risks and say none of the risks are intrinsic to the methods.

It is no surprise that proponents of any method can point to successes, while critics can point to failures. In reality, the causal relationship between a project’s success and the method used is weak at best. Even with examples of success, we need to keep in mind that product development projects are not traditional repeated processes, so the ability to draw scientific conclusions from them is limited. This should be apparent from the fact that successful products come from both methods, and failed products do as well.

It is not unusual for failures in waterfall-styled projects to be ascribed to the methods, but not successes. Failures in agile projects, by contrast, are usually ascribed to external factors, not the methods. In today’s climate, it is not unusual for agile to be viewed as a causal factor in a successful product.

That’s really why starting the project by declaring the methodology runs the risk of missing the big picture. It runs the risk of spending finite and scarce time on abstract or “meta” concepts and not the work at hand. The best thing is to avoid going down that path in the first place and focus on getting clarity on what work will get done and when that will happen (and why!).

The truth is that starting a product is always a guess. Whether you plan every detail or just start coding, beginning a new product is a leap of faith based on intuition. To those of us who build or have built new products, this leap is the most interesting, terrifying, and rewarding part of product development. It does take a special confidence to make that leap. Planning every last detail can give you a false sense of security. Coding out of the gate can give you a false sense of progress. Guessing is guessing, whether you have 1,000 pages of specs and a high-fidelity model or a whiteboard sketch and a functional prototype.

These challenges—choosing between methods and the known challenges of any method—are real. In product development we face them every time we start a new project. If you have a clear starting point, clear points in time to check on your progress, and a plan so you know what you’re working against, then you’ve defined a methodology that works for you.

Where to start?

In a previous post we talked about focusing on the work and not worrying about the label of the methodology. A concrete way to realize this is to take a step back and see how to combine the elements of agile and waterfall in the right amount for your project.

An approach that scales from new projects to next iterations, and from small projects to big ones, is to plan to iterate your plans. While this sounds like an obvious cop-out picking the best of both extremes, it is what reality tends to look like in practice. If you start out knowing you will commit time to up-front planning, but recognize you will take points in time during development to adjust and learn, then you can mitigate the challenges of both methods.

Of course agile proponents say there are always some plans. And waterfall proponents always say there is room to adjust. Let’s just say those labels and attributes don’t matter and try to arrive at an approach of where to start.

Every project can start with a plan. Legendary products start with a sketch on the back of a placemat at a diner or an all-night coding session. The original plans for some pretty big projects were conceived and documented in pretty short, but articulate, memos or detailed sketches/prototypes. The spark for a plan can come from anywhere and different people have different ways of translating that spark into something more than one person can internalize and visualize.

While a tool like PowerPoint can communicate the gist of the plans, the details will be too open to interpretation. So write down the plan in long form—writing is thinking.

The simple act of writing down a product plan in a couple of dozen pages opens you up to useful discussions with a broad set of people. If you’ve identified the problem being solved, competitive products/services, technology bets, and overall investments, you have the basis of how to talk with marketing, design, development, testing, and product management. Everyone can look at the plan from their perspective and offer insights and advice. You can even package this up as a dialog or exercise with potential customers.

Combining this overview with a functional low-fidelity prototype is a way to visualize the plan for a broader audience. It is usually a good idea for the prototype to support the written description and to lead with the written description. You’ll want to minimize the time you spend on “don’t worry this UX (user experience) isn’t final” or responding to feature suggestions without the context of where you are heading.

This is all a plan needs to be. It is a guess. You can’t prove it is right. You can’t prove it is a good business, great product, or the right thing to do. You can criticize it. You can add more to it. You can find problems. That will be true of any plan (or frankly any product). But you now have a foundation to move forward. To build something that is a “target” that is shared by a group—a vision.

With this plan comes the first chance to iterate. What’s amazing about just writing this down is how much you’ll find you’re iterating on your own thinking. You can think of this somewhat as agile planning. This shouldn’t be new though – anyone who has “told a story” of a product or an idea knows that the story improves quite a bit as you tell it more. This is basically the same thing. Any good product plan is a story—the problem you are solving is the challenge to overcome, the competitors are the antagonists, the technology bets and investments are the plot devices.

Depending on the size of the team/project, the next steps are about the specifics of what to do when. The amount of up-front work and the ordering of the tasks are really choices the team needs to make. Being economical about what you do is of course a key part of ordering. Agile methods often say to do the least possible work to express the unique value of the product. Waterfall methods are about landing all the details early. Different projects will simply have different ideas of what to do at this point, and your own intuition as to what makes a good investment of time, relative to quality and time to market, will dominate.

Iteration: optimize locally or globally?

Regardless of the specifics of your development schedule, you are going to iterate. You can choose to iterate after code is in the hands of customers (in a broad or limited way) or while self-hosting until a broader release. The key though is to iterate.

Iteration is as much of an art form as deciding how much of what investments to do up front. It is super easy to fall into a trap of iterating but not making forward progress. You can find yourself rewriting the same code, circling back to previously discarded alternatives, or just changing things but not making them better. As necessary as iteration is, simply iterating does not mean you and the project are moving forward.

The lack of iteration or iteration at only the most fine-grained levels is potentially a sign of a project that won’t learn as it goes. Consider a standard kitchen remodel, something that has been done millions of times. Architects draw up plans and pass them off to contractors. Then you run into existing conditions. You find you’re missing an electrical run or there’s a support wall where one wasn’t expected. It is time to iterate on the plans. You can hack through a solution with your contractor on site. Or you can take a step back and work with the architect to reconsider the design. Either can work depending on your constraints. When you consider the time value of choices, it becomes more interesting to think about taking a step back.

In the heat of the moment, with the need to get in market or respond to significant challenges with early customers, a redesign or revisiting decisions seems risky. Maybe the data is poor. Maybe you fear discovering the need for big changes. Perhaps you just want to keep moving forward. All too often when problems arise in a project, the need to iterate quickly trumps taking a step back. In today’s environment it is often viewed as a positive to iterate and try something different. But activity is not always progress. Change is not always improvement.

Of course the data you use to determine what to try and how to value the feedback is important. It is just as easy to get led astray by the wrong measures of success/failure. Regardless of the quality of data, you’re going to reach a point where you are faced with the need to change something. The question is whether the changes are the right ones.

The point of a change will be to optimize your product. You’re going to have to pick the dimension for which you want to optimize—is it immediate mitigation or longer-term success? It is easy to see some changes mitigating the immediate challenges. Longer term might feel like another guess. On the other hand, immediate changes have an obvious fragility relative to broader goals, and there’s clearly an appeal to being thoughtful about what to change.

Having a sense of a plan helps you to weigh the alternatives. Are you dealing with a bug or minor design nit or is there a fundamental flaw in the value proposition? There’s no mistaking the reality that you might hit a major reset, especially on a brand new product. There’s also a reality that you will have to revisit a pretty broad set of small design choices—that is steps in a flow or portions of a design, rather than the entire flow or design language.

Defining a time up front when the product is in a state to evaluate all the feedback and make choices about how to optimize is an important part of the process. This checkpoint stage can be a first self-host, private beta, partner beta, public beta, or anything in between. Any product today is going to have telemetry that shows how it is used and what you are measuring. This helps inform you about both what is going on and even what you failed to measure correctly.
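
To make the telemetry point concrete, here is a minimal sketch of the kind of usage instrumentation that feeds a checkpoint review. Everything here is hypothetical: the emit helper, the event names, and the in-memory sink stand in for whatever analytics pipeline a real product already has.

```python
import json
import time
from collections import Counter

# Hypothetical in-memory sink; a real product would send events to
# its analytics pipeline rather than append to a list.
EVENTS = []

def emit(event, **properties):
    """Record one usage event with a timestamp and free-form properties."""
    EVENTS.append({"event": event, "ts": time.time(), **properties})

# Instrumenting a hypothetical onboarding flow.
emit("signup_started", source="landing_page")
emit("signup_completed", plan="free")
emit("first_document_created")

# At the checkpoint, summarize what actually happened. Steps that never
# show up are a hint at what you failed to measure (or build) correctly.
counts = Counter(e["event"] for e in EVENTS)
print(json.dumps(counts, indent=2, sort_keys=True))
```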

Armed with a set of potential problems to address—optimizations yet to be done—there’s a simple question to ask of the team: “do we need to change/fix this or not, and if so, do we need to take a step back and re-evaluate?”

A way to look at what to change/fix or not is to think of changes relative to the longer term goal, to go beyond the immediate. There’s no doubt the feedback about something is real. The question is really whether the cost to change (hours, risk, churn) gets you closer to the broader goals of the plan or is more reflective of iterative activity.

A checkpoint discussion where members of the team are aware of all the changes going on and what is being prioritized is a way to level-set. Some teams might have bigger challenges or more changes and other teams might be making more local changes. Calibrating these across the team is akin to making sure the whole project is thinking and acting globally, not locally. The plan that was put in place serves as a reminder of what the team was hoping to accomplish. Accountability, aka decision making, can be clear because the roles and responsibilities are clear and communication has been clear. A discussion to inform doesn’t have to be an invitation for everyone else to join in revisiting the choices made by the team.

Is the new data driving you to revisit the plan completely (whether immensely detailed or not)? That could be. For a brand new business and/or a brand new product where the effort is to grow an entirely new customer base, you could be going in a wrong direction. For a product update, there’s always going to be pressure from existing customers to innovate in a more incremental manner versus taking the product in a new direction. The presence of a plan allows you to have an informed dialog as to what went wrong. When you make choices to change things you have a shared foundation upon which to agree about what went wrong.

Is the data driving you to tweak something? That certainly is the case with some changes. The presence of the plan allows you to decide how critical it is to make changes. Too often changes to the code are made simply because feedback exists, even if the change doesn’t really alter the overall outcome. When you choose to keep things a bit more stable, it doesn’t have to be viewed as blindly sticking to a plan; it can just be prudent cost-benefit engineering. The capability to change is not the same as the value of change. Something that might be 10% better might introduce a high risk of change management, or might just be 100% different. Ask whether something really is twice as good (or 10x better) after the change.

There’s a simple view of optimization that one can use in discussions about changes—whether at the feature level or the whole plan. The idea is to discuss whether the change is a global optimization or a local optimization. When resources are tight and time to market is critical, optimizing locally is wasteful. When the plan is generally right but has some holes you discover, making sure you optimize globally leads to an agile view of planning. The following sketches this conceptual view; believe it or not, this can often be a useful visual aid in discussions about whether a change is needed.

[Figure: a conceptual curve of peaks and valleys in product development. The current “you are here” position is a low point; the nearest peak is a local optimization, while a higher peak farther away is the global optimization.]

Whether you label the axes performance, suitability to task, conversion rate, success rate, or transactions per second is not really as important as taking a step back and asking whether the change gets you closer to where you need to be over time. It is far too easy to get caught optimizing your plan relative to the nearest peak, not the best peak.
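
For the more code-minded, here is a toy numeric version of that picture: a greedy hill-climber that only accepts nearby improvements gets stuck on the first peak it finds, while restarting from different points (the code analog of stepping back to the plan) has a chance of finding the higher one. The landscape and metric below are invented purely for illustration.

```python
import random

def metric(x):
    # Toy landscape: a local peak near x=2 and a higher global peak
    # near x=8 (stand-in for conversion rate, perf, or any success measure).
    return 1.5 * max(0.0, 1 - abs(x - 2)) + 3.0 * max(0.0, 1 - abs(x - 8))

def hill_climb(x, step=0.1, iters=200):
    """Greedy local iteration: accept only small nearby improvements."""
    for _ in range(iters):
        for candidate in (x - step, x + step):
            if metric(candidate) > metric(x):
                x = candidate
    return x

random.seed(0)
local = hill_climb(1.0)  # starts near the local peak and stays there
best = max((hill_climb(random.uniform(0, 10)) for _ in range(10)), key=metric)
print(f"local-only iteration: x={local:.1f}, metric={metric(local):.2f}")
print(f"with restarts:        x={best:.1f}, metric={metric(best):.2f}")
```

The restarts are the point: only by occasionally re-evaluating from somewhere other than your current position do you learn whether the peak you are climbing is the one worth climbing.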

Whenever you have more than a few folks working together, having a set of tools to help you make a consistent set of choices across the team is critical. The more there is a shared view of the goals and the way to make choices the easier this becomes—a plan is a way of encapsulating the broader goals and giving you a place to both measure relative success and to decide the target was wrong. The presence of a plan does not enforce rigidity any more than the use of agility guarantees you will iterate to success.

Product development is a lot of guesswork. Planning, checkpointing, and deliberate decision making are tools to help you make the most informed guesses you can make.

This Week’s Poll

This week kicks off a new feature of Learning by Shipping — Three Quick Questions. This is a snap poll to share aggregate (non-scientific) reactions to the topic of this post, which will be reported in the next post. Take the poll – Three Quick Questions. Cameron Turner, an expert in big data and measuring how products are used in the real world, is helping with these polls. No identifying information is collected or maintained.

–Steven

###

Written by Steven Sinofsky

March 11, 2013 at 1:00 pm


Focusing on the work, not the methodology


Want to get developers fired up? Kick off a debate about development methodologies – waterfall, agile, lean, extreme, spiral, unified, etc.  At any given time it seems one method is the right one to use and the other methods, regardless of previous experience, are wrong.  Some talk about having a toolbox of methods to draw on.  Others say everyone must adapt to a new state of the art at each generation.  Is there a practical way to build good software without first having this debate?

There’s a short answer. Do what feels right until it stops working for the team as a whole, do so without debating the issue to death, and then iterate on your process in your context. The clock is running all the time, so debating a meta-topic that lacks a right answer isn’t the best use of time. There’s no right methodology any more than there is a right coding convention, right programming language, or right user interface design. Context matters.

We’re all quick to talk about experimentation with code (or on customers) but really the best thing to do is make some quick choices about how to build the software you need and run with it.  Tune the methods when you have the project results in the context of your team and business to guide the changes.  Don’t spend a ton of time up front on the meta-debate about how to do the work you need to do, since all you’re doing is burning clock time.

Methods

Building a software product with more than a couple of people over anything longer than a few months requires some notion of how to coordinate, communicate, and decide.  When you’re working by yourself or on a small enough codebase you can just start typing and what happens is pretty much what you would expect to happen.

Add more people or more time (essentially the same thing) or both, and a project quickly changes character. The left hand does not know what the right hand is doing, or more importantly when. The front and back end are not lining up. The user experience starts to lack consistency and coherency. Fundamentals such as performance, scale, security, and privacy are implemented in an uneven fashion. These are just natural outcomes of projects scaling.

Going as far back as the earliest software projects, practitioners of the art created more formal methods to specify and implement software and to become more like engineers. Drawing from more mature engineering disciplines, these methods were greatly influenced by the physical nature of large-scale engineering projects like building planes, bridges, or the computers themselves.

As software evolved it became clear to many that the soft part of software engineering could potentially support a much looser approach to engineering. This coincided with a desire for more software sooner. Many are familiar with the thesis put forth by Marc Andreessen in Why Software is Eating the World (if not, it is worth a read). With such incredible demand for software, it is no surprise that many would like projects to get more done in less time and look to a methodological approach for doing so. The opportunity for reward for building great software as part of a great business is better than ever. The flexibility of software is a gift engineers in other disciplines would love to have, so there’s every reason to pivot software development around flexibility rather than rigidity.

Definitions

For this post let’s just narrow the focus to the ends of the spectrum we see in this dialog, though not necessarily in practice.  We’ll call the ends of the spectrum agile and waterfall.

In today’s context, waterfall methods are almost always frowned upon. Waterfall implies slow, bureaucratic, micro-managed, incremental, removed from customers and markets, and more. In essence, if you want to insult a product development effort, just label it with a waterfall approach.

Conversely, agile methods are almost always the positive way to start a project.  Agile implies fast, creative, energetic, disruptive, in-touch with customers and markets, and more.  In essence, if you want to praise a product development effort then just label it with an agile approach.

These are the stereotypical ends of a spectrum.  Anything structured and slow tilts towards waterfall and anything creative and fast tilts towards agile.  At each end there are those that espouse the specifics of how to implement a method.  These implementations include everything from the templates for documents, meeting agendas, roles and responsibilities, and often include specific software tools to support the workflow.

It is worth looking in the mirror for a moment and stereotyping the negative view of agile and the positive view of waterfall, just for completeness. A waterfall project is thoughtful, architectural, planful, and proceeds from planning to execution to completion like a ballet. On the other hand, agile projects can substitute chaos and activity for any form of progress—“we ship every week” even if it doesn’t move customers forward and might move them backwards.

Of course these extremes are not universally or necessarily true. Even defining methodologies can turn into a time sink, so it is better to focus on the reality of getting code written, tested, and deployed/shipped, and not on debugging the process for doing so…prematurely.

Reality

In practice, no team maintains the character of the ends of this spectrum for very long, if ever.  There is no such thing as a pure waterfall project any more than there is a pure agile project.  In reality, projects have characteristics of both of these endpoints.  The larger or longer a project is the more it becomes a hybrid of both.  Embracing this reality is far better than focusing finite work energy on trying to be a pure expression of a methodology.

In discussions with a bunch of different folks recently, I’ve been struck by the zeal with which people describe their projects and offer a contrast (often pointing to me to talk about waterfall projects!). One person’s A/B test is another person’s unfinished code in the hands of customers. One person’s architecture for the future is another’s slogging plan. The challenge with the dialog is how it positions something everyone needs to do as a negative relative to the approach being taken. Does anyone think you would release code without testing it with a sample of real people? Does anyone really think that doing something quickly means you always have a poor architecture (or conversely that taking a long time ensures a good architecture)?

We all know the reality is much more subtle. In fact, as with so many product development challenges, context is everything. In the case of development methodologies, the context has a number of important variables, for example:

  • Skills of team. Your team might be all seasoned and experienced people working in familiar territory.  You might be doing your n-th project together or this might be the n-th time building a similar project or update.  You know how hand-offs between team members work.  You know how to write code that doesn’t break the project.  Fundamentals such as scale, perf, quality are second nature.
  • Number of engineers / size of team.  The more people on a project the more deliberate you will likely need to be.  With a larger team you are going to spend time up front—measure twice, cut once.  The most expensive and time consuming way to build something in software is to keep rebuilding something—lots of motion, no progress. The size of the team is likely to influence the way the development process evolves.  On a larger team, more “tradition” or “convention” will likely be in place to begin with.  Questioning that is good, but also keeping in mind the context of the team is important.  It might feel like a lot of constraints are in place to a newcomer, especially relative to a smaller previous team.  The perspective of top, middle, bottom plays into this–it is likely tops and middles think there is never “enough” and bottoms think “too much”.  On a large team there is likely much less uniformity than any member of the team might think, and this diversity is an important part of large teams.
  • Certainty of project.  You might be working on a project that breaks new ground but in a way that is comfortable for everyone.  This type of product is new to the organization but not new to the industry (for example, a new online store for a company).  Projects like this have a higher degree of certainty about them than when building something new to the industry.
  • Size of code base.  Even with a small number of people, if you happen to have a large code base then you probably need to spend time understanding the implications of changes.  In an existing body of code, research has consistently shown about a 10% regression rate every time you make changes (a back-of-the-envelope sketch of how this compounds appears after this list).  And keep in mind that “existing body of code” doesn’t have to mean decades-old legacy code. It could just mean the brand-new code the five of you wrote over the past six months but still need to work with.
  • Familiarity of domain.  The team might have a very high degree of familiarity with a domain.  In Dreaming in Code the team was very agile but struggling until they brought in some expertise to assist with the database portion of the project.  This person didn’t require the up-front planning one might expect; because of the domain expertise they just dove right in building the required database.  But note, the project had a ton of challenges, so it isn’t clear this was the best plan, as discussed in the book.
  • Scale services.  Scaling out an existing service or bringing up a new service for the first time is a big deal.  Whether you’re hosting it yourself or using a service platform, the methods you use to get to that first real world use might be different than those you would use for an app delivered through a store.  The interactions between users, security and privacy, reliability, and so on all have much different profiles when centralized.  Since most new offerings are a combination of apps and services, it is likely the methods will have some differences as well.
  • Realities of business.  Ultimately there is a business goal to a project.  It is easy to say “time to market is everything,” and many projects often do.  But there is a balance between competitive dynamics (needing features) and quality goals (no one needs a buggy product) that really dictates what tolerance the marketplace will have for the varying levels of quality and features one delivers.  Even the most basic assumption of “continuous updating” needs to be reconciled with business goals (something as straightforward as how to charge for updates to a paid app can have significant business implications for a new company).
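
As promised in the size-of-code-base bullet, here is a back-of-the-envelope sketch of why a ~10% regression rate compounds: each round of changes creates regressions, the fixes for those regressions create more, and the follow-on work converges to roughly an extra 11% on top of the original batch. The batch size below is invented for illustration.

```python
# Back-of-the-envelope: if ~10% of changes cause a regression, each
# round of fixes creates more (smaller) rounds of follow-on fixes.
REGRESSION_RATE = 0.10   # the rate cited in the bullet above
changes = 200.0          # hypothetical changes in one iteration

total_followup = 0.0
round_number = 1
while changes >= 1:
    regressions = changes * REGRESSION_RATE
    total_followup += regressions
    print(f"round {round_number}: {changes:.0f} changes -> ~{regressions:.1f} regressions")
    changes = regressions
    round_number += 1

# Geometric series: 200 * (0.1 + 0.01 + ...) = 200 * 0.1/0.9, about 22
print(f"total follow-on fixes: ~{total_followup:.0f}")
```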

The balance with all of these attributes is that they can all be used to justify any methodology.  If the team is junior then maybe more planning is in order.  If the project feels breakthrough then more agility is in order.  Yet as is common in social science, one can easily confuse correlation with causality.  The iPad is well known to have been almost a generation in the making (having been started years before the iPhone and then shelved).  Facebook is well known to favor short cycles to get features into testing.  In neither case is it totally clear that one can claim the success of the overall effort is due to the methodology (causation v. correlation).  Rather, what makes more sense is to say that within the context of those efforts the methodology is what works for the organization.

We don’t often read about the methodology for projects that do well but not spectacularly.  We do read about projects that don’t do well, and often we draw a causal relationship between that lack of success and the development methodology.  To me that isn’t quite right—products succeed or fail for a host of reasons.  When a product doesn’t do what the marketplace deems successful, the sequence of steps used to build the product is probably pretty low on the list of causal factors—quality, features, positioning, pricing, and more seem more important.

Many assert that you simply get more done using agile methods (or said another way, waterfall methods get less done) in a period of time, and given the social science nature of our work it is challenging to turn this assertion into a testable hypothesis.  We don’t build the same product twice (say the same way that carpenters know how to frame a house on schedule).  We can’t really measure “amount of work” especially when there can be so much under the hood in software that might not be visible or material for another year or two.

The reality of any project that goes to customers is that a certain amount of work needs to get done.  The methodology dictates the ordering of the work, but doesn’t change the amount of work.  The code needs to be written and tested.  The product needs to work at scale and be reliable, secure, accessible, localizable, and more.  The user experience needs to meet the design goals for the product.  Over time the product needs to maintain compatibility relative to expectations—add-ins and customizations need to be maintained, for example.

All of this needs to happen.  All of this requires elements of planning, iteration, prototyping, even trial and error.  It also requires a lot of engineering effort in terms of architecture and design.  On any project of scale, some elements of the project are managed with the work detailed up front.  Some elements are best iterated on during the course of the project.  And some can wait until the end where rapid change is not a costly endeavor.

There’s no magic to be had, unfortunately.  You can always do less work, but it doesn’t always take less time once the project starts.  You can’t do more work in less time, even if you add more resources.  You can sacrifice quality, polish, details, and more, and maybe save some time.  You can plan for a long time, but it doesn’t change the amount of time to engineer or the value of real people using the product in real situations.  You can count on fixing things after the product is available to customers, but that doesn’t change the reality of never getting a second chance to make a first impression.  All of this is why product development is so fun—there simply aren’t magic answers.

There are ways to be deliberate in how you approach the challenge.

Checkpoints drive changes

Rather than advocate for a conversion of the whole team to a methodology or debate the best way to do the work, the best bet seems to always be much more organic.  On a small project, ask what would work for each of the team members and just do that.  On a large project, put some bounding principles in place and some supporting tools but manage by results not the process as best you can.

In all cases, the team should establish a rhythm of checkpoints that let everyone know where the project stands relative to goals.  Define a date along with criteria for the project at that date and then do an assessment.  This assessment is honest and transparent.  If things are working then great, just keep going.  If things are not working then that’s a good time to change.  The only failure point you can’t deal with on a project is a failure to be honest as a contributor and honest as a team.  If the dynamic on the team is such that it is better to gloss over or even hide the truth then no methodology will ever yield the results the team needs.

In projects I’ve worked on that have spanned from 3 months to 3 years, the only constants relative to methods are:

  1. Establish the goals of the project, the plan, in such a way that everyone knows the target—execute knowing where you want to end up and adapt when you’re not getting there or you want to adjust the target.
  2. Create a project schedule with milestones that have measurable criteria and honestly assess things relative to the criteria at each of those milestones (a sketch of what criteria-as-data might look like follows below).
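
As one concrete (and entirely hypothetical) way to make milestone criteria measurable, a team could write the exit criteria down as data and evaluate them at each checkpoint, so the assessment is against the criteria rather than a debate about process. The criteria, targets, and numbers below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    name: str
    target: float
    actual: float
    higher_is_better: bool = True

    def met(self) -> bool:
        if self.higher_is_better:
            return self.actual >= self.target
        return self.actual <= self.target

# Entirely hypothetical exit criteria for a "beta" milestone.
criteria = [
    Criterion("scenarios complete (%)", target=90, actual=86),
    Criterion("open severity-1 bugs", target=0, actual=3, higher_is_better=False),
    Criterion("p95 page load (ms)", target=800, actual=650, higher_is_better=False),
]

for c in criteria:
    status = "PASS" if c.met() else "MISS"
    print(f"{status}  {c.name}: {c.actual} vs target {c.target}")

# The honest assessment: keep going, or take the signal and adjust.
print("milestone met" if all(c.met() for c in criteria) else "time to adjust")
```

The code is beside the point; what matters is that the criteria were written down before the checkpoint, so the honest assessment the checkpoint requires has something concrete to be honest about.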

The bigger the project the more structure you put on what is going on when, but ultimately on even the biggest projects one will find an incredible diversity of “methods”.  At least two patterns emerge consistently: (a) that scale services and core “kernel” efforts require much more up front planning relative to other parts of a project and failure to take that into account is extraordinarily difficult to fix later and (b) when done right, user experience implementation benefits enormously from late stage iteration.

Approach

Given all that is going on in a project, what’s the right way to approach tuning a methodology?

Very few software projects are on schedule from start to finish, and even defining “on schedule” can be a challenge.  Is a 10% error rate acceptable and does that count as on time?  If the product is great but was 25% late, time will probably forget how late it was.  If the product was not great but on time, few will remember being on time.

So more important than being on time is being under control.  What that means is not so much hitting 4pm on June 17th, but knowing at any given time that you’re on a predictable path to project completion.  Regardless of methodology, projects do reach a completion date—there’s an announcement and people start using what you built.  It might be that you’re ready to start fixing things right away, but from a customer perspective that first new experience counts as “completing the project”.  Some say in the agile era products are never done or always changing.  I might suggest that whenever new codepaths are executed by customers, the project has experienced some form of completion milestone (remember, the engineers releasing the code are acting like they “finished” something).

The approach of defining a milestone and checkpoints against that milestone is the time to look at the methodology.  Every project, from the most extreme waterfall to the highest velocity agile project will need to adjust.  Methodologies should not constrain whether to adjust or not as all require some information upon which to base changes to the plan, changes to how people work, or changes to the team.

Checkpoints should be a lightweight self-assessment relative to the plan.  When the team talks about where they are, the real question is not how things are going relative to a methodology but how they are going relative to the goal of the project.  The focus on metrics is about the code, not about the path to the code.  Rather than a dialog about the process, the dialog is about how much code got done, how well it is working, and how it connects to the rest of the project.

That said, there are a few tell-tale signs that the process is not working and could be the source of challenges:

  • Unpredictable.  Some efforts become unpredictable.  A team says it will be done on Friday and misses the date, or says a feature is working when it isn’t.  Unpredictability is often a sign that the work is not fully understood—that the up-front planning was not adequate to begin the task.  What contributes to unpredictability is a two-steps-forward, one-step-back rhythm.  Almost always the answer to unpredictability is the need to slow down before speeding up.
  • Lots of foundation, not a lot of feature.  There’s an old adage that great programmers spend 90% of the time on 10% of the problem (I once interviewed a student who wrote a compiler in a semester project but spent 11 of 12 weeks on the lexical phase).  You can overplan the foundation of a project and fail to leave time for the whole point of the foundation.  The checkpoint is a great time to take a break and make sure that there is breadth not just depth progress.  The luxury of time can often yield more foundation than the project needs.
  • Partnerships not coming together.  In a project where two different teams are converging on a single goal, the checkpoint is the right time to sanity check their convergence.  Since everyone is over-booked you want to make sure that the changes happening on both sides of a partnership are being properly thought through and communicated.  It is easy for teams that are heads down to optimize locally and leave another team in a tough spot.
  • Unable to make changes.  In any project changes need to be made.  Surprisingly both ends of the methodology spectrum can make changes difficult.  Teams moving at a high velocity have a lot of balls in the air so every new change gets tougher to juggle.  Teams that have done a lot of up front work have challenges making changes without going through that process.  In any case, if changes need to be made the rigidity of a methodology should not be the obstacle.
  • Challenging user experience.  User interface is what most people see and judge a product by.  It is sometimes very difficult to separate a UI that is just not well done from a UI that does not fit well together.
  • Throwing out code.  If you find you’re throwing out a lot of code, you probably want to step back—it might be fine, or it might be a sign that some better alignment is needed.  We’re all aware that the neat part of software is the rapid pace at which you can start, start over, iterate, and so on.  At some point this “activity” fails to yield “progress”.  If you find all or parts of your project are throwing out more code, particularly in the same part/area of the project, then it is a good time to check the methodology.  Are the goals clear?  Is there enough knowledge of the outcome or constraints?
  • Missing the market. The biggest criticism of any “long” project schedule is the potential to miss the market.  You might be heads down executing when all of a sudden things change relative to the competition or a new product entry.  You can also be caught iterating rapidly in one direction and find competition in another.  The methodology used doesn’t prevent either case but a checkpoint offers you a chance to course correct.

It is common and maybe too easy to get into a debate about methodology at the start of a project or when challenges arise.  One approach to shy away from debates about characterizing how the work should be done is to just get some work done and iterate on the process based on observations about what is going wrong.  In other words, treat the development process much like the goal of learning how to improve the product in the market place through feedback.  A time to do this is at deliberate checkpoints in the evolution of the process.

So much of how a product evolves in development is based on the context—the people, the goals, the technologies.  Using that to your advantage, and knowing there is not a high degree of precision, can really help you stay sane in the debates over methods.

This post was challenging to write. It is such a common discussion that comes up, and people seem to be certain both of what needs to be done and of what doesn’t work.  What are some experiences folks have had in implementing a methodology?  What are the signs you’ve experienced that, regardless of the methodology, the project is not going in the right direction? Any ideas for how to turn checkpoints into checking up on the project, not revisiting all the choices? (OK, that last one is a favorite of mine and a topic for a future post.)

–Steven

###

Written by Steven Sinofsky

February 7, 2013 at 8:30 am
