Learning by Shipping

products, development, management…


Apps: Shrapnel v. Bloatware

Much is being said lately about the trend to unbundle capabilities on the web and in apps. Is this a new trend, a pendulum, or another stage in the evolution of providing software solutions for work and life? Are we going to learn what some would say are lessons from a past generation of software and avoid bloatware? Or perhaps we will relive some of the experiences from that era, and our phones and tablets will be littered with app shrapnel just as our PCs once were?

My own experience in product choices is marked by a near-constant tension over bundle v. unbundle, not just from a product perspective but also from a business perspective. Whether on development tools, Office, Windows, or internet services, I’ve experienced the unbundle <> bundle dynamic. I’ve bundled, unbundled, and had the “internal” debates about what to do when, and what went well or not. If you’re interested in an early debate about bundling Office, you can see the Harvard case study on the choice of “best of breed v. suite” in Finding the Suite Spot ($).

The Pendulum

This HBR article does a good job of bringing forth some of the history and describing the challenges of positioning unbundle/bundle as both a binary choice and a pendulum or Krebs-like cycle of resource conservation. Marc Andreessen does a great job in these two tweetstorms of detailing the bundle/unbundle cycle on the internet and the computer history we both grew up with (http://tweetstorm.io/user/pmarca/481554165454209027 and http://tweetstorm.io/user/pmarca/481739410895941632).

There’s one maxim in business that drives so much of the back-and-forth or pendulum behavior we tend to see, which is that most strategies have a complementary approach (vertical v. horizontal, direct v. indirect, integrate v. distinct, first v. third-party, product org v. discipline org, quantitative v. qualitative performance evaluation, hack v. plan, etc.). So in business, depending on your roots, your history, and most importantly the context you find yourself in, you are going down a path defined by one or more of these attributes.

Over time your competition tends to pick you apart the other way or ways. Equally likely, your ecosystem builds up around you, innovating in the parts where you are weaker, gaining strength, and showing off new approaches to product or market. Certainly, if you’re a new company entering an established market you will not simply copy the approach of the incumbent, which is why new products tend to sit at the other end of one of these spectrums.

Then, as you get in trouble, you look around and try to figure out what to do. There’s a good chance the organization will double down on the approach that has always worked—after all, as Christensen says, that is the natural energy force in an organization. That happens until a big moment of change (a major competitive success, a leadership change, etc.), and then you change approaches. More often than not, your choice is to do the thing you weren’t doing before. If you’re in the workforce long enough, you start to see things as a series of these evolutionary steps.

This is business; context is everything. There’s never a right answer in the absolute, only a right answer given the context.

The moments of change, of breaking the cycle or swinging back the other way, are the moments that unleash significant improvements in the work, the product, or the workplace.

History and Customers

As consumers we adopt new technologies without realizing or thinking about whether they are bundled or unbundled, and our choices and selections for one or other are highly dependent on the context at the time. There are times when bundling is essential to the distribution of technology, just as there are times when unbundling brings with it more choice, flexibility, and opportunity. Obviously the same holds for businesses buying products, only businesses have purchasing power that can make bundled things appear unbundled or vice versa.
It is worth considering a few tech examples:

  • Autos began with minimal electronics, followed by optional electronics, then increasingly elaborate integrated electronics, and many now think that smartphones will be the best device for in-car electronics/apps (for example, the BMW i series).
  • LinkedIn began as a network for professionals to list their credentials and connect to others professionally. Recently it has bundled more and more content-based functionality.
  • Mobile telephony used to have distinct local, long distance, text and then data plans, which have now been bundled into all-you-can-consume multi-device plans.
  • Word processing used to have optional spell-checking and mail merge, which were then bundled into single products, which were subsequently bundled into suites that now also bundle cloud services. Similarly, financial spreadsheets, data analysis, and charting were previously distinct efforts that are now bundled. Today we are seeing new tools with different feature sets and approaches, representing some unbundling and some bundling.
  • Operating systems were once highly hardware dependent, then abstracted from hardware but with optional graphical interfaces, followed by a period of bundling of OS+Graphics, followed by a bundling of OS, graphical interface, and hardware in a single package. Today with services we’re seeing different combinations of bundling and unbundling innovations.
  • Microprocessors have been on a fairly continuous bundling effort relative to peripherals, graphics, and even storage.
  • Modern smartphones are a wonder of bundling, first at the hardware level (SoC packaging) followed by hardware+software, then through all the devices that were previously distinct (GPS, still camera, video camera, pedometer, game controller, USB storage, and more).

There are countless examples depending on what level in the full consumer offering is being considered (i.e. product, price, place, promotion). Considering just these examples, one can easily see the positives and potential pitfalls of any of these.

Yet in looking at these examples and others, one can make a few observations about how customers and teams approach bundling choices for products and services:

  • People like distinct products when exploring new capabilities, and product teams like building single-purpose tools early in a product’s lifecycle, out of both focus and necessity/resources.
  • People like it when their favorite product adds features that previously required a separate product, especially when their favorite product is growing in usage. Product teams love to add more features to existing products when those features map to obvious needs.
  • People have some threshold for when an integrated product turns into an overwhelming product, but that “line in the sand” is impossible to define a priori and depends a great deal on how products are evolving around your product. Mobile phone plans today are great, but many are very unhappy with Cable TV bundles.
  • Competition can come from a bundle that you were previously not considering, or competition can come from unbundling the product you make.
  • Product managers often reach a point where they can no longer solve the problem of adding new features while seeing them get used and also getting credit for innovating.
  • Macro factors can radically alter your own views of what could/should be bundled. If your business does not have a software component and your competitors add one, attempting to bundle that functionality could be quite challenging (technically, organizationally). If the platform you target (autos, spectrum, screens) undergoes a major change in capability then so too does your view of bundling or unbundling.

These examples and observations make one thing perfectly clear: whether to bundle or unbundle features depends a great deal on context and customer scenarios, and so the choices require real product management thought. The path to bundle or unbundle is not linear, predictable, or reactionary, but a genuine opportunity for, and need for, solid product thinking.

Strategic questions

On the one hand, considering whether to bundle or unbundle innovations might just come down to “do what we can that is differentiated.” In practice, there are some key strategy questions that come up time and time again when talking to product folks.

  • Discoverability. The most critical strategic question to bundle or unbundle is whether the new work will be discoverable by intended customers. In a new product, the early waves of innovative features often make sense bundled. Over time, just responding to customers means you’ll be bundling in new capabilities (whether organic or competitive).
  • Usability. When faced with a new feature or business approach, the usability of this approach is a key factor in your choice. If you’re unable to develop a user experience that permits successful execution of the desired outcome, then it doesn’t really matter whether you’re bundled or unbundled.
  • Depth. When making the choice to bundle or unbundle you have to think through how much you plan on innovating in the spaces. If you’re setting yourself up for a long-term head to head on depth versus believing you are “checking a box” you have different choices. Incumbents often view the best path to fending off a disruptive unbundled feature as adding a checkbox to compete (to avoid the trauma of a major change in approach). Marketing often has an urgency that drives a need for market response and that can be represented as an unbundled “add-on that no one cares about” or “a checkbox that can be communicated” — that might sound cynical until you’ve been through a sales cycle losing out to a “feature as a product”.
  • Business economics. If you charge directly for your product or service (or via freemium), then there will be a strong incentive to bundle more and more into your existing offering. Sales will generally prefer to add more features at the current price. Marketing will potentially advocate for a new pricing level to increase revenue. If you choose to unbundle and develop a new product, side-by-side or companion, then you’ll need to consider what your attach rate might be. A bundled solution essentially sees a 100% attach rate to your existing product, whereas a whole new product brings with it the need to generate demand and subsequent purchase or usage (a tiny worked example follows this list). An advertising-based service will see increased surface area for an unbundled solution but will also dilute usage. A web-based service allows for cross-linking and easy connection between two different properties, but apps require separate downloads and offer minimal cross-app connections.
  • Usage economics. It might sound strange separating out business from usage, since especially in a SaaS world they are the same thing. In practice, if your revenue is tied directly to usage (page views, transactions, etc.) then your design needs to factor in how you measure and drive usage of the features, bundled or unbundled. If your economics are not tied directly to usage, you will have more strategic latitude to consider how your offering plays out bundled v. unbundled (assuming your boss lets you keep working on something no one uses).
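To make the attach-rate point concrete, here is a minimal sketch, with entirely hypothetical numbers chosen purely for illustration, comparing the reach of a capability shipped inside the existing product against the same capability shipped as a separate companion app:

```kotlin
// Hypothetical illustration of the attach-rate argument; none of these numbers
// come from a real product.
fun main() {
    val existingUsers = 10_000_000L      // users of the current product
    val companionAttachRate = 0.15       // assume only 15% ever install the companion app

    val bundledReach = existingUsers                               // a bundled feature ships to everyone
    val companionReach = (existingUsers * companionAttachRate).toLong()
    val ratio = bundledReach.toDouble() / companionReach

    println("Bundled feature reach: $bundledReach")    // 10000000
    println("Companion app reach:   $companionReach")  // 1500000
    println("Reach ratio:           ${"%.1f".format(ratio)}x")
}
```

The specific numbers do not matter; the point is that an unbundled companion has to earn its own audience, while a bundled feature inherits one.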

Product management approach

Should you add that new feature or capability to your existing product or should you create a new destination (app, site)? Should you break out a feature because unbundling is the new normal or will that just break everything? Those are the core questions any PM faces as a product grows.

One tip: do not claim that one approach (bundle v. unbundle) is good for users while the other is only good for business. In other words, bundle v. unbundle cannot be distilled down to pro-user or anti-user, or, more importantly, marketing v. product. The best product people know that context is everything and that positioning a choice as A against B is counterproductive—everyone is on the same team and has the same broad goals. As difficult as it is, working through these questions with as much dialog as possible and as much “walk in the other’s shoes” as you can manage is absolutely critical.

There are many natural forces at play that will drive one way or another.

For example, most organic product development will tend to expand the existing product as it builds on the infrastructure and momentum already present.

Most new acquisitions will tend towards acquiring unbundled solutions, aka competition, though in the enterprise space one can expect significant calls to integrate even disparate technologies.

Part of being a good PM is to step back and go through a thoughtful process about whether to bundle or unbundle new capabilities. The following are some design choices.

  • Advertising new features in proportion to expected usage. There’s a general tendency to advertise a new feature in the UX in an excessively prominent manner. You want people to know you fixed or added a feature. At the unbundle extreme this means a whole new app and a trend toward shrapnel. At the bundle extreme this means a big UX push to drive you to the new thing. The most critical choice is making sure that access to the feature is designed in relative proportion to how much you expect your customer base to use it.
  • Plan for “n+1” in all experience choices. As you make the choice to bundle or unbundle, know ahead of time that this will not be the first place you make this choice. If you’re adding a new app today then chances are that will become the way you solve things down the road. If you’re adding new UX access to a feature then plan on more depth in that feature or more peer features. Is the choice you are making scalable for the growth in creativity and innovation you expect?
  • Integrate or connect in one direction, not both. Whether you bundle or unbundle, there will be a relentless push to promote the connection between elements of the product or service: demo flows, top-level UX, even deep linking between apps. At the extreme, if you bundle n items it is not unrealistic to end up on a path where each item is connected to every one of the other n−1, and vice versa (with 10 modules that is 90 directed connections). This is incredibly common in line-of-business apps/modules.
  • Bundle and innovate, don’t bundle and deprecate. If you make a choice to bundle a capability into your mainline effort, do not bundle it to make it go away. Bundle it and think of it as just as important as other things you do. This dynamic appears when your competition does something you don’t like so you hope to have a checkbox and make the competitor go away. This never happens.
  • Designing for good enough leaves you open to disruption. Closely related to deprecating while bundling is the idea that a “tie is a win.” Once you’re established, you often think that you can continue to win against a competitor with an integrated implementation that is “good enough.” That might work in short-term marketing, but over time, if the area is important, you’ll lose.
  • Expect hardware to be relentlessly bundled. If you connect to hardware in any way, then you’ll be faced with a relentless march towards bundling. Hardware naturally bundles because of the economics of manufacturing, the surplus of transistors, and the need to reduce power and surface area. Never bet on hardware or peripherals staying unbundled for long.
  • Expanding software depth is easy, but breadth often adds more value. Engineers and product managers love to round out features, add more depth, more customization, and more incremental improvements. This is where the customer feedback loop is really clear. In terms of growing the business and attracting new customers, expansion in breadth is almost always a better approach, so long as you “bundle” features that seem natural. Over-indexing on depth, particularly early in a product life-cycle, leaves you open to a competitor that does what you do plus other valuable things, no matter how much you think your unbundling approach is cleaner and simpler.
  • Defined categories do not remain defined for long. In enterprise products the “category” or “magic quadrant” is everything. In practice, these very definitions are always in transition. Be on the lookout for being redefined by an act of bundling or unbundling.
  • Assume sales and marketing will prefer new capability to be bundled, or maybe not. Finally, to highlight how contextual this is, there is no default as to how outbound efforts will prefer you approach the problem. It is not necessarily the opposite of what you are doing or the same as a competitor. For example, if your sales force economics are strongly tied to a single product and sales motion, it will be clear that bundling will be preferred no matter what a competitor is up to. At an extreme, even an unbundled feature will be used as a closer or a discount, particularly in the enterprise. Conversely, even if your competition is highly bundled, your own outbound efforts might be structured such that unbundling is a competitive and sales win. You just never know. Most importantly, the first reaction isn’t the way to base your approach—spend the time to engage and debate.

To bundle or unbundle is a complex question that goes beyond the simplistic view that minimal design makes for good products. Take the time to engage broadly across the team and organization, and to project forward where you want to be, as these are some of the most critical design choices you will make.

–Steven Sinofsky @stevesi

 

Written by Steven Sinofsky

June 28, 2014 at 3:00 pm



Everyone starts with simplicity, no one ends there and that’s OK

Designing a user experience for many millions of people is a unique job that a relatively small number of people practice. The responsibility of such an undertaking is immense, stressful, and one that can be all-consuming. Cold sweats, feeling sick to your stomach, and a constant sense of messing up are the norm for those who take on these challenges.

But someone has to do it!

This week brought quite a few big design changes that have folks talking, including Twitter adding mute, Gmail testing out a major revamp, and iOS 8 bringing “Surface split-screen to iPad”.

Everyone starts with simplicity, then what?

At introduction almost every successful product champions simplicity as a design and execution goal. Products are declared simple, minimal, and tailored to specific uses. Almost no one argues against these attributes and when marketing goes to position a tech product, invariably these attributes bubble up to the top of the favorable list. That’s because of the inherent and expected complexity of tech products as a starting point.


But where to go next? Tech products that are simple can start off well, but three needs arise immediately after launch:

  1. A customer need to address feedback and “fix” things that might be simple but are not quite there yet.
  2. A product need to remain competitive with the products that follow your introduction, touting the same simplicity but also doing a few more things (reading the reviews of your product will always turn up examples of wish lists).
  3. A business need to develop new products that can enhance revenue, margins, or maintain price points in the face of commoditization.

Tech products, particularly software products, are unique in that there is an almost natural tendency to organically add or to absorb features from competitive or adjacent products. Unlike physical hardware products that have COGS and BOM challenges, the incremental cost of software is simply limited to R&D (and operational costs for SaaS). That means when faced with the above existential properties, tech products will get new features pretty rapidly.

These new features will do constant battle with the simplicity of the initial release. Some argue that this is just bloat and invariably ruins products. Certainly from a design perspective this is a massive challenge. It takes enormous discipline. On the other hand, there are very few examples of software-based products that remain static. To remain static in features is to open yourself up to commoditization or disruption—a static target is an easy target.

Example: Palm Pilot

The introduction of the Palm Pilot (http://en.wikipedia.org/wiki/PalmPilot) is a fascinating historical example of simplicity leading to the isolation and expiration of a product. The designers did an amazing job building a remarkable product: all-day battery life, simplicity, specific and purpose-built as the first truly modern and truly mobile productivity tool used by the masses.

I remember in 1998 the Palm Pilot was standard issue for all new MBA students when I taught. Shortly after that time, I recall a panel discussion with one of the original designers of the product. At the time the pitch was overarching simplicity and ease of use. Everyone agreed. Then there was an audience question that changed the dynamic.

Most leading edge folks at this time were carrying Motorola flip phones along with the calendar/notes/contacts in their Palm Pilot. The problem was every time they wanted to make a call, it was a multi-step process that involved looking up a number on the Palm Pilot and juggling the two devices while typing into the phone. While this was vastly easier than going back to your desktop or attempting to pull an 8lb laptop out of your bag, it was a usability disaster.

The question was simple—when could I have all the Palm Pilot functionality on my phone? The answer was lots of words about how you could sync (with a cable, not the cloud, which didn’t yet exist), but also a hardcore assertion that adding a fifth function to the Palm—a phone—would overload the functionality and make the product too complex and unusable. So the phone would never converge with things like your contacts and calendar.

Honest, that was the answer.

The problem was I was sitting there with my pre-production BlackBerry, merrily connecting in real time to my calendar, contacts, and email on my Exchange server. It was incredibly clear that a non-converged device with a static copy of some of the important mobile data was no longer useful.

A pattern for how things evolve in practice

This challenge in software product design happens time and time again. It is the very nature of disruption. The new product does some things brilliantly well and simply, but is “missing” features people value from an existing product or an adjacent product.

Designers face the choice of adding new capabilities and potentially challenging the beauty of the initial release, or facing competition and disruption from new competitors without that same strongly held belief. Marketing, channel, business, and pricing can defend against these for a while, but ultimately the ease and low cost of just adding features in software will win.

The tension between user interface design and the realities and capabilities of software leads to a fairly predictable pattern for how tech/software products evolve. We can think of this pattern as evolution in five stages:

  • Introduce
  • Optimize
  • Deliberate
  • Succumb
  • Mature or Renew

Introduce. First you introduce a new product. In your view it is a thing of beauty. Whether you spent 3 years or 3 months, you are convinced it has exactly the right features done exactly the right way, though you know there are a ton of things on your “to do” list. Even if you are practicing lean methodologies, in your heart you are pretty sure you got it right, even though there is a lot of learning to follow. Your design embodies simplicity in design and messaging. Once your product starts to get used and you have the luxury of people relying on your work, you begin to see the holes and maybe even the misfires in your experience design.

Optimize. You have a lot of work to do to reconcile your “to do” list with the feedback from actual people using your product. It turns out that what you thought the product was missing is pretty different from what everyone else thought the product was missing. You shrug this off and take the feedback seriously because you have real-world people using your product. Quite often the innovations introduced at this stage are formalizations of how people were already using your product. Add-ins, customizations, or just conventions that enhanced usage become the sorts of things you formalize in the product. You very quickly iterate and get to a much more robust, reliable, stable, and usable version of what you had originally envisioned. This becomes the foundation of your product.

Deliberate. Evolving your product at this stage is very fun. While you believe you have a product that embodies your vision, with usage you begin to see broader scenarios becoming part of your product. There might be third parties that do similar things to you, but with a slightly different or much-improved take on a specific mechanism you have in your product. Because you have become a leader with your product in “the way” things are done, when you decide to introduce an innovation it comes as a deliberate and thoughtful extension of your experience. Rarely do you see pushback from a broad base of customers when something new is introduced at this stage in your product’s evolution. In fact you are often seen as taking the product to a new level and providing a broader context in which your whole category or class of products should evolve. You are basking in the glow of innovating in the user experience of your space—you have come to define the category, and now you’re redefining the category to include new elements of user experience.

Succumb. The feedback your product is receiving is growing, both positive and negative. As your product is used more and more, the usage scenarios and skill levels of your customers change dramatically. Your product is used in ways you could never imagine, and customers are asking for your product to do things you would never have imagined they would ask for. If your product becomes essential for some scenario, then people will ask for your product to take on attributes and features of other familiar products (if you share photos, then you’re likely to be asked for photo editing, for example; if you communicate, then before long you will be asked for rules and filters; if you type, then you will be asked for more and more formatting, spelling, and entry features). If during the previous stage you really believed you had achieved a level of almost Bauhaus minimalism about your product, this is the stage when you feel a relentless pressure to add more. You’re hearing from customers, pundits, press, and more about the must-haves and must-dos. This is by far the most stressful time in product development—you can’t just step back and not change things, but you constantly feel like changes are all part of a slippery slope. You constantly find yourself struggling between the minimalist view of the product you have been perfecting and the need from different types of customers for seemingly contradictory types of features. It is why at this stage, as a designer, you feel like you are succumbing to feedback and introducing features that you know some people will value and others will see the other way or maybe not even notice—you feel like you’re bloating your simple product. These are the hardest decisions to make and are the price of success. If you try to hang on to simplicity, you will see competitors pass you by or you’ll see engagement stagnate.

Mature or Renew. The natural evolution of almost every product involves a fairly long period of incorporating features in the previous stage—you add some new things, incrementally change some existing things, and in general work to find a path through the maze of contradictory feedback and complex market needs. Over time your product will develop a different personality and a unique set of assets, but it is going to be far from that original version. While you might have hundreds of millions of customers, at some point the experience of your product is such that the market collectively demands an overhaul. The challenge of course is that the collective market is very different from any one individual or organization (for enterprise products). The latter two, unlike the aggregate view of the market, do not necessarily embrace change. This point in the evolution of your product is where you face disruption—reduced engagement, alternative tools and experiences, or just a lack of energy in your ecosystem are the telltale signs that your product is overdue for improvements, new features, and more. Software affords you the chance to reimagine your product and presents you with that opportunity at this time. Of course, with hundreds of millions of customers, a very large number in absolute terms will not want any change at all. That’s why this stage of evolution is a choice—you can incrementally mature your product design or you can choose to renew your product design. These two really are that rare “either-or” choice. As a product designer, you will be faced with a big set of decisions when you have to design what comes next for a mature product. Be careful what you wish for, as your design might be so successful that one day you face the prospect of redesigning it in the context of a significant customer base.


Products reaching a mature stage face a fork in the road—one where you can renew or watch your product slowly shrink in relevance. This might seem dramatic, but the velocity of change in the technology world combined with the ease of switching shows that one day what might seem like “the way things are done” will risk becoming “the way things used to be” much sooner than expected.

Disruption and technology transitions are part of the context of designing products and experiences.

From search home pages to photo editors, word processors, operating systems, music players, and more, these stages are all part of the evolution of a user experience. The beauty of a software interface is that, unlike in the physical world, you are given the chance to move things around, change, and improve the product for little to no manufacturing cost, but at each stage you have to work through the cost of change to customers.

No other product in history has had the ability to be used by so many yet be so flexible in how it is used.

Simplicity in design is what we all strive for and often how we begin a product lifecycle. With success, maintaining simplicity over time while also remaining competitive is where design and product management are really challenged.

The “soft” of software makes this challenge even more acute and the pressures to add or change a product even more difficult to resist.

–Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

May 13, 2014 at 11:00 am



Look at me! More thoughts on notifications

Hunter Walk shared some thoughts on notifications and the challenges he (and, certainly judging by Twitter, many people) see. Many of the companies I’ve met with see design challenges in how much and when to offer up notifications. There’s a long history of trying different approaches and modalities for notifications, and so it seems worth some additional perspective for those familiar with what we see today on modern mobile platforms.

Notifications are one of those features where everyone has an opinion, and rightfully so. The feature is so visible and for just about everyone seems so close to being helpful but yet always off by just a little. There’s a general UX principle that is worth considering, which is anytime you push some feature on your customer you really want it to be right (correct, useful, helpful) for him/her 100% of the time. If not, chances are your customer will recall the negatives of the feature far more than the positives. This applies to notifications, autocorrect, form completion, and more. If you find yourself putting a lot of design energy into how your customer can undo or dismiss your best guess at what was intended, then you’re probably being too aggressive.


In some ways, today’s Notification Centers are the extreme case of “we’re collectively going to be wrong so often that we’re just going to put stuff in one place” or “there’s just so much we app developers have to tell you that the platforms are squeezing it all into one place to avoid cluttering up the platform itself”.

It is as if it isn’t enough that we have to manage all of our apps, bookmarks, and preferences; we now have to manage all the ways apps tell us stuff we might want to know. Hunter raised the complaint (to a chorus of agreement) that after a couple of weeks you have to go in and turn off notifications on a new app. Of course this improves battery life and reduces chatter, and then, as some noted, causes you to start to ignore the app.

What’s an app developer to do?

What to do?

In the PC era a lot of effort went into honing the design of verbs and interaction. It took a decade to develop the right approaches to menus, toolbars, status bars, panes, and more. That’s because many apps were essentially a large set of verbs, and that was the big design challenge. The rough equivalent to this design challenge in mobile is the role of notifications. That’s because in a mobile world many apps exist to be, essentially, a stream of information.

Notifications suffer from a clear tension between the platform pm and the app pm.

Platform pm wants to contain apps to the app experience, extending the walled-garden such that apps don’t interfere with other apps. By definition a notification is a way for one app to interfere with other apps. Platform pm sees notifications as necessary far less frequently than app pm might. This leads to a set of APIs that offer a clear, albeit limited, view of what a notification is and what it can do. This seems reasonable and we all want the platform folks maintaining a global view of consistency in approach and control.

App pm sees the world through the lens of their app. The assumption is that someone downloading and using an app has made a choice to count on that app for the purpose it was designed, and so whether it is an airline app alerting you to a flight (or a potential discount on a future flight), a bank with a balance alert (or advertising a new bank feature), or a communication tool letting you know of some inbound message (or alerting you that a friend is now online), the app pm sees any of these as worthwhile reasons to interfere with your flow or context. On all platforms, apps often design their own types of notifications that get used while you are using the app (for in-app purchases or feature advertising) because the platform is not rich enough. All of this seems legitimate, certainly at the time of the initial designs.

Over time the initial designs from both parties tend to lead to an expansion in the ability to interrupt you. Each subsequent release of a platform almost always adds more capabilities to enhance and customize notifications in an effort to offer more while also trying to keep the noise in the system manageable. Each app adds more and more notifications in an effort to more deeply engage with customers and likely to encourage customers to use more surface area of the app.

Many who use iOS 7 have spent quite a bit of time mired in notification customization. Here is an overview of the iOS 7 features, both notifications and the notification center, worth a look if you’re on Android or not sure of the impressive depth that Apple designed for notifications. Android is probably not quite at the same level of consistency and control, though as you might expect there are several apps that can help you customize the notifications of other apps.

Design Challenge

At the extreme we end up with two core design challenges.

Notification spam. This one is easy. Too many apps just think too much of what is going on is important to you. As with so much of design, the burden falls to app product managers to just be more thoughtful. As with so many elements of any platform, when there is a view that making money depends on getting folks to use more of a product or spend more time in a product, the platform starts to look like “surface area to be exploited.” Like the old Start Menu and desktop in Windows, the more places an app can “infuse” itself and invade your space, the better. On Android we see this with the share menu as another bit of surface area to be gamed or exploited.

Notification action. The most common issue with notifications is that your flow is interrupted and then you seem to be pulled into a state of distraction until you deal with the inbound notice. We each have our own human-based algorithms for how to cope. We always jump on SMS. We (almost) always ignore a ringing phone. We wish we could find a way for some app or another to stop bugging us, so we uninstall it.

On iOS there is very little you can do with a notification other than dismiss it or jump directly to the app or place in the app that generated it. Modal or must-act notifications are generally discouraged. The resulting notification center then turns into a list to read that spans a bunch of apps, and for some it ends up being a list of stuff you’ve already seen pop up in context, or just a reminder to get to the app.

On Android, the design takes a different approach, which is to enable notifications that can take actions. This is where the elegance of notifications is really stretched, but in a way that many find appealing. For example, when you receive a new mail message the Gmail notification lets you archive it based on reading the initial content, and various SMS clients offer you the ability to reply.
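To make the Android approach concrete, here is a minimal sketch of an actionable, mail-style notification using the standard NotificationCompat APIs. The channel id, the extras, and the ArchiveReceiver are hypothetical placeholders (not anything from Gmail), and it assumes the notification channel already exists and notification permission has been granted:

```kotlin
import android.app.PendingIntent
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat

// Hypothetical receiver that would archive the message without opening the app.
class ArchiveReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        val id = intent.getLongExtra("message_id", -1L)
        // e.g., mark message `id` as archived in the app's local store
    }
}

// Post a mail-style notification that carries an inline "Archive" verb.
fun postMailNotification(context: Context, messageId: Long, sender: String, preview: String) {
    val archiveIntent = Intent(context, ArchiveReceiver::class.java).putExtra("message_id", messageId)
    val archivePending = PendingIntent.getBroadcast(
        context, messageId.toInt(), archiveIntent, PendingIntent.FLAG_IMMUTABLE
    )

    val notification = NotificationCompat.Builder(context, "mail_channel") // assumes this channel exists
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle(sender)
        .setContentText(preview)
        .addAction(0, "Archive", archivePending) // the inline "verb" discussed above
        .setAutoCancel(true)
        .build()

    NotificationManagerCompat.from(context).notify(messageId.toInt(), notification)
}
```

The slippery slope shows up immediately: once “Archive” exists, “Reply,” “Delete,” and “Flag” become obvious next requests, and the notification starts to re-implement the app.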

On Windows Phone, one has the additional option of pivoting notifications by person on your home screen, so you can glance and see that there is activity by person. This has a natural appeal when there is a small set of folks you care deeply about, but as a general-purpose mechanism it might not scale particularly well.

The core challenge with offering verbs in notifications is almost “classic” in that one can never offer the right set of verbs, because eventually the design turns into attempting to implement a substantial number of the app’s features in the notification. Mail is a great example of the challenge: delete, file, reply, flag, and so on all become possible verbs, and each usage pattern in aggregate leads to the whole mail experience. The more users you have, the larger the group of customers that don’t like the subset of verbs you picked.

Ultimately taking action based on a notification turns into a bit of a frustration in that the notification centers essentially offer a new way to launch all your apps. What was a nice feature turns into a level of indirection almost all the time.

Opportunity

Therein is the opportunity. In a world where many people are almost constantly glancing at their phones and wanting to know more about what is going on in their digital lives and a world where almost every app represents an endless stream of information along with in-app notifications, it seems that notifications need a different level of semantics.

For example, with just a few friends Facebook always has something new to see, so why notify you of the obvious? For many, Twitter is essentially a notification engine. Mail certainly is a constant stream, arguably of decreasing importance. In other words, it isn’t even clear what makes sense to notify you about when the natural behavior is to periodically launch apps to see what’s new within the app context and apps are generating new information all the time.

Similarly, if most everyone knows that when you are talking to another human you both have to turn your phones upside down to avoid being distracted (or sharing private information), then there’s a good chance we’ve collectively missed the mark on notifications. The iOS “do not disturb” feature is awesome, and yet it seems to undo all the work in both the notification center and in the apps.

My view is that a feature that requires us to customize it before it becomes useful or less annoying is defaulted the wrong way. Of course this is literally impossible with a product used by more than a few people, since any design at all will have both critics and shortcomings. However, it is possible to default to “out of the way” and then provide a mechanism for people to decide what they might want to be notified about once a usage pattern is established.

For example, I might assert that for an app like mail, sms, Facebook, or Twitter the simple iOS badge is enough. We are all in and out of these apps enough during the day that a specific notification is redundant with the in-app notifications already there.

Each app can almost certainly step back and either know a priori or offer a mechanism that puts people in control of their experience with notifications. It is almost certainly the case that if we’re bouncing in and out of apps all the time but really do want to know when an SMS comes from a loved one amongst the hundreds of messages many of us get each day, then that is the scenario to design the feature around.
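As a sketch of that design direction, assume a hypothetical messaging app that defaults to badge-only quiet and only breaks through for senders the person has explicitly promoted (all names and types here are illustrative, not a platform API):

```kotlin
// Hypothetical policy: quiet by default; only explicitly promoted senders notify,
// everything else just updates the unread badge.
data class IncomingMessage(val sender: String, val body: String)

class NotificationPolicy {
    private val importantSenders = mutableSetOf<String>()

    fun promote(sender: String) { importantSenders += sender }

    fun shouldNotify(message: IncomingMessage): Boolean = message.sender in importantSenders
}

fun main() {
    val policy = NotificationPolicy().apply { promote("Mom") }

    val incoming = listOf(
        IncomingMessage("Mom", "Call me when you land"),
        IncomingMessage("PromoBot", "50% off, today only!")
    )

    for (message in incoming) {
        if (policy.shouldNotify(message)) println("notify: ${message.sender}")
        else println("badge only: ${message.sender}")
    }
}
```

A real product would presumably promote senders from observed behavior rather than manual marking, but the default direction is the point: out of the way first, louder only once a pattern is established.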

It is easy to imagine using more context (loved the twitter suggestion to not notify while driving/moving fast). It is easy to imagine more machine learning applied to notifications. But I think we can start from a fresh perspective that the mechanisms provided are just being over-used to begin with when we look at modern usage patterns.

–Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

February 15, 2014 at 12:00 pm



The Four Stages of Disruption

Innovation and disruption are the hallmarks of the technology world, and hardly a moment passes when we are not thinking, doing, or talking about these topics. While I was speaking with some entrepreneurs recently on the topic, the question kept coming up: “If we’re so aware of disruption, then why do successful products (or companies) keep getting disrupted?”

Good question, and here’s how I think about answering it.

As far back as 1962, Everett Rogers began his groundbreaking work defining the process and diffusion of innovation. Rogers defined the spread of innovation in the stages of knowledge, persuasion, decision, implementation and confirmation.

Those powerful concepts, however, do not fully describe disruptive technologies and products, and the impact on the existing technology base or companies that built it. Disruption is a critical element of the evolution of technology — from the positive and negative aspects of disruption a typical pattern emerges, as new technologies come to market and subsequently take hold.

A central question to disruption is whether it is inevitable or preventable. History would tend toward inevitable, but an engineer’s optimism might describe the disruption that a new technology can bring more as a problem to be solved.

Four Stages of Disruption

For incumbents, the stages of innovation for a technology product that ultimately disrupts follow a fairly well-known pattern. While that doesn’t grant us the predictive power to know whether an innovation will ultimately disrupt, we can use a model to understand what design choices to prioritize, and when. In other words, the pattern is likely necessary, but not sufficient, to fend off disruption. Value exists in identifying the response and emotions surrounding each stage of the innovation pattern because, as with disruption itself, the actions/reactions of incumbents and innovators play important roles in how the parties progress through innovation. In some ways, the response and emotions of undergoing disruption are analogous to the classic stages of grieving.

Rather than the five stages of grief, we can describe four stages that comprise the innovation pattern for technology products: Disruption of incumbent; rapid and linear evolution; appealing convergence; and complete reimagination. Any product line or technology can be placed in this sequence at a given time.

The pattern of disruption can be thought of as follows, keeping in mind that at any given time for any given category, different products and companies are likely at different stages relative to some local “end point” of innovation.


Stage One: Disruption of Incumbent

The moment of disruption is where the conversation about disruption often begins, even though determining that moment is entirely a matter of hindsight. (For example, when did BlackBerry get disrupted by the iPhone, film by digital imaging, or bookstores by Amazon?) A new technology, product, or service is available, and it seems to some to be a limited, but different, replacement for some existing, widely used and satisfactory solution. Most everyone is familiar with this stage of innovation. In fact, it could be argued that most are so familiar with this aspect that collectively our industry cries “disruption” far more often than is actually the case.

From a product development perspective, choosing whether a technology is disruptive at a potential moment is key. If you are making a new product, then you’re “betting the business” on a new technology — and doing so will be counterintuitive to many around you. If you have already built a business around a successful existing product, then your “bet the business” choice is whether or not to redirect efforts to a new technology. While difficult to prove, most would generally assert that new technologies that are subsequently disruptive are bet on by new companies first. The very nature of disruption is such that existing enterprises see more downside risk in betting the company than they see upside return in a new technology. This is the innovator’s dilemma.

The incumbent’s reactions to potential technology disruptions are practically cliche. New technologies are inferior. New products do not do all the things existing products do, or are inefficient. New services fail to address existing needs as well as what is already in place. Disruption can seem more expensive because the technologies have not yet scaled, or can seem cheaper because they simply do less. Of course, the new products are usually viewed as minimalist or as toys, and often unrelated to the core business. Additionally, business-model disruption has similar analogues relative to margins, channels, partners, revenue approaches and more.

The primary incumbent reaction during this stage is to essentially ignore the product or technology — not every individual in an organization, but the organization as a whole often enters this state of denial. One of the historical realities of disruption is uncovering the “told you so” evidence, which is always there, because no matter what happens, someone always said it would. The larger the organization, the more individuals who probably sent mail or had potential early-stage work that could have avoided disruption, at least in their own views (see “Disruption and Woulda, Coulda, Shoulda” and the case of BlackBerry). One of the key roles of a company is to make choices, and choosing between changing to a riskier course and defending the current approach is the very choice that hamstrings an organization.

There are dozens of examples of disruptive technologies and products. And the reactions (or inactions) of incumbents are legendary. One example that illustrates this point would be the introduction of the “PC as a server.” This has all of the hallmarks of disruption. The first customers to begin to use PCs as servers — for application workloads such as file sharing, or early client/server development — ran into incredible challenges relative to the mini/mainframe computing model. While new PCs were far more flexible and less expensive, they lacked the reliability, horsepower and tooling to supplant existing models. Those in the mini/mainframe world could remain comfortable observing the lack of those traits, almost dismissing PC servers as not “real servers,” while they continued on their path further distancing themselves from the capabilities of PC servers, refining their products and businesses for a growing base of customers. PCs as servers were simply toys.

At the same time, PC servers began to evolve and demonstrate richer models for application development (rich client front-ends), lower cost and scalable databases, and better economics for new application development. With the rapidly increasing demand for computing solutions to business problems, this wave of PC servers fit the bill. Soon the number of new applications written in this new way began to dwarf development on “real servers,” and the once-important servers became legacy relative to PC-based servers for those making the bet or shift. PC servers would soon begin to transition from disruption to broad adoption, but first the value proposition needed to be completed.

Stage Two: Rapid Linear Evolution

Once an innovative product or technology begins rapid adoption, the focus becomes “filling out” the product. In this stage, the product creators are still disruptors, innovating along the trajectory they set for themselves, with a strong focus on early-adopter customers, themselves disruptors. The disruptors are following their vision. The incumbents continue along their existing and successful trajectory, unknowingly sealing their fate.

This stage is critically important to understand from a product-development perspective. As a disruptive force, new products bring to the table a new way of looking at things — a counterculture, a revolution, an insurgency. The heroic efforts to bring a product or service to market (and the associated business models) leave a lot of room to improve, often referred to as “low-hanging fruit.” The path from where one is today to the next six, 12, or 18 months is well understood. You draw from the cutting-room floor of ideas that got you to where you are. Moving forward might even mean fixing or redoing some of the earlier decisions made with less information, or out of urgency.

Generally, your business approach follows the initial plan as well, and has analogous elements of insurgency. Innovation proceeds rapidly at this point. Your focus is on the adopters of your product — your fellow disruptors (disrupting within their context). You are adding features critical to completing the scenario you set out to develop.

To the incumbent leaders, you look like you are digging in your heels for a losing battle. In their view, your vector points in the wrong direction, and you’re throwing good money after bad. This only further reinforces the view of disruptors that they are heading in the right direction. The previous generals are fighting the last war, and the disruptors have opened up a new front. And yet, the traction in the disruptor camp becomes undeniable. The incumbent begins to mount a response. That response is somewhere between dismissive and negative, and focuses on comparing the products by using the existing criteria established by the incumbent. The net effect of this effort is to validate the insurgency.

Stage Three: Appealing Convergence

As the market redefinition proceeds, the category of a new product starts to undergo a subtle redefinition. No longer is it enough to do new things well; the market begins to call for the replacement of the incumbent technology with the new technology. In this stage, the entire market begins to “wake up” to the capabilities of the new product.

As the disruptive product rapidly evolves, the initial vision becomes relatively complete (realizing that nothing is ever finished, but the scenarios overall start to fill in). The treadmill of rapidly evolving features begins to feel somewhat incremental, and relatively known to the team. The business starts to feel saturated. Overall, the early adopters are now a maturing group, and a sense of stability develops.

Looking broadly at the landscape, it is clear that the next battleground is to go after the incumbent customers who have not made the move. In other words, once you’ve conquered the greenfield you created, you check your rearview mirror and look to pick up the broad base of customers who did not see your product as market-ready or scenario-complete. To accomplish this, you look differently at your own product and see what is missing relative to the competition you just left in the dust. You begin to realize that all those things your competitor had that you don’t may not be such bad ideas after all. Maybe those folks you disrupted knew something, and had some insights that your market category could benefit from putting to work.

In looking at many disruptive technologies and disruptors, the pattern of looking back to move forward is typical. One can almost think of this as a natural maturing; you promise never to do some of the things your parents did, until one day you find yourself thinking, “Oh my, I’ve become my parents.” The reason that products are destined to converge along these lines is simply practical engineering. Even when technologies are disrupted, the older technologies evolved for a reason, and those reasons are often still valid. The disruptors have the advantage of looking at those problems and solving them in their newly defined context, which can often lead to improved solutions (easier to deploy, cheaper, etc.). At the same time, there is also a risk of second-system syndrome that must be carefully monitored. It is not uncommon for the renegade disruptors, fresh off the success they have been seeing, to come to believe in broader theories of unification or architecture and simply try to get too much done, or to lose the elegance of the newly defined solution.

Stage Four: Complete Reimagination

The last stage of technology disruption is when a category or technology is reimagined from the ground up. While one can consider this just another disruption, it is a unique stage in this taxonomy because of the responses from both the legacy incumbent and the disruptor.

Reimagining a technology or product is a return to first principles. It is about looking at the underlying assumptions and essentially rethinking all of them at once. What does it mean to capture an image, provide transportation, share computation, search the Web, and more? The reimagined technology often has little resemblance to the legacy, and often has the appearance of even making the previous disruptive technology appear to be legacy. The melding of old and new into a completely different solution often creates whole new categories of products and services, built upon a base of technology that appears completely different.

To those who have been through the first disruption, their knowledge or reference frame seems dated. There is also a feeling of being unable to keep up. The words are known, but the specifics seem like rocket science. Where there was comfort in the convergence of ideas, the newly reimagined world seems like a whole new generation, and so much more than a competitor.

In software, one way to think of this is generational. The disruptors studied the incumbents in university, and then went on to use that knowledge to build a better mousetrap. Those in university while the new mousetrap was being built benefited from learning from both a legacy and new perspective, thus seeing again how to disrupt. It is often this fortuitous timing that defines generations in technologies.

Reimagining is important because the breakthroughs so clearly subsume all that came before. What characterizes a reimagination most is that it renders the criteria used to evaluate the previous products irrelevant. Often there are orders of magnitude difference in cost, performance, reliability, service and features. Things are just wildly better. That’s why some have referred to this as the innovator’s curse. There’s no time to bask in the glory of the previous success, as there’s a disruptor following right up on your heels.

A recent example is cloud computing. Cloud computing is a reimagination of both the mini/mainframe and PC-server models. By some accounts, it is a hybrid of those two, taking the commodity hardware of the PC world and the thin client/data center view of the mainframe world. One would really have to squint in order to claim it is just that, however, as the fundamental innovation in cloud computing delivers entirely new scale, reliability and flexibility, at a cost that upends both of those models. Literally every assumption of the mainframe and client/server computing was revisited, intentionally or not, in building modern cloud systems.

For the previous incumbent, it is too late. There’s no way to sprinkle some reimagination on your product. The logical path, and the one most frequently taken, is to “mine the installed base,” and work hard to keep those customers happy and minimize the mass defections from your product. The question then becomes one of building an entirely new product that meets these new criteria, but from within the existing enterprise. The number of times this has been successfully accomplished is diminishingly small, but there will always be exceptions to the rule.

For the previous disruptor and new leader, there is a decision point that is almost unexpected. One might consider the drastic — simply learn from what you previously did, and essentially abandon your work and start over using what you learned. Or you could be more incremental, and get straight to the convergence stage with the latest technologies. It feels like the ground is moving beneath you. Can you converge rapidly, perhaps revisiting more assumptions, and show more flexibility to abandon some things while doing new things? Will your product architecture and technologies sustain this type of rethinking? Your customer base is relatively new, and was just feeling pretty good about winning, so the pressure to keep winning will be high. Will you do more than try to recast your work in this new light?

The relentless march of technology change comes faster than you think.

So What Can You Do?

Some sincerely believe that products, and thus companies, disrupt and then are doomed to be disrupted. Like a Starship captain when the shields are down, you simply tell all hands to brace themselves, and then see what’s left after the attack. Business and product development, however, are social sciences. There are no laws of nature, and nothing is certain to happen. There are patterns, which can be helpful signposts, or can blind you to potential actions. This is what makes the technology industry, and the changes technology bring to other industries, so exciting and interesting.

The following table summarizes the stages of disruption and the typical actions and reactions at each stage:

Disruption of Incumbent
  • Disruptor: Introduces new product with a distinct point of view, knowing it does not solve all the needs of the entire existing market, but advances the state of the art in technology and/or business.
  • Incumbent: New product or service is not relevant to existing customers or market, a.k.a. “deny.”

Rapid Linear Evolution
  • Disruptor: Proceeds to rapidly add features/capabilities, filling out the value proposition after initial traction with select early adopters.
  • Incumbent: Begins to compare full-featured product to new product and show deficiencies, a.k.a. “validate.”

Appealing Convergence
  • Disruptor: Sees opportunity to acquire broader customer base by appealing to slow movers. Sees limitations of own new product and learns from what was done in the past, reflected in a new way. Potential risk is being leapfrogged by even newer technologies and business models as focus turns to the “installed base” of the incumbent.
  • Incumbent: Considers cramming some element of disruptive features into the existing product line to sufficiently demonstrate attention to future trends while minimizing interruption of existing customers, a.k.a. “compete.” Potential risk is failing to see the true value or capabilities of disruptive products relative to the limitations of existing products.

Complete Reimagination
  • Disruptor: Approaches a decision point because new entrants to the market can benefit from all your product has demonstrated, without embracing the legacy customers as done previously. Embrace legacy market more, or keep pushing forward?
  • Incumbent: Arguably too late to respond, and begins to define the new product as part of a new market, and the existing product as part of a larger, existing market, a.k.a. “retreat.”

Considering these stages and reactions, there are really two key decision points to be tuned-in to:

When you’re the incumbent, your key decision is to choose carefully what you view as disruptive or not. It is to the benefit of every competitor to claim they are disrupting your products and business. Creating this sort of chaos is something that causes untold consternation in a large organization. Unfortunately, there are no magic answers for the incumbent.

The business team needs to develop a keen understanding of the dynamics of competitive offerings, and know when a new model can offer more to customers and partners in a different way. More importantly, it must avoid an excess attachment to today’s measures of success.

The technology and product team needs to maintain a clinical detachment from the existing body of work to evaluate if something new is better, while also avoiding the more common technology trap of being attracted to the next shiny object.

When you’re the disruptor, your key decision point is really when and if to embrace convergence. Once you make the choices — in terms of business model or product offering — to embrace the point of view of the incumbent, you stand to gain from the bridge to the existing base of customers.

Alternatively, you create the potential to lose big to the next disruptor who takes the best of what you offer and leapfrogs the convergence stage with a truly reimagined product. By bridging to the legacy, you also run the risk of focusing your business and product plans on the customers least likely to keep pushing you forward, or those least likely to be aggressive and growing organizations. You run the risk of looking backward more than forward.

For everyone, timing is everything. We often look at disruption in hindsight, and choose disruptive moments based on product availability (or lack thereof). In practice, products require time to conceive, iterate and execute, and different companies will work on these at different paces. Apple famously talked about the 10-year project that was the iPhone, with many gaps, and while the iPad appears a quick successor, it, too, was part of that odyssey. Sometimes a new product appears to be a response to a new entry, but in reality it was under development for perhaps the same amount of time as another entry.

There are many examples of this path to disruption in technology businesses. While many seem “classic” today, the players at the time more often than not exhibited the actions and reactions described here.

As a social science, business does not lend itself to provable operational rules. As appealing as disruption theory might be, the context and actions of many parties create unique circumstances each and every time. There is no guarantee that new technologies and products will disrupt incumbents, just as there is no certainty that existing companies must be disrupted. Instead, product leaders look to patterns, and model their choices in an effort to create a new path.

Stages of Disruption In Practice

  • Digital imaging. Mobile imaging reimagined a category that disrupted film (always available, low-quality versus film), while converging on the historic form factors and usage of film cameras. In parallel, there is a wave of reimagination of digital imaging taking place that fundamentally changes imaging using light field technology, setting the stage for a potential leapfrog scenario.

  • Retail purchasing. Web retailers disrupted physical retailers with selection, convenience, community, etc., ultimately converging on many elements of traditional retailers (physical retail presence, logistics, house brands).
  • Travel booking. Online travel booking disrupted travel agents, then converged on historic models of aggregation and package deals.
  • Portable music. From the Sony Walkman as a disruptor to the iPod and MP3 players, to mobile phones subsuming this functionality, and now to streaming playback, portable music has seen the disruptors get disrupted and incumbents seemingly stopped in their tracks over several generations. The change in scenarios enabled by changing technology infrastructure (increased storage, increased bandwidth, mobile bandwidth and more) has made this a very dynamic space.
  • Urban transport. Ride sharing, car sharing, and more disrupted traditional ownership of vehicles and taxi services, and are now in the process of converging on existing models (such as Uber adding UberX).
  • Productivity. Tools such as Quip, Box, Haiku Deck, Lucidchart, and more are being pulled by customers beyond early adopters to be compatible with existing tools and usage patterns. In practice, these tools are currently iterating very rapidly along their self-defined disruptive path. Some might suggest that previous disruptors in the space (OpenOffice, Zoho, perhaps even Google Docs) chose to converge with the existing market too soon, as a strategic misstep.
  • Movie viewing. Netflix and others, as part of cord-cutting, offer disruptive, low-priced, all-you-can-consume on-demand plans and produce their own content. Previous disruptors such as HBO are working to provide streaming and similar services, while constrained by existing business models and relationships.
  • Messaging/communications apps. SMS, which many consider disruptive to 2000-era IM, is being challenged by much richer interactions that disrupt the business models of carrier SMS and feature sets afforded by SMS.
  • Network infrastructure. Software-defined networking and cloud computing are reimagining the category of networking infrastructure, with incumbent players attempting to benefit from these shifts in the needs of customers. Incumbents at different levels are looking to adopt the model, while some providers see it as fundamentally questioning their assumptions.

– Steven Sinofsky (@stevesi). This story originally appeared on Recode.

Written by Steven Sinofsky

January 7, 2014 at 9:00 pm

On the exploitation of APIs

LinkedIn engineer Martin Kleppmann wrote a wonderful post detailing the magical and thoughtful engineering behind the new LinkedIn Intro iOS app.  I was literally verklempt reading the post–thinking about all those nights trying different things until he (and the team) ultimately achieved what he set out to do, what his management hoped he would do, and what folks at LinkedIn felt would be great for LinkedIn customers.

The internet has done what the internet does, which is to unleash indignation upon Martin and LinkedIn, and thus the cycle begins. The post was updated with caveats and disclaimers.  It is now riding atop Techmeme.  Privacy.  Security.  Etc.

Whether those concerns are legitimate or not (after all, this is a massive public company based on the trust of a network), the reality is this app points out a longstanding architectural challenge in API design.  Modern operating systems (iOS, Android, Windows RT, and more) have inherent advantages over the PC-era operating systems (OS X, Windows, Linux) when it comes to maintaining the integrity of the overall system as designed. Yet we’re not done innovating around this challenge.

History

I remember my very first exploit.  I figured out how to use a disk sector editor on CP/M and modified the operating system to remove the file delete command, ERA.  I managed to do this by just nulling out the “ERA” string in what appeared to me to be the command table.  I was so proud of myself that I (attempted to) show my father my success.

The folks that put the command table there were just solving a problem.  It was not an API to CP/M, or was it?  The sector editor was really a tool for recovering information from defective floppies, or was it?  My goal was to make a floppy with WordStar on it that I could give to my father to use but would be safe from him accidentally deleting a file.  My intention was good.  I used information and tools available to me in ways that the system architects clearly did not intend.  I stood on the top step of a ladder.  I used a screwdriver as a pry bar.  I used a wrench as a hammer.

The history of the PC architecture is filled with examples of APIs exposed for one purpose put to use for another purpose.  In fact, the power of the PC platform is a result of inventors bringing technology to market with one purpose in mind and then seeing it get used for other purposes.  Whether hardware or software, unintended uses of extensibility have come to define the flexibility, utility, and durability of the PC architecture.  There are so many examples: the first terminate and stay resident programs in MS-DOS, the Z80 softcard for the Apple ][, drawing low voltage power from USB to power a coffee warmer, all the way to that favorite shell extension in Windows or the OS X extension that adds that missing feature to Finder.

These are easily described and high-level uses of extensibility.  Your everyday computing experience is literally filled with uses of underlying extensibility that were not foreseen by the original designers. In fact, I would go as far as to say that if computers and software were only allowed to do things that the original designers intended, computing would be particularly boring.

Yet it would also be free of viruses, malware, DLL hell, system rot, and TV commercials promising to make your PC faster.

Take, for example, the role of extensibility in email, and in Outlook in particular.  The original design for Outlook had a wonderful API that enabled one to create an add-in that would automate routine tasks in Outlook.  You could, for example, have a program that would automatically send out a notification email to the appropriate contacts based on some action you took.  You could also receive useful email attachments that could streamline tasks just by opening them (for example, before we all had a PDF reader it was very common to receive an executable that when opened would self-extract a document along with a viewer).  These became a huge part of the value of the platform and an important part of the utility of the PC in the workplace at the time.

Then one day in 1999 we all (literally) received email from our friend Melissa.  This was a virus that spread by using these same APIs for an obviously terrible purpose.  What this code did was nothing different from what all those add-ins did, but it did it at Internet scale, to everyone, in an unsuspecting way.

Thus was born the age of “consent” on PCs.  When you think about all those messages you see today (“use your location”, “change your default”, “access your address book”) you see the direct descendants of Melissa. A follow-on virus, I LOVE YOU, professed broad love for all of us.  From that came the (perceived) draconian steps of simply disabling much of the extensibility/utility described above.

What else could be done?  A ladder is always going to have a top step–some people will step on it.  The vast majority will get work done and be fine.

From my perspective, it doesn’t matter how one perceives something on a spectrum from good to “bad”–the challenge is APIs get used for many different things and developers are always going to push the limits of what they do.  LinkedIn Intro is not a virus.  It is not a tool to invade your privacy.  It is simply a clever use (née hack) of existing extensibility in new ways.  There’s no defense against this.  The system was not poorly designed.  Even though there was no intent to do what Intro did when those services were designed, there is simply no way to prevent clever uses any more than you can prevent me from using my screwdriver as a pry bar.

Modern example

I wanted to offer a modern example that for me sums up the exploitation of APIs and also how challenging this problem is.

On Android an app can add one or more sharing targets.  In fact, the Android APIs were even improved to make it easier release after release, and now it is simply a declarative step of a couple of lines of XML and some code.
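To make that declarative step concrete, here is a minimal sketch of what such a share target can look like (this example is mine, not from the apps discussed here; the ACTION_SEND intent filter, categories, and extras are standard Android, while the ShareReceiverActivity name and its behavior are purely illustrative):

```java
// Hypothetical share target. The "couple of lines of XML" live in AndroidManifest.xml:
//
//   <activity android:name=".ShareReceiverActivity" android:label="Share via Example">
//     <intent-filter>
//       <action android:name="android.intent.action.SEND" />
//       <category android:name="android.intent.category.DEFAULT" />
//       <data android:mimeType="text/plain" />
//     </intent-filter>
//   </activity>

import android.app.Activity;
import android.content.Intent;
import android.os.Bundle;

public class ShareReceiverActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);

        // Once the app is installed, the OS adds this activity to every share
        // list automatically; no further action is required of anyone.
        Intent intent = getIntent();
        if (Intent.ACTION_SEND.equals(intent.getAction())
                && "text/plain".equals(intent.getType())) {
            String sharedText = intent.getStringExtra(Intent.EXTRA_TEXT);
            // ... hand sharedText off to the app's own sharing or printing logic ...
        }
        finish();
    }
}
```

Each additional target is just another activity with another intent filter declared in the manifest.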

As a result, many Play apps add several share targets.  I installed a printing app that added 4 different ways to share (Share link, share to Chrome, share email, share over Bluetooth).  All of these seemed perfectly legitimate and I’m sure the designers thought they were just making their product easier to use.  Obviously, I must want to use the functionality since I went to the Play store, downloaded it and everything.  I bet the folks that designed this are quite proud of how many taps they saved for these key scenarios.

After 20 apps, my share list is crazy.  Of course sharing to Twitter now requires a lot of scrolling because the list is alphabetical.  Lucky for me the Messages app bubbles up the most recent target to a shortcut in the action bar.  But that seems a bit like a kludge.

Then along comes Andmade Share.  It is another Play app that lets me customize the share list and remove things.  Phew.  Except now I am the manager of a sharing list and every time I install an app I have to go and “fix” my share target list.

Ironically, the Andmade app uses almost precisely the same extensibility to manage the sharing list as is used to pollute it.  So hypothetically restricting/disabling the ability of apps to add share targets also prevents this utility from working.
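As a rough sketch of how that works, a share-manager app can enumerate every registered share target through the same public API and then present its own filtered list (queryIntentActivities and ResolveInfo are standard Android; the helper class and its hiding policy are hypothetical):

```java
import android.content.Context;
import android.content.Intent;
import android.content.pm.PackageManager;
import android.content.pm.ResolveInfo;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public final class ShareListHelper {
    // Returns the share targets the user has not hidden, using the same
    // extensibility point that lets every app register itself in the first place.
    public static List<ResolveInfo> visibleShareTargets(Context context, Set<String> hiddenPackages) {
        Intent sendIntent = new Intent(Intent.ACTION_SEND);
        sendIntent.setType("text/plain");

        // Ask the OS for every activity registered to handle a plain-text share.
        PackageManager pm = context.getPackageManager();
        List<ResolveInfo> all = pm.queryIntentActivities(sendIntent, 0);

        List<ResolveInfo> visible = new ArrayList<ResolveInfo>();
        for (ResolveInfo target : all) {
            if (!hiddenPackages.contains(target.activityInfo.packageName)) {
                visible.add(target);
            }
        }
        // The manager app then builds its own chooser UI from this filtered list.
        return visible;
    }
}
```

The manager app needs no special privileges; it simply enumerates the same registrations every other app created.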

The system could also be much more rigorous about what can be added.  For example, apps could only add a single share target (Windows 8) or the OS could just not allow apps to add more (essentially iOS).  But 99% of uses are legitimate.  All are harmless.  So even in “modern” times with modern software, the API surface area can be exploited and lead to a degraded user experience even if that experience degrades in a relatively benign way.

Anyone who ever complained about startup programs or shell extensions is just seeing the results of developers using extensibility.  Whether it is used or abused is a matter of perspective.  Whether it degrades the overall system depends on many factors and also on perspective (since every benefit has a potential cost; if you benefit from a feature then you’re ok with the cost).

Reality

There will be calls to remove the app from the app store. Sure that can be done. Steps will be taken to close off extensibility mechanisms that got used in ways far off the intended usage patterns. There will be cost and unintended side effects of those actions. Realistically, what was done by LinkedIn (or a myriad of examples) was done with the best of intentions (and a lot of hard work).  Realistically, what was done was exploiting the extensibility of the system in a way never considered by the designers (or most users).

This leads to 5 realities of system design:

  1. Everything is an API.  Every bit of a system is an API.  From the layout of files, to the places settings are stored, to actual published APIs, everything in a system as it is released serves as an interface to people who want to extend, customize, or modify your work. Services don’t escape this just because their APIs sit in the cloud behind REST endpoints.  For example, reverse engineering packets or scraping HTML is no different — the HTML used by a site can come to be relied on essentially as an API (see the sketch after this list).  The Windows registry is just a place to store stuff–the fact that people went in and modified it outside the intended parameters is what caused problems, not the existence of a place to store stuff.  Cookies?  Just a mechanism.

  2. APIs can’t tell you the full intent.  APIs are simply tools.  The documentation and examples show you the mainstream or an intended use of an API.  But they don’t tell you all the intended uses or even the limits of using an API.  As a platform provider, falling back on documentation is fairly impossible considering both the history of software platforms (and most of the success of a platform coming from people using it in creative ways) and the reality that no one could read all the documentation that would have to explain all the uses of a single API when there are literally tens of thousands of extensibility points (plus all the undocumented ones, see #1).

  3. Once discovered, any clever use of an API will be replicated by many actors for good or not.  Once one developer finds a way to get something done by working through the clever mechanism of extensibility, if there’s value to it then others will follow. If one share target is good, then having 5 must be 5 times better.  The system through some means will ultimately need to find a way to control the very way extensibility or APIs are used.  Whether this is through policy or code is a matter of choice. We haven’t seen the last “Intro” at least until some action is taken for iOS.

  4. Platform providers carry the burden of maintaining APIs over time.  Since the vast majority of actors are doing valuable things, you maintain an API or extensibility point–that’s what constitutes a platform promise.  Some of your APIs are “undocumented” but end up being conventions or just happenstance.  When you produce a platform, try as hard as you want to define what is the official platform and what isn’t, but your implied promise is ultimately to maintain the integrity of everything overall.

  5. Using extensibility will produce good and bad results, but what is good and bad will depend highly on the context.  It might seem easy to judge something broadly on the internet as good or bad.  In reality, someone chose to download an app and opt in.  What should you really warn about, and how?  To me this seems remarkably difficult.  I am not sure we’re in a better place because every action on my modern device has a potential warning message or a choice from a very long list I need to manage.
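As referenced in the first reality above, here is a minimal sketch of HTML coming to be relied on as an API (the URL and regex classes are standard Java; the ScrapedApi class and its link-extraction policy are purely illustrative):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class ScrapedApi {
    // Treats a page's markup as if it were an API: fetch the HTML and pull out
    // every link. The moment another program depends on this, the site's HTML
    // has effectively become a contract its owner never intended to publish.
    public static List<String> extractLinks(String pageUrl) throws Exception {
        StringBuilder html = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new URL(pageUrl).openStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = reader.readLine()) != null) {
                html.append(line).append('\n');
            }
        }

        List<String> links = new ArrayList<String>();
        Matcher m = Pattern.compile("href=\"([^\"]+)\"").matcher(html);
        while (m.find()) {
            links.add(m.group(1));
        }
        return links;
    }
}
```

When the site’s markup changes, every scraper built this way breaks, which is exactly the maintenance burden described in the fourth reality.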

We’re not there yet collectively as an industry on balancing the extensibility of platforms and the desire for safety, security, performance, predictability, and more.  Modern platforms are a huge step in a better direction.

Let’s be careful collectively about how we move forward when faced with a pattern we’re all familiar with.

–Steven

28-10-13 Fixed a couple of typos.

Written by Steven Sinofsky

October 25, 2013 at 9:30 am

Posted in posts


8 steps for engineering leaders to keep the peace

When starting a new product there’s always so much more you want to do than can be done.  In early days this is where a ton of energy comes from in a new company—the feeling of whitespace and opportunity.  Pretty soon, though, the need for prioritized lists and the realities of resource/time constraints become all too real.  Naturally the founder(s) (or your manager in a larger organization) and others push for more.  And just as naturally, the engineering leader starts to feel the pressure and pushes back.  All at once there is a push to do more and a pull to prioritize.  What happens when “an unstoppable force meets an immovable object”, when the boss is pushing for more and the engineering leader is trying to prioritize?

I had a chance to talk to a couple of folks facing this challenge within early stage companies where a pattern emerges.  The engineering leader is trying hard to build out the platform, improve quality, and focus more on details of design.  The product-focused founder (or manager) is pushing to add features, change designs, and do that all sooner.  There’s pushback between folks.  The engineering leader was starting to worry if pushing back was good.  The founder was starting to wonder if too much was being asked for.  Some say this is a “natural” tension, but my feeling is tension is almost always counter-productive or at least unnecessary.

There’s no precise way to know the level of push or pushback as it isn’t something you can quantify.  But it is critically important to avoid a situation that can result in a clash down the road, a loss of faith in leadership, or a let down by engineering.

As with any challenge that boils down to people, communication is the tool that is readily available to anyone. But not every communication style will work.  Engineers and other analytical types fall into some common traps when trying to cope with the immense pressure of feeling accountable to get the right things done and meet shared goals:

  • Setting expectations by always repeating “some of this won’t get done”.  This doesn’t help because it doesn’t add anything to the dialog as it is essentially a truism of any plan.
  • Debating each idea aggressively.  This breaks down the collaborative nature of the relationship and can get in the way, even though analytical folks like to make sure important topics are debated.
  • Acting in a passive aggressive manner and just tabling some inbound requests. This is almost always a reaction to “overflow” like too much sand poured in a funnel—the challenge is just managing all the inbound requests.  This doesn’t usually work because most ideas keep coming back.

What you can do is get ahead of the situation and be honest.  A suggested approach is all about defining the characteristics of the role you each have and the potential points of “failure” in the relationship.

As the engineering leader, sit down with the founder (or your manager) and kick off a discussion that goes something like this as said from the perspective of the accountable engineering leader:

  1. We both want the best product we can build, as fast as we can.
  2. I share your enthusiasm for the creativity and contributions from you and everyone else.
  3. My role is to provide an engineering cadence that delivers as much as we can, as soon as we can, with the level of quality and polish we can all be proud of.
  4. We’ll work from a transparent plan and a process that decides what to get done.
  5. As part of doing that, I’m going to sometimes feel like I end up saying “no” pretty often.
  6. And even with that, you’re going to push to change or add more.  And almost always we’ll agree that absent constraints those are good pushes.  But I’m not working without constraints.
  7. But what I worry about is that one day when things are not going perfectly (with the builds or sales), you’ll start to worry that I’m an obstacle to getting more done sooner.
  8. So right then and there, I’d like to come back to this conversation and make sure to walk through where we are and what we’re doing to recalibrate.  I don’t want you to feel like I’m being too conservative or that our work to decide what to do in what order isn’t in sync with you.

That’s the basic idea.  To get ahead of what is almost certainly to be a conversation down the road and to set up a framework to talk about the challenge that all engineering efforts have—getting enough done, soon enough.

Why is this so critical?  Because if you’re not talking to each other, there’s a risk you’re talking about each other.

We all know that in a healthy organization bad news travels fast. Unfortunately, when the pressure is on or there’s a shared feeling of missing expectations, often the first thing to go is the very communication that can help.  When communication begins to break down there’s a risk trust will suffer.

When trust is reduced, an unhealthy cycle potentially starts.  The engineering leader starts to feel a bit like an obstacle and might start over-committing or just reduce the voice of pragmatic concerns.  The manager or founder might start to feel like the engineering leader is slowing progress and might start to work around him/her to influence the work list.

Regardless of how the efficacy of the relationship begins to weaken, there’s always room for adjustment and learning between the two of you.  It just needs to start from a common understanding and a baseline to talk and communicate.

This is such a common challenge that it is worth an ounce of prevention and an occasional booster conversation.

–Steven Sinofsky

Written by Steven Sinofsky

September 11, 2013 at 6:00 am

Posted in posts


Lessons from org structures


 

This post is about a discipline (or sometimes called function-based) org structure.  Like many management “principles”, org structures represent a pendulum that swings back and forth between ends of a spectrum.  In this case the ends are usually characterized as a discipline structure or a product / product line / business structure.  In practice things are more nuanced than these end-of-the-spectrum descriptors.

Some have talked about a discipline org structure as a more modern type of organization than the product line structure.  Given how it mimics historic military structures, as far as management goes, it is probably much older than the “product line” organization often attributed to Alfred Sloan. No matter how new or old, discipline organizations are just one way of compromising on a team structure when you have to pick a way to go—there’s no perfect answer; otherwise there would be only one org structure. Context matters.

In our book, One Strategy: Organization, Planning, and Decision Making, we (co-author Marco Iansiti and I) talk a great deal about the org structure used for the Windows team. The approach was somewhere in the middle of the swinging pendulum between discipline-based and product-based, which was consistent with my own history of the spectrum of choices.  Given the book’s emphasis on this type of structure, it is great to see so much support and enthusiasm for the approaches outlined in recent discussions about organizations.

Org structures might sound like a big company thing, but in spending time with new companies it is clear that the lessons of organization apply to the earliest stages. This post offers some lessons learned from a big organization.  Smaller or new organizations sow the seeds of org structure early on and so these lessons will apply equally to any organization with a complex product architecture, multiple-products, or collaboration required across disciplines.  A great example comes up in the challenges in cross-platform development facing many startups.  Do you organize by platform-specific efforts or do you try to keep the apps together and each team targets multiple platforms?  Early on with one app the choices are easy.  As more apps or different schedules arise, the challenges grow to mimic those in very large organizations.

The reality about org structures is that they rarely cause things to happen—for example, an org structure cannot cause (or prevent) agility.  The work processes or a focus on accountability can impact agility far more.  Org structures cannot cause (or prevent) products from working together as that is a function of a plethora of variables throughout a set of engineers.  Org structures are necessary and can be used to enhance or potentially drown out such attributes, but my experience has been that the causal arrow starts with the details of the work, not the structure of the org, which tends to be more of a correlation than a causation.

Seams always exist

Some have said that the beauty of a discipline organization is that it removes seams.  Ben Thompson offered some good diagrams of before and after comparing a product organization and a discipline organization.  These are entirely correct within the context of information presented. In practice, however, organizations of any size are more complex than just two dimensions of product or job function.  Each of these attributes is a place you want to find a single approach while making tradeoffs given that you can’t do everything in all possible ways when you’re trying to release one product:

  • Product.  It might seem easy to identify a product, but in practice what a product is might be a hardcore technology statement or it might simply be an offering created by the business for business reasons. In My Years with General Motors, Sloan goes into great detail about the creation of product lines and the rationale, which is quite different from the difference between, say, Search and Android at Google.  GM’s product lines were based on a single platform with incremental or even cosmetic differences between essentially identical vehicles (e.g. Trans Am, Firebird, Camaro).  You can define a product as “something people pay for” to yield one approach or you can define a product as “something we build” to yield another approach.
  • Geography.  Teams often have people in multiple locations.  This can just be downtown/suburbs, or across the globe.  Sometimes you organize all the people in a geography in one team and other times you place the multiple geographies within the existing structure.  Many studies have shown that the impact on collaboration of even floors of a building can be significant and so the org structure you pick can accentuate the challenges or potentially increase the management burden.
  • Sub-disciplines.  At one level you can view a discipline org as engineering, marketing, sales, support or perhaps design, manufacturing, operations or maybe R&D, manufacturing, finance, and so on — these are all high-level views of different disciplines.  Different industries have different high-level job functions.  But within each of those there are functions as well. Marketing is a great example with specialties in inbound marketing, outbound marketing, communications, advertising, research, and more.  If you have multiple products then you need to decide how to staff the next level of function—is that by product or sub-discipline.  The tradeoffs involved can significantly impact the goals one might have in efficiency or agility. So even getting to a shared view of what disciplines are being organized is the first step, and a crucial one since it might result in several layers of management starting at the top.
  • Partners or customers.  Delivering a product to a specific set of customers or working with a specific set of partners can often come to define many other attributes of the overall effort.  A product that is tuned to the enterprise might take one approach (to many variables) compared to a product tuned to consumers.  This can impact advertising, features, engineering processes, and more.  Some structures find these variables so important that they come to form a top-level org structure.   There is subtlety and nuance in choosing along these lines since often your best customers or partners have an expectation of senior level people dedicated to their needs.  This can even extend to important customer segments such as education, government, language markets, accessibility, and more.
  • Code / architecture. It is quite common to organize a software project’s resources by what amounts to the code architecture.  Engineers understand that and often skills and tools map easily to such a management structure.  One of the most common startup organizations you see is to organize by client app and service back end.  This places the “seam” inside the company to a great degree but also can make for tricky tradeoffs in what gets done and when.  The larger these respective teams become, the more challenging that seam becomes.  Cross-platform, in other words multiple clients of the service team, will confound these challenges to some degree and also create opportunities for seams between the different platform implementations of the apps (organize by multiple app teams each targeting a platform, or by functional areas of code targeting multiple platforms for example). Even the pace of code changes might be different between these two organizations.  Engineering connecting to other disciplines along the code/architecture lines might mean that structure permeates through to support, sales, marketing as well.
  • Schedule. By far the most complex variable within an organization is the schedule.  My view is that a schedule defines a team. The project schedule defines everything about how people work, collaborate, and ultimately decide things.  Two people on the same schedule share a world view.  Two people at different parts of a product cycle (start/finish, coding/launching, new project/update) will rarely have the ability to really decide, collaborate, or walk in each other’s shoes.  The more experienced you are the more you understand these different mindsets, but it still doesn’t solve the inherent challenges of being at different stages in a project.  This goes beyond engineering and really is about all the disciplines that need to work together. Marketing focused on a holiday season or sustaining a product while engineering is planning a new product is a great example of this even within a product that calls for a careful balance of accountability and operations.

These are just a few examples of seams that can arise.  Anyone who believes you can use org structures to remove seams just needs to keep making a list of all the ways a product is built, sold, supported, and more—there are seams everywhere.  Ultimately, each of these variables represents a dimension upon which you might choose to build an organization, but you can’t organize around all of them equally and simultaneously, even in the smallest organizations.

Picking an organization is really being clear up front about the various tradeoffs involved.  It might mean letting go of some “motions” or it might mean the result is to put in place processes and procedures that can help to avoid or mitigate downstream challenges created by a seam.

What’s the upside?

What’s the upside for a discipline organization?  There are three things we talked about quite a bit in the book that led to a conclusion that a (largely) discipline organization is optimal for scaling technology product development:

  • Engineering and product development are the high order bit for technology companies.  In tech, tech is what matters most.  Tech rules in a world where the product you built can become not just obsolete but wholly undesirable just a few years after you built it or a product can be disrupted by a competitor seemingly out of the blue.  You want to have the people building things focused on that and the organization needs to lead with technology.  Even in a mature company with global sales, complex pricing and segmentation, demanding installed base, and even with all the pressure to consider all those attributes “up front” you want to have product be top of mind all the time.
  • Fewer managers and deeper expertise can only be achieved by discipline.  In practice you want the best developers, designers, or product managers you can find.  It turns out that those people like to be surrounded by others like them.  You don’t often find a lot of world class developers who want to work for marketing (or vice versa) and in particular you definitely can’t hire a lot of folks out of college who can work for (or be successfully managed by) someone who has not walked in their shoes (or preferably is still walking in their shoes).  Everyone knows and respects the other perspectives and skills to deliver an entire product (so this is not about a hierarchy of roles), but when it comes to day-in-day-out surroundings, focusing on discipline expertise yields the best discipline efforts.  Our measure in the book is literally, how far up the org chart do you go before you get to someone who never did your job (literally), regardless of the job discipline.  Mathematically in any other structure, you will significantly increase the number of managers you have when you push down the responsibility for managing multiple disciplines—and by any study or any measure the more managers you have the worse off you are to some level of optimization.  This comes from needing people to bring together multiple disciplines at more places in a structure.  More general management also means just more management in general.
  • In practice, in a large global organization you cannot really organize by “business”.  In the General Motors examples you can really see this challenge.  While there were businesses or product lines that really evolved out of a shared “platform”, the reality is that the product line leaders did not get to create new platforms or even have control of many of the resources one might assume were part of a business.  There was always a lot of tension over the platform choices given the number of businesses that depended on the platform capabilities.  Even manufacturing was not completely isolated across product lines (for example there is only one UAW to negotiate with).  There was obviously a spectrum of just how far the business/product line went.  But once you have a global organization, overlaying geography means you usually have the geography dominate the org—it means the people in France work for a person in France, no matter what the discipline organization looks like.  Not only does this reduce the notion of a “product” but it by definition implies there will be managers making decisions across disciplines and products outside the role of the product leader. So the upside of a discipline organization is it removes the illusion of “owning a business” which is a fairly liberating construct as we talked about in the posts in the book when it comes to making product choices.  Even companies that have large teams of manufacturing, sales, marketing, human resources, or more will generally centralize these disciplines and with that comes a reduced view of “the business”.

Some lessons learned

Even with the positives of a discipline organization there are also limitations and “gotchas” that exist.  No system is perfect or universal which is why a combination of methods is something we talked about in the book and put into practice.  The following are some lessons learned and considerations to take into account with a discipline styled tech organization:

Ship dates matter.  The most critical element of collaborating across products/teams/groups/people is the schedule and the integrity of the schedule.  Two entities working together are (essentially infinitely) more effective if they share the same schedule, same schedule vocabulary, and same schedule rigor.  Imagine one group that “depends” on another group.  The first group is planning their new work—the sky’s the limit, the schedule is XYZ, and all is great.  The second group is trying to finish, bug counts are high, known work items exceed allocated time, and resources are tight.  The first group shows up and says “we have some ideas and if we could just work on this together we could have an amazing set of scenarios for customers”.  If you’ve ever been the second group you know how this feels—this is just another thing you can’t get done, your degrees of freedom are zero.  You have a choice of saying “of course” knowing you can’t get the work done or of saying “no way” and looking like a jerk.  You can try to help design something now, but that always takes the critical path resources.  Nothing in this dialog ends well for anyone.  Meanwhile the first group is seeing their dreams shattered for lack of collaboration—even though they were just at the idea stage.  Whether you ship every month, year, or decade, if you’re looking to work in deep integration that crosses your code bases, then doing so at the same time, with the same schedule, is a great tool.  This is a lot trickier than it sounds because different products have different ways to schedule (service deployment, hardware ranging, partner bring up, and more all have different schedule “tails”).  Products can have different deadlines as well, as dictated by their channel strategies (shipping for holiday means one thing for hardware, another thing for software delivered to hardware partners, and another thing for enterprise products).

Discipline expertise is everything.  In any team size, but particularly in very small and very large organizations, there can be a tendency for “jack of all trades” efforts.  This is where people think or act as experts in a variety of disciplines—engineering crossing over into marketing, marketing crossing over into product management, sales crossing over into support (or one level down where outbound marketing crosses into advertising, etc.).  The reality is that if you’re going to execute along discipline lines then you really want to respect the skills and abilities of those lines.  It turns out this is often the most difficult thing to pull off in a discipline organization.  Something as “simple” as pricing or advertising, clearly a marketing responsibility, is almost trivial for everyone to have an opinion on, especially the more senior they get (we all buy stuff).  A lot of time can be spent by the discipline experts working to get buy-in from parts of the team that probably have enough to worry about.  The essence of this, which is a big part of our book, is not supporting the culture of escalation—that is making sure management does not allow decisions to percolate up the org structure just because of the desire to get buy-in across the different disciplines or because the choices involve other parts of the organization.  Things should be decided closest to the work and decisions should be made within the context of accountability by disciplines in this structure, and those people are responsible for a global view of the issues and challenges.

Org depth and span are critical. The biggest balancing act in orgs of more than about 100 people is to figure out how many managers to have.  At one extreme you have one engineering manager with like 80 reports.  At the other extreme you can end up with I-formations where managers have one direct report.  Neither is particularly healthy.  When you scale up a discipline organization you are also battling the depth of the org tree for the discipline.  While it is very cool to count up 3 or 4 levels and see an engineer, counting up 7 or 8 can get daunting because at that depth it means, ironically, engineering details might be discussed very high up in the org and you might worry those impact you.  So in a sense, adding a seam of general management is somewhat comforting in that it gives you a clear place where your work “ends”.  The other side of this balancing act is how many reports a manager has.  You want this to be a number such that as high up the org as you can get managers “do work”.  In our book, we talked a bunch about the notion of a “pure manager” which was a phrase that drove me bonkers—in the tech part of a tech company you want as few people as possible who do nothing but manage (work or people).  Numerically, our view is that even with managing upwards of 50 people a dev manager should be contributing actual shipping code to a product routinely.  The more people in a function you have the more you have to figure out where the “no work” seam is, and then take that into account when it comes to deciding things at that level.

Collaboration starts at staff meetings at all levels. At first we all tend to reject meetings of any sort as Dilbert-esque exercises, when meetings are really an integral part of collaboration (see http://www.slate.com/articles/news_and_politics/readme/2002/04/an_ode_to_managers.html).  In orgs of any size there are two kinds of regularly scheduled “rhythm” meetings.  Looking at engineering as an example, first there is the meeting of all the devs working on an area that goes through the schedule and the details of the implementation.  I would describe this as a dev lead and 5-7 individual contributors working on a feature area.  Second is a meeting of the sub-disciplines of engineering focused on dev, test, product management, design, operations where the focus is on the complete picture of where the project stands.  Some might do this differently—for example just 1:1s plus the sub-disciplines.  One level up this meeting looks like everyone working in development on a large area, and the sub-discipline leaders for all those areas.  At some point the cross-discipline meeting turns into large functional areas of engineering, marketing, etc.  The most critical thing about the meetings that cross (sub-) disciplines is that everyone needs to be working on the same thing and have the same understanding of what is going on.  In other words, it turns out that staff meetings will naturally be effective tools for collaboration if folks are all working on the same product, schedule, architecture, partnerships and more.  Once someone in the meeting has a different part of the seam or someone is managing a portfolio of products, they will necessarily be working at a level of abstraction where it is challenging to make commitments, know the details of issues, or otherwise actually decide things.  This is always a scaling challenge.  Historically, it is what has led me to appreciate a mixed model of org structure, one that tends to reduce the number of “product portfolios”.  Said another way, a single manager who sees seams in his/her management domain (i.e. code bases, geographies, products) will naturally (necessarily?) tend to organize their teams along those lines and essentially “break” the discipline model.

One final thought on lessons learned, and that has to do with the reality of how and where work gets done in an organization of any size.  It is really critical to view an organization from the bottom up—that is how things are really done.  In a tech product, features you can see as a human in a product are usually done by a very small number of people.  Those people work together day and night and all the time.  From their perspective they would love to have the same manager, sit next to each other, and otherwise not have to work with other people.  From their perspective, anything less is less than optimal.  Yet at any scale, this just isn’t practical as tradeoffs need to be made (even in something as simple as how far you have to walk to your coworkers).  Being able to articulate a clear understanding of how the work gets done, what expectations there are for cross-group work, and why things will be neither gummed up nor designed by a committee “up in the clouds” are all important questions and lessons learned.

In reference to how work gets done, one challenge I’ve experienced has been the proponents of agile methods who almost by definition did not appreciate a discipline-oriented organization.  The root of those methods is to have all those working on something together in org structure, physical proximity, and management—yet the physics of org structures don’t make it possible to solve exclusively for that.  Imagine proposing an org structure that to some argued against being agile.

That’s why context matters so much and there is not a prescriptive answer to the best or ideal org structure.

–Steven

Written by Steven Sinofsky

July 30, 2013 at 9:00 am

Posted in posts


Sharing some learning – a few observations from “D11”


This past week the 11th All Things D Conference, D11, was held.  It is such a great opportunity to attend and to learn from a great combination of interviews, speakers, demonstrations, questions, and attendees.  Attending this conference has been a very valuable learning experience for me over the years and I’ve always made it a point to reflect and share some observations or learnings that stuck with me.  This year is no different.

As with all events these days, so much of what happens at the event is tweeted, live blogged, re-blogged, etc.  That makes it challenging to offer more by way of learning. If you’re interested in the details of the sessions, by all means watch the videos or see the official coverage on the All Things D, D11 Conference site.  All the interviews are done by one or both of Walt Mossberg and Kara Swisher.  There you’ll also find some behind the scenes “KatieCam” videos shot by WSJ writer Katherine Boehret in a more relaxed setting as speakers left the stage and other behind the scenes videos and articles by the ATD writing team. Definitely check out the amazing photos from Asa Mathat (and team) that really capture the unique qualities of the conference.

“The Dialog”

For me what separates D from other events, if you had to pick one thing, is the dialog that takes place.  While the format is an interview, I see it as more of a dialog.  There are no slides, no setup, and after the interview the dialog continues with audience questions and then even more in the hallways during breaks (not to mention the electronic dialog).  I feel sometimes in an effort to report the event as news, the back and forth or the dialog itself can get a bit de-prioritized.

The dialog is important because the timing of the conference is the same every year.  That means not every speaker has something to announce or launch.  In fact some speakers have announcements already scheduled for the future and even with a lot of pushing they still aren’t going to preempt their organization’s efforts.  This means that speakers sign up to attend knowing there are definitely questions they will get that must go unanswered.  I think that speaks volumes to the appreciation for the dialog and participation that speakers share.

Still, that can be a tiny bit frustrating for folks reading about the accounts—you are hoping for news but don’t get any.  There is a slightly different tone “in the room” which I am hoping to convey through these notes.  The tone is very much about the nuance and subtlety of the topics being raised.  So even if there is not news, the conversation is interesting.  It is an important part of innovation and convergence of industries (the original and ongoing theme of the conference was how media, entertainment, and digital technologies are coming together).  There are gems in most every session if you watch the video—not necessarily news gems, but articulation of challenges and tradeoffs that everyone is facing as they do their work.  Making products is never a stark either/or set of choices and capturing these tradeoffs on stage, in the “hot seat” as it is called, is something I appreciate very much.

Big picture

There were 25 speakers along with demo sessions.  The breadth of topics discussed delivers on the promise of the conference.  Through the lens of product development there were a number of “themes” that surfaced for me:

  • Mobile “era” – No one doubts the era we are in as an industry and across industries.  The tech folks were “mobile first” from apps to advertising, not as a place to port to or also support.  The entertainment folks see mobile as a place to enjoy entertainment or as the screen that accompanies entertainment, not as a competitor to television.  Even attendees were mostly seen on their mobile devices most of the time.  While this might not seem newsworthy, observing the changing perspectives over the years of the conference provides a neat context for this change.
  • Disruption – Most tech conferences are about disruption in some form or another.  This conference came about during a time when disruption was really happening (and to be fair, the WSJ and ATD are/were both part of disruptive dialogs over the years—and the topic of conversation at the show).  The interviews always do a good job of confronting speakers who are viewed as participants in a potential disruption.
  • Sensors – The role of sensors as part of the baseline experience for computing is front and center.  There was a lot of discussion around form factors, wearables, and scenarios but all of this is rooted in devices that know about surroundings, which means products can be designed knowing the computers will have these capabilities.
  • Consumerization – Walt Mossberg has always taken the non-techie, consumer approach to looking at technology which, as he said during the show, was somewhat heretical when he first started his column.  These days the notion of consumers driving the experience and setting the bar does not seem so far-fetched.   You know that is the case when the CEO of Cisco says “bring your own device trumps security”.
  • Embrace of digital – In past years the “content” attendees appeared more on the defense than the offense.  While the business challenges remain in some parts of the content space, I think there is far more of a sense of embrace and partnering going on between the tech and content parties.  In general it felt to me like much more of a healthy dialog rooted in respect than in past years, which is a positive evolution.

Sessions

As mentioned, the sessions are all available on the D11 site along with live blogs done by WSJ/ATD reporters.  Check those out for sure.  I just wanted to offer some additional observations from a small set of sessions that hit close to home from a product development perspective.  Inclusion / omission or number of points below are not indications of quality or importance!

Apple / Tim Cook

  • Measuring what counts – There was a strong focus on measuring usage as a way of looking at success.  This contrasted with the recent debate about market share (units or revenue).  The depth usage of iOS devices is significantly more than competing devices.  It is super interesting to think about how to inform product development when balancing existing depth usage, new users, and growth – very interesting.
  • Relative to Android – The dialog turned to defining “winning” along the lines of usage, customer satisfaction, and even the amount of commerce done on iOS devices.
  • Magic – There was a good discussion about how working across the team needs to focus on the intersection of hardware/software/services as being where the “magic happens”.  Everyone in the product space knows that wherever seams exist there is an opportunity to innovate or for there to be challenges–seams can be found all over the place, especially as a product gets larger or an ecosystem around the product develops.
  • Tradeoffs – As an example of the nuance/subtlety that is hard to capture, Cook tried to walk through some of the tradeoffs that go into making different sized devices for different “segments” (Walt’s description).  He talked about color correctness, white balance, battery life, brightness, and more.  A favorite expression from Cook was “customers expect Apple to weigh all these factors and decide things” along with the humble notion that deciding means shipping and learning.  I personally love when the dialog turns to these types of issues at this “level” in an organization and also externally—real engineering stuff that is worth talking about in an open way.
  • Openness and control – In talking about the difference between iOS and Android (using keyboards as an example), Cook was asked about opening up more.  He talked about the challenges and tradeoffs involved in “putting the customer at risk” with some types of APIs and openness, but committed to more openness at the upcoming WWDC.  Again there was a very interesting and subtle discussion about the tradeoffs involved.

Facebook / Sheryl Sandberg

  • Mobile is good for Facebook – There were a lot of numbers and support for how much engagement there is from both users and advertisers on mobile.
  • Increasing engagement – Sandberg shared some numbers that were counter-intuitive for many (as evidenced by the reaction in the section where I was sitting) when she talked about the increase in engagement.  Five years ago 50% of people visited every day.  Now 58% visit every day, and the total number of users is much higher.
  • Priorities – I loved when she talked about how they have 5000 people to build and operate a service for a billion people.  That puts the product development challenge in perspective.
  • Mobile first – There is a strong “pivot” in the development team around mobile first.  Whereas the browser used to be the primary target and the mobile teams would be playing catch-up, now nothing gets done without it being mobile first.
  • Facebook Home – There are real challenges in doing an offering that is this polarizing.  She cited that customer reviews are either 1 star or 5 stars.  Home is a V1, and the team expects to deliver on the commitment to frequent changes/updates.

Disney Parks and Resorts / Tom Staggs

  • My Magic Plus – This session was about a new way to enjoy a WDW (Walt Disney World) theme park visit—essentially you wear a “magic band” around your wrist (like a Jawbone Up or Fitbit).  As someone who grew up in Orlando watching WDW go from the Magic Kingdom surrounded by orange groves to what it is today, I think the revolution that is going on with this innovation is amazing and far-reaching.
  • Features – Wearing the band provides an experience with reduced anxiety, less waiting, more fun, and far more personal.  And it is just starting.  An amazing example I loved was how you could order the food you want and when you get to the restaurant you sit down and what you ordered just shows up.  Neat.  But what is really neat is that the employees can focus on being “hosts” and not the transactional elements of ordering and getting things right.  Super cool.  It certainly makes that summer job at Disney a lot more fun!
  • Senses and sensors – Of course this is all about location-aware, cloud-connected experiences.  But the way Staggs described it was “360-5”, a 360 degree experience for all 5 senses—you’re immersed in the experience beyond the rides.  In general, this was a demonstration that unfolded super well—as I thought of questions they got answered moments later.  So much opportunity on this platform.

Twitter / Dick Costolo

  • “Social soundtrack” – Twitter was described as the second screen for television.  It is viewed as a complement to broadcast.  This was a statement that gets broadened to mean that Twitter is not itself thinking about making content or distributing it.
  • Global town square – The way they think of Twitter is to think about both planned/unplanned events and to provide an unfiltered/inside out platform for the people “the event is happening to”.  This town square is public, real-time, conversational, and distributed.  From a product point of view, the clarity of this framework is incredibly valuable.
  • Advertising – Costolo discussed how advertisers are coming to understand that being part of the conversation is important and how the idea of having a conversation as the canvas versus the ad itself as the canvas is important.
  • Design – Another subtle part of the dialog was around where the openness of the Twitter platform will be.  The idea is that Twitter does want to own the timeline experience for customers but still be open to thousands (100s of thousands) of developers with fairly lightweight rules.  Simplicity is a major focus of the design of the timeline experience.

Glow / Max Levchin

  • Demonstration – this was a demonstration of a new product that brings data and mobility to the challenges of procreation and fertility.
  • App – The app is focused on being a beautiful source of telemetry and information for both the man and woman planning together to conceive a child.
  • Data – Turns out that there is tons of data which is hard for people to get hold of and include in their planning and efforts.  Glow is a way to bring this data to the solution space for people.
  • Funding – The data shows that with the right use of data, “infertility” can drop way down, and thus the overall cost to the healthcare system is much lower.  To support this, the product will essentially create a pool for people who are still unable to conceive after using the tool, which should be a much smaller number than with less data-informed approaches.
  • Innovation – This is truly innovative when it comes to the problem space–hearing Levchin describe a typical way physicians handle this sounds almost like “country medicine” compared to using the data, telemetry, and an app.  Combining data, mobility, and more into this app shows how empowering all the technology can be.  We’re all starting to experience this notion of being in so much more control of our lives with these technology tools.

Box / Aaron Levie and Cisco / John Chambers

  • What fun – This was such a fun pairing, as the contrast between the people and companies was so interesting.  Yet at the same time, both organizations are developing products for a new world where individuals are far more empowered.  While no one is going to go out and buy their own router, the IT pros who do buy them want the capability to support you when you bring in your own device.  A fun part of D in general is seeing widely different perspectives in a dialog about a problem space each is approaching.
  • IT control – Chambers asserted that the ability for IT to “say no” really changed 4 or 5 years ago and now enterprises need to catch up to consumer technologies and support them.  Chambers even said “BYOD trumps security”.
  • Disruption – Levie offered a wonderful example of how companies are handling disruption.  He said that the three biggest Box customers are companies formed in the 1800’s.  This speaks to how much change is going on among IT pros.

Disney Media / Anne Sweeney  and Producer / I. Marlene King

  • Twitter integration – It was fascinating to hear the content developer view of creating content knowing that Twitter is part of the viewing equation.  There’s a clear perspective that Twitter is contributing to the experience and enjoyment of the show.
  • OMG moments – I loved hearing about the way they essentially create the show to support “OMG” or “jump off the couch” moments, and how that plays into Twitter.
  • Time zones – Turns out that the audience is pretty self-governing when it comes to spoilers and time zones, which was interesting to think about.

Pinterest / Ben Silbermann

  • First appearance – Ben doesn’t often appear or do presentations. It is great to see him.
  • Framing – Another great example of framing the goals of the product: Pinterest aims to help people “discover things they really love and inspire them to experience them in real life.”
  • Early users – From a product development perspective, he spoke about how early users ended up setting the tone of the product when it comes to passion.
  • Last web app? – Kara asked if Silbermann thought that Pinterest might be the “last web first app” or not.  The answer focused on starting off where people were; today, of course, the goal is to be able to use the service wherever you are, and a ton of that is mobile, which has overtaken the PC in line with industry trends.

Tesla, SpaceX, Hyper Tube / Elon Musk

  • Along with everyone at D11 and online, this was an incredible treat.
  • “Mars is a fixer upper” – as far as planets go, Musk said Mars is our best bet for life on another planet since it can be fixed up relatively easily.
  • Every technology takes 3 or 4 generations to get to mass market.  He walked through the original Tesla plan (high price/low volume, mid price/mid volume, low price/high volume).  He framed this as competing with a hundred years and a trillion dollars of investment in gas combustion.  This is a great example of how disruption gets talked about in early stages – all the focus on whether electric cars can displace gas cars using the criteria gas cars developed over all this time.  From a product point of view, this perspective is super interesting.

– Steven Sinofsky

# # # # #

Written by Steven Sinofsky

June 2, 2013 at 12:45 pm

Posted in posts


Conversation #38– disrupt or die

with 25 comments

Anyone worth their salt in product development knows that listening to customers through any and all means possible is the means to innovation. Wait a minute, anyone worth their salt in product development knows that listening to customers leads to a faster horse.

Deciding your own product choices within these varying perspectives is perhaps the seminal challenge in product development, tech products or otherwise. This truly is a tyranny of or, but one in which changing the rules of the game is the very objective.

This discussion is such a common dialog in the halls of HBS, as well as tech companies everywhere, that it should probably be a numbered conversation (for this blog let’s call it Conversation #38 for shorthand—disrupt or die).

For a recent discussion about why it is so difficult for large companies to face changes in the marketplace, see this post Why Corporate Giants Fail to Change.

“Disrupt or die” or “disrupt and die”?

Failure to evolve a product as technologies change or as customer scenarios change is sure to lead to obsolescence or elimination from the marketplace. It is difficult to go a day in tech product development without hearing about technology disruption or “innovator’s dilemma”. The biggest fear we all have in tech is failing to keep up with the changing landscape of technologies and customers, and how those intersect.

At the same time, hopefully we all get to that lucky moment when our product is being used actively by customers who are paying. We’re in that feedback loop. We are improving the product, more is being sold, and we’re on a roll.

That’s when innovation over time looks like this:

[Chart: Incremental innovation improving steadily over time]

In this case as time progresses the product improves in a fairly linear way. Listening to customers becomes a critical skill of the product team. Product improvements are touted as “listening to customers” and things seem to go well. This predictability is comforting for the business and for customers.

That is, until one day when needs change or perhaps in addition a new product from a competitor is released. Seemingly out of nowhere the great feedback loop we had looks like it won’t help. If we’re fortunate enough to be in tune to changing dynamics outside our core (and growing) customer base we have time to react and change our own product’s trajectory.

That’s when innovation looks like this:

[Chart: New Product changing the trajectory]

This is a time when the market is receptive to a different point of view, and a different product — one that redefines, or reimagines, the category. Sometimes customers don’t even realize they are making a category choice, but all of a sudden they are working differently. People just have stuff to get done and find tools that help.

We’re faced with what seems like an obvious choice—adjust the product feature set and focus to keep up with the new needs of customers. Failing to do so risks losing out on new sales, depth usage, or even marginalization. Of course features/capabilities is a long list that can include price, performance, battery life, reliability, simplicity, APIs, different integration points or service connections, and any other attributes that might be used by a new entrant to deliver a unique point of view around a similar scenario.

Many folks will be quick to point out that such is only the case if a new product is a “substitute” for the product people are newly excited about. There is truth to this. But there is also a reality shown time and time again which gets to the heart of tech bets. It is almost always the case that a new product that is “adjacent” to your product has some elements of more expensive, more complex in some dimensions, less functional, or less than ideal. Then what seems like an obvious choice, which is to adjust your own product, quickly looks like a fool’s bet. Why would you chase an inferior product?  Why go after something that can’t really replace you?

The examples of this are too numerous to count. The iPhone famously sucked at making phone calls (a case where the category of “mobile phone” was under reinvention and making calls turned out to be less important). Solid State storage is famously more expensive and lower capacity than spindle drives (a case where the low power, light weight, small size are more valued in mobile devices). Of course tablets are famously unable to provide apps to replace some common professional PC experiences (a case where the value of mobility, all day battery life, always connected seem more valued than a set of platform capabilities). Even within a large organization we can see how limited feature set cloud storage products are being used actively by employees as “substitutes” for enterprise portals and file shares (a case where cross-organization sharing, available on the internet, and mobile access are more valued than the full enterprise feature set). The list goes on and on.

As product managers we all wish it was such a simple choice when we face these situations. Simply leapfrog the limited feature set product with some features on our profitable product. Unfortunately, not every new product that might compete with us is going to disrupt us. So in addition to facing the challenges of evolving the product, we also have to decide which competitors to go after. Often it takes several different attempts by competitive products to offer just enough in the way of new / different approaches to begin to impact an established product.

Consider, for example, how much effort the Linux community put into desktop Linux. And while this was going on, Android and iOS were developed and offered a completely different approach that brought new scenarios to life. A good lesson is that a head-on alternative will quite often struggle and might even result in missing other disruptive technologies. Having a unique point of view is pretty important.

The reality of this situation is that it is only apparent in hindsight. While it is going on the changes are so small, the product features so minimal, and the base of the customers choosing a new path so narrow that you don’t realize what is going on. In fact, the new product is also on an incremental innovation path, having attained a small amount of traction, and that incremental innovation rapidly accumulates. There is a tipping point.

That is what makes acting during such a “crisis” so urgent. Since no one is first all the time (almost by definition when you’re the leader), deciding when and how to enter a space is the critical decision point. The irony is that the urgency to act comes at a time when it appears from the inside to be the least urgent.

Choosing to innovate means accepting the challenges

We’ve looked at the landscape and we’ve decided as a team that our own product needs to change course. There is a real risk that our product (business) will be marginalized by a new entry adjacent to us.

We get together and we come up with the features and design to go after these new scenarios and capabilities.

The challenge is that some of what we need to do involves changing course—this is by definition what is going on. You’re Apple and you decide that making phone calls is not the number 1 feature of your new mobile phone, or that your new tablet won’t run OS X apps. Those are product challenges. You also might face all sorts of challenges in pricing, positioning, and all the things that come from having a stable business model. For example, your competitor offers a free substitute for what you are selling.

The problem is that your existing customers have become conditioned to expect improvements along the path you were traveling together. Worse, they are by definition not expecting a “different” product in lieu of a new version of their favorite product. These customers have built up not just expectations, but workflows, extensions, and whole jobs around your product.

But this is not about your existing and best customers, no matter how many there are. It is about the foundation of your product shifting: new customers are using a new product, or existing customers are using yours less and less.

Moving forward the product gets built and it is time to get it into market for some testing or maybe you just release it.

BOOM!

All that work your marketing team has done over the years to establish what it means to “win” in the space that you were winning is now used against you. All the “criteria” you established against every competitor that came along are used to show that the new product is not a winning product. Except it is not winning in the old way. What you’ve done is become your own worst enemy.

But even then, the new way appears to be the less than optimal way—more expensive, fewer features, more clicks, or simply not the same at doing the things the product used to do.

The early adopters or influential users (that was an old term in the literature, “IEU” or sometimes “lead user”) are immediately taken aback by the change in direction. The workflows, keystroke memory, add-ins, and more are just not the same or no longer optimal–there’s no regard for the new scenarios or capabilities when the old ones are different. Worse, they project their views across all customer segments. “I can’t figure this out, so imagine how hard it will be for my parents” or “this will never be acceptable in the enterprise” are common refrains in tech.

This happens no matter who a product is geared towards or how complex the product was in the first place. It is not about what the product does so much as about the change in how it does the things people were familiar with. This could be in user experience, pricing, performance, platform requirements, or more.

You’re clearly faced with a set of choices that just don’t look good. In Lean Startup, Eric Ries talks in detail about the transition from early users of a new product to a wider audience. In this context, what happens is that the early users expect (or tolerate) a very different set of features and have very different expectations about what is difficult or easy. His conclusion is that it is painful to make the transition, but at some point your learning is complete and it is time to restart the process of learning by focusing on the broader set of customers.

In evolving an existing product, the usage of a pre-release is going to look a lot like the usage of the current release.  The telemetry proves this for you, just to make this an even more brutal challenge.  In addition, because of the years of effort the enthusiasts put into doing things a certain way and all that work establishing criteria for how a product should work, the obvious thing to do when testing a new release is to try everything the old release did, compare it to the old product (the one whose course you are changing), and then maybe try some new stuff.  This looks a lot like what Eric describes for startups.  For products in market, the moment is pretty much like the startup moment, since your new product is sort of a startup, but on a new trajectory.

Remember what brought us here, two things:

  • The environment of usage or business around the product was changing and a bet was made that changes were material to the team. With enough activity in the market, someone will always argue that this change is different and the old and new will coexist and not cannibalize each other (tell that to PalmPilot owners who swore phones would be separate from calendar and contacts, or GPS makers who believe in stand-alone units, or…).
  • A reminder that if Henry Ford had asked customers what they wanted from a car they would have said a faster horse. The market was conditioned to ask for and/or expect improvements along a certain trajectory and no matter what you are changing that trajectory.

All the data is flowing in that shows the new product is not the old product on the old path. Not every customer is interested in doing new things, especially the influential testers who generally focus on the existing ways of doing things, have domain expertise, and are often the most connected to the existing product and all that it encompasses. There is an irony in that for tech these customers are also the most tech-savvy.

Pretty quickly, listening to customers is looking exceedingly difficult.

If you listen to customers (and vector back to the previous path in some way: undo, product modes, multiple products/SKUs, etc.) you will probably cede the market to the new entrants or at least give them more precious time. If technology product history is any guide, pundits will declare you will be roadkill in fairly short order as you lack a strategic response. There’s a good chance your influential customers will rejoice as they can go back and do what they always did.  You will then be left without an answer for what comes next for your declining usage patterns.

If you don’t listen to customers (and stick to your guns) you are going to “alienate” folks and cede the market to someone who listens. If technology product history is any guide, pundits will declare that your new product is not resonating with the core audience. Pundits will also declare that you are stubborn and not listening to customers.

All of this is monumentally difficult simply because you had a successful product. Such is the price of success. Disrupting is never easy, but it is easier if you have nothing to lose.

Many folks will be quick to say that new products are fine but they should just keep the old product’s way of doing things. This can seem like asking for a Prius with a switch to turn off the battery (my 2002 Prius came with a training DVD, parking attendant reference card, and more!). There are many challenges with the “side by side” approach. The most apparent is that it only delays the change (meaning it delays your entry into the new market or meeting the new scenarios). Perhaps in a world of cloud services, where you have less of a “choice” in the change, this is more routine, but the operational costs are real. In client code/apps the challenge very quickly becomes doing things twice. The more complex the changes are, the more costly this becomes. In software nothing is free.

Product development is a social science.

People and time

In this numbered conversation, “disrupt or die” there are a few factors that are not often discussed in detail when all the debates happen.

First, people adapt. The assumption, especially about complex tech products, is that people have difficulty or lack of desire to change. While you can always overshoot the learning people can or are willing to do, people are the most adaptable part of a system. One way to think about this is that every successful product in use today, those that we all take for granted, were introduced to a customer base that had to change behavior.  We would not be where we are today without changing and adapting.  If one reflects, the suboptimal change (whether for the people that are customers or the people running a business) is apparent with every transition we have made.  Even today’s tablets are evidence of this.  Some say they are still for “media consumption” and others say they are “productivity tools”.  But behind the scenes, people (and developers) are rapidly and actively changing and adapting to the capabilities of tablets because the value proposition is so significantly improved in some dimensions.

Second, time matters. Change is only relative to knowledge people have at a moment in time and the customers you have at the moment. New people are entering the customer base all the time and there is a renewal in skills, scenarios, and usage patterns. Five years ago almost no one used a touch screen for very much.  Today, touch is a universally accepted (and expected) input method.  The customer base has adapted and also renewed around touch.  Universities are the world’s experts at understanding this notion of renewal. They know that any change to policy at a university is met with student resistance (especially in the spring). They also know that next year, 25% of the “customer base” will be replaced. And in 3 summers all the students on campus will only know the new way. One could call that cynical. One could also call that practical.
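
To make that renewal arithmetic concrete, here is a quick back-of-the-envelope sketch (mine, not from the post or any data in it). It assumes a generic customer base where a fixed fraction turns over each year—a looser model than the university’s fixed four-year cohorts; the 25% rate is borrowed from the example above and everything else is illustrative.

    # Illustrative sketch: how much of a customer base has only ever known
    # the "new way" after a change ships, assuming a fixed fraction of the
    # base turns over each year. The 25% rate echoes the university example;
    # the geometric turnover model itself is an assumption for illustration.
    def share_only_knowing_new_way(years, annual_turnover=0.25):
        # Fraction of the base that arrived after the change shipped.
        return 1.0 - (1.0 - annual_turnover) ** years

    for year in range(1, 7):
        print(year, round(share_only_knowing_new_way(year), 2))
    # With 25% annual turnover, a majority of the base has never seen the
    # old way by year 3, and roughly 80% by year 6.

The specific numbers matter less than the shape: every year the share of customers anchored to the old trajectory shrinks.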

Finally, time means that major product change, disruption, is always a multi-step process. Whether you make a bet to build a new product that disrupts the market dynamics or change an existing product that disrupts your own product, it rarely happens in one step.  Phones added copy/paste and APIs and even got better at the basics.  The pivot is the tool of the new endeavor until there is some traction.  Feedback, refinement, and balancing the need to move to a new space with the need to satisfy the installed base are the tools of the established product “pivoting” in response to a changed world.  It takes time and iteration–just the same way it took time and iteration to get to the first summit.  Never lose sight of the fact that disrupting is also product development and all the challenges that come from that remain–just because you’re disrupting does not mean what you do will be perfect–but that’s a given we all work with all the time.  We always operate knowing there is more change to come, improvements and fixes, as we all learn by shipping.

These factors almost always demonstrate, at least in the medium term, that disruption is not synonymous with elimination.  Those championing disruption often over-estimate progress towards elimination in the short term, though history has shown the long term to be fairly predictable.  Black cars are still popular.  They just aren’t the only cars.

Product development choices are based on social science. There is never a right answer. Context is everything.  You cannot A/B test your way to big bets or decisions about technology disruption. That’s what makes all of this so fun!!

Go change the rules of the game!

–Steven Sinofsky

Note.  I believe “disrupt or die” is the name of a highly-regarded management class at General Electric’s management school.

Written by Steven Sinofsky

May 8, 2013 at 1:00 pm

Posted in posts


Forking responsibly

with 2 comments

In a previous post, the topic of surviving legacy code was discussed.  Browsers (or rendering engines within browsers) represent an interesting case of mission-critical code as described in that post.  A few folks noticed yesterday that Google has started a new rendering engine based on the WebKit project (“This was not an easy decision,” according to the post).

Relative to moving legacy code forward, this raises some interesting product development challenges.  This blog focuses on product development and the tradeoffs that invariably arise; it is definitely not about being critical of or analyzing choices made by others, as there are many other places to gain those perspectives.  It is, however, worth looking at actions through the lens of the product development discipline.

In this specific case there is an existing code base, legacy code, and a desire to move the code base forward.  Expressed in the announcement, however briefly, is the architectural challenge faced by maintaining the multi-process architecture.  Relative to the taxonomy from the previous post, this is a clear case of the challenges of moving an architecture forward.  The challenge is pretty cut and dry.

The approach taken looks very much like a break in the evolution of the code base, a “fork” as some have described it.  Also at work are efforts after forking to delete unused code, another technique for managing legacy code described previously.  These are perfectly reasonable ways to move a code base forward, but they also come with some challenges worth discussing.

What the fork?

(OK, I couldn’t resist that, or the title of this post).

Forking a code base is not just something one can do in the open source world, though there is somewhat of a special meaning there.  It is a general practice applicable to any code base.  In fact, robust source code control systems are deliberate in supporting forks because that is how one experiments on a code base, evolves it asynchronously, or just maintains distinct versions of the code.

A fork can be a temporary state, often called a branch when there are several and the intent to be temporary is clear.  This is what one does to experiment with an alternate implementation or a new feature.  After the experiment the changes are merged back in (or not) and the branch is closed off.  Evolution of the code base moves forward as a singular effort.
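
As a minimal sketch of that temporary-fork cycle, here is what it can look like with git as the source control system, driven from a small Python helper.  The branch name is hypothetical, and it assumes you are inside a repository whose main line is called main.

    # Minimal sketch of the experiment-then-merge cycle using git.
    # Assumes a git repository whose main line is named "main";
    # the branch name is made up for illustration.
    import subprocess

    def git(*args):
        print("$ git " + " ".join(args))
        subprocess.run(["git", *args], check=True)

    # 1. Branch off so the experiment does not disturb the main line.
    git("checkout", "-b", "experiment/alternate-implementation")

    # ... commit experimental changes on the branch here ...

    # 2. If the experiment pans out, merge it back and close the branch off;
    #    if it does not, simply delete the branch and the main line is untouched.
    git("checkout", "main")
    git("merge", "--no-ff", "experiment/alternate-implementation")
    git("branch", "-d", "experiment/alternate-implementation")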

A fork can also be permanent.  This is where one can either reap significant benefits or introduce significant challenges, or both, in evolving the code.  One can imagine forks that look like one of these two:

[Images: a dinner fork and a fork in the road]

In the first case, the two paths stay in parallel.  That’s an interesting approach.  It is essentially saying that the code will do the same thing, but differently.  In code one would use this approach if you wanted to maintain two variations of the same product but have different teams working on them.  The differences between the two forks are known and planned.  There’s a routine process for sharing changes as each of the branches evolves.  In many ways, one could view the current state of WebKit as being in this state, since at no point is there a definitive version in use by every party.  You might just call this type of fork a parallel evolution.

In the second case, the two paths diverge and diverge more over time.  This too is an interesting approach.  This type of fork is a one-time operation and then the evolution of each of the branches proceeds at the discretion of each development team.  This approach says that the goals are no longer aligned and different paths need to be followed.  There’s no limitation to sharing or merging changes, but this would happen opportunistically, not systematically.  Comments from both resulting efforts of the WebKit fork reinforce the loosely coupled nature of the fork, including deleting the code unused by the respective forks along with a commitment to stay in communication.

For any given project, both of these could be appropriate.  In terms of managing legacy code, both are making the statement that the existing code is no longer on the right evolutionary path—whether this is a technical, business, or engineering challenge.
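
One way to feel the difference between those two pictures is a toy model of how much code the two forks still share after a few release cycles, depending on how systematically they exchange changes.  To be clear, the change rate, the merge-back factor, and the model itself are made-up assumptions purely for illustration, not data about WebKit or any real project.

    # Toy model: fraction of code still common to both forks after a number
    # of release cycles. change_rate is the share of each fork rewritten
    # independently per cycle; merge_back is how much of that independent
    # change the forks exchange (close to 1.0 ~ parallel evolution with
    # routine sharing, 0.0 ~ a clean break with only opportunistic sharing).
    # All of the numbers here are illustrative assumptions.
    def shared_fraction(cycles, change_rate=0.10, merge_back=0.0):
        shared = 1.0
        for _ in range(cycles):
            diverging = change_rate * (1.0 - merge_back)
            # Both forks change independently, so shared code erodes from both sides.
            shared *= (1.0 - diverging) ** 2
        return shared

    for merge_back in (0.0, 0.5, 0.9):
        print(merge_back, round(shared_fraction(cycles=5, merge_back=merge_back), 2))
    # With no routine sharing the forks have visibly diverged after five cycles;
    # with most changes flowing back and forth they stay largely in sync.

The numbers are arbitrary, but the shape is the point: divergence compounds, which is why the choice between the two kinds of fork is really a choice about how much ongoing coordination the teams sign up for.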

Forking is a revolutionary change to a code base.  It is sort of the punctuation in a punctuated equilibrium.  It is an admission that the path the code and team were on is no longer working.

Maintaining functionality

The most critical choice to make when forking code is to have an understanding of where the functionality goes.  In the taxonomy of managing legacy code, a fork is a reboot, not a recast.

From a legacy code perspective, the choice to fork is the same as a choice to rewrite.  Forking is just an expedient way to get started.  Rather than start from an empty source tree, one can visualize the fork as a tree copy of all the existing code to a new project and a fast start.  This isn’t cheating.  It can be a big asset or a big liability.

As an asset, if you start from all the same existing code then the chances of being compatible in terms of features, performance, and quality are pretty high.  Early in the project your code base looks a lot like the one you started from.  The differences are the ones you immediately introduce—deleting code you don’t think you need, rewriting some parts critical to you, refactoring/restructuring for better engineering.  All of these are software changes and that means, definitionally, there will be regressions relative to the starting point in the neighborhood of 10%.

On the other hand, a fork done this way can also introduce a liability.  If you start from the same code you were just using, then you bring with it all the architecture and features that you had before of course.  The question becomes what were you going away from?  What was it that could not be worked into the code base the way it stood?  The answers to these questions can provide insights into the balance between maintaining exact functionality out of the gate and how fast and well you can evolve towards your new goals down the road.

In both cases, the functionality of the other fork is not standing still (though on a project where your team controls both forks, you can decide resource levels or the amount of change tolerated in one or the other fork).  The functionality of the two code bases will necessarily diverge, because keeping them the same would mean doing everything twice and in the same way, which will prove to be impossible.  In the case of WebKit it is worth noting that it was itself derived from a fork of KHTML, which has since had a challenging path (see http://en.wikipedia.org/wiki/WebKit).

Point of view required

As said, the process of rebooting via any means is a perfectly viable way to move forward in the face of legacy code challenges.  What makes it possible to understand a decision to fork is having (or communicating) a point of view as to why a fork (a reboot, rewrite) is the right approach.  A point of view simply says what problem is being solved and why the approach solves the problem in a robust manner.

To arrive at such a conclusion, the team needs to have an open and honest dialog about the direction things need to go and the capabilities of the team and existing code to move forward.  Not everyone will ever agree—engineers are notoriously polarizing, or some might say “religious”, at moments like this.  Those that wrote the code are certain they know how to move it forward.  Those that did not write the code cannot imagine how it could possibly move forward.  All want ways to code with minimal distraction from their highest priorities.  Open minds, experimentation, and sharing of data are the tools the team can use to work (and work it is) toward a shared approach to the fork.

If the team chooses a reboot, the critical information to articulate is the “why”.  In other words, which assumptions about the existing code are no longer valid given the new direction or strategy?  Just as critical are the new bets or new assumptions that will drive decision making.

This is not a story for the outside world, but is critical to the successful engineering of the code.  You really need to know what is different—and that needs to map to very clear choices where one set of assumptions leads to one implementation and another set of assumptions leads to very different choices.  Open source turns this engineering dialog into an externally visible dialog between engineers.

Every successful fork is one that has a very clear set of assumptions that are different from the original code base.

If you don’t have a set of assumptions that are clearly different to the developers doing the work, then the chances are you will just be a fork and not really drive a distinct evolutionary path in terms of innovation.

Knowing this point of view – the pillars driving a change in code evolution – turns into the story that will get told when the next product releases.  This story will not only need to explain what is new but, ultimately as a matter of engineering, will need to explain to all parties why some things don’t quite work the way they do in the other fork, past or present at the time of launch.

If you don’t have this point of view when you start the project, you’re not going to be able to create one later in the project.  The “narrative” of a project gets created at the start.  Only marketing and spin can create a story different than the one that really took place.

–Steven

Written by Steven Sinofsky

April 5, 2013 at 9:00 am

Posted in posts
