Learning by Shipping

products, development, management…

The Four Stages of Disruption

Innovation and disruption are the hallmarks of the technology world, and hardly a moment passes when we are not thinking, doing, or talking about these topics. While I was speaking with some entrepreneurs recently on the topic, the question kept coming up: “If we’re so aware of disruption, then why do successful products (or companies) keep getting disrupted?”

Good question, and here’s how I think about answering it.

As far back as 1962, Everett Rogers began his groundbreaking work defining the process and diffusion of innovation. Rogers defined the spread of innovation in the stages of knowledge, persuasion, decision, implementation and confirmation.

Those powerful concepts, however, do not fully describe disruptive technologies and products, and the impact on the existing technology base or companies that built it. Disruption is a critical element of the evolution of technology — from the positive and negative aspects of disruption a typical pattern emerges, as new technologies come to market and subsequently take hold.

A central question to disruption is whether it is inevitable or preventable. History would tend toward inevitable, but an engineer’s optimism might describe the disruption that a new technology can bring more as a problem to be solved.

Four Stages of Disruption

For incumbents, the stages of innovation for a technology product that ultimately disrupt follow a pattern that is fairly well known. While that doesn’t grant us the predictive powers to know whether an innovation will ultimately disrupt, we can use a model to understand what design choices to prioritize, and when. In other words, the pattern is likely necessary, but not sufficient to fend off disruption. Value exists in identifying the response and emotions surrounding each stage of the innovation pattern, because, as with disruption itself, the actions/reactions of incumbents and innovators play important roles in how parties progress through innovation. In some ways, the response and emotions to undergoing disruption are analogous to the classic stages of grieving.

Rather than the five stages of grief, we can describe four stages that comprise the innovation pattern for technology products: Disruption of incumbent; rapid and linear evolution; appealing convergence; and complete reimagination. Any product line or technology can be placed in this sequence at a given time.

The pattern of disruption can be thought of as follows, keeping in mind that at any given time for any given category, different products and companies are likely at different stages relative to some local “end point” of innovation.


Stage One: Disruption of Incumbent

The moment of disruption is where the conversation about disruption often begins, even though determining that moment is entirely hindsight. (For example, when did BlackBerry get disrupted by the iPhone, film by digital imaging or bookstores by Amazon?) A new technology, product or service is available, and it seems to some to be a limited, but different, replacement for some existing, widely used and satisfactory solution. Most everyone is familiar with this stage of innovation. In fact, it could be argued that most are so familiar with this aspect that collectively our industry cries “disruption” far more often than is actually the case.

From a product development perspective, choosing whether a technology is disruptive at a potential moment is key. If you are making a new product, then you’re “betting the business” on a new technology — and doing so will be counterintuitive to many around you. If you have already built a business around a successful existing product, then your “bet the business” choice is whether or not to redirect efforts to a new technology. While difficult to prove, most would generally assert that new technologies that are subsequently disruptive are bet on by new companies first. The very nature of disruption is such that existing enterprises see more downside risk in betting the company than they see upside return in a new technology. This is the innovator’s dilemma.

The incumbent’s reactions to potential technology disruptions are practically cliche. New technologies are inferior. New products do not do all the things existing products do, or are inefficient. New services fail to address existing needs as well as what is already in place. Disruption can seem more expensive because the technologies have not yet scaled, or can seem cheaper because they simply do less. Of course, the new products are usually viewed as minimalist or as toys, and often unrelated to the core business. Additionally, business-model disruption has similar analogues relative to margins, channels, partners, revenue approaches and more.

The primary incumbent reaction during this stage is to essentially ignore the product or technology — not every individual in an organization, but the organization as a whole often enters this state of denial. One of the historical realities of disruption is uncovering the “told you so” evidence, which is always there, because no matter what happens, someone always said it would. The larger the organization, the more individuals probably sent mail or had potential early-stage work that could have avoided disruption, at least in their views (see “Disruption and Woulda, Coulda, Shoulda” and the case of BlackBerry). One of the key roles of a company is to make choices, and choosing a riskier course of change versus defending the current approach is the very choice that hamstrings an organization.

There are dozens of examples of disruptive technologies and products. And the reactions (or inactions) of incumbents are legendary. One example that illustrates this point would be the introduction of the “PC as a server.” This has all of the hallmarks of disruption. The first customers to begin to use PCs as servers — for application workloads such as file sharing, or early client/server development — ran into incredible challenges relative to the mini/mainframe computing model. While new PCs were far more flexible and less expensive, they lacked the reliability, horsepower and tooling to supplant existing models. Those in the mini/mainframe world could remain comfortable observing the lack of those traits, almost dismissing PC servers as not “real servers,” while they continued on their path further distancing themselves from the capabilities of PC servers, refining their products and businesses for a growing base of customers. PCs as servers were simply toys.

At the same time, PC servers began to evolve and demonstrate richer models for application development (rich client front-ends), lower cost and scalable databases, and better economics for new application development. With the rapidly increasing demand for computing solutions to business problems, this wave of PC servers fit the bill. Soon the number of new applications written in this new way began to dwarf development on “real servers,” and the once-important servers became legacy relative to PC-based servers for those making the bet or shift. PC servers would soon begin to transition from disruption to broad adoption, but first the value proposition needed to be completed.

Stage Two: Rapid Linear Evolution

Once an innovative product or technology begins rapid adoption, the focus becomes “filling out” the product. In this stage, the product creators are still disruptors, innovating along the trajectory they set for themselves, with a strong focus on early-adopter customers, themselves disruptors. The disruptors are following their vision. The incumbents continue along their existing and successful trajectory, unknowingly sealing their fate.

This stage is critically important to understand from a product-development perspective. As a disruptive force, new products bring to the table a new way of looking at things — a counterculture, a revolution, an insurgency. The heroic efforts to bring a product or service to market (and the associated business models) leave a lot of room to improve, often referred to as “low-hanging fruit.” The path from where one is today to the next six, 12, 18 months is well understood. You draw from the cutting-room floor of ideas that got you to where you are. Moving forward might even mean fixing or redoing some of the earlier decisions made with less information, or out of urgency.

Generally, your business approach follows the initial plan, as well, and has analogous elements of insurgency. Innovation proceeds rapidly at this point. Your focus is on the adopters of your product — your fellow disruptors (disrupting within their context). You are adding features critical to completing the scenario you set out to develop.

To the incumbent leaders, you look like you are digging in your heels for a losing battle. In their view, your vector points in the wrong direction, and you’re throwing good money after bad. This only further reinforces the view of disruptors that they are heading in the right direction. The previous generals are fighting the last war, and the disruptors have opened up a new front. And yet, the traction in the disruptor camp becomes undeniable. The incumbent begins to mount a response. That response is somewhere between dismissive and negative, and focuses on comparing the products by using the existing criteria established by the incumbent. The net effect of this effort is to validate the insurgency.

Stage Three: Appealing Convergence

As the market redefinition proceeds, the category of a new product starts to undergo a subtle redefinition. No longer is it enough to do new things well; the market begins to call for the replacement of the incumbent technology with the new technology. In this stage, the entire market begins to “wake up” to the capabilities of the new product.

As the disruptive product rapidly evolves, the initial vision becomes relatively complete (realizing that nothing is ever finished, but the scenarios overall start to fill in). The treadmill of rapidly evolving features begins to feel somewhat incremental, and relatively known to the team. The business starts to feel saturated. Overall, the early adopters are now a maturing group, and a sense of stability develops.

Looking broadly at the landscape, it is clear that the next battleground is to go after the incumbent customers who have not made the move. In other words, once you’ve conquered the greenfield you created, you check your rearview mirror and look to pick up the broad base of customers who did not see your product as market-ready or scenario-complete. To accomplish this, you look differently at your own product and see what is missing relative to the competition you just left in the dust. You begin to realize that all those things your competitor had that you don’t may not be such bad ideas after all. Maybe those folks you disrupted knew something, and had some insights that your market category could benefit from putting to work.

In looking at many disruptive technologies and disruptors, the pattern of looking back to move forward is typical. One can almost think of this as a natural maturing; you promise never to do some of the things your parents did, until one day you find yourself thinking, “Oh my, I’ve become my parents.” The reason that products are destined to converge along these lines is simply practical engineering. Even when technologies are disrupted, the older technologies evolved for a reason, and those reasons are often still valid. The disruptors have the advantage of looking at those problems and solving them in their newly defined context, which can often lead to improved solutions (easier to deploy, cheaper, etc.). At the same time, there is also a risk of second-system syndrome that must be carefully monitored. It is not uncommon for the renegade disruptors, fresh off the success they have been seeing, to come to believe in broader theories of unification or architecture and simply try to get too much done, or to lose the elegance of the newly defined solution.

Stage Four: Complete Reimagination

The last stage of technology disruption is when a category or technology is reimagined from the ground up. While one can consider this just another disruption, it is a unique stage in this taxonomy because of the responses from both the legacy incumbent and the disruptor.

Reimagining a technology or product is a return to first principles. It is about looking at the underlying assumptions and essentially rethinking all of them at once. What does it mean to capture an image, provide transportation, share computation, search the Web, and more? The reimagined technology often has little resemblance to the legacy, and often has the appearance of even making the previous disruptive technology appear to be legacy. The melding of old and new into a completely different solution often creates whole new categories of products and services, built upon a base of technology that appears completely different.

To those who have been through the first disruption, their knowledge or reference frame seems dated. There is also a feeling of being unable to keep up. The words are known, but the specifics seem like rocket science. Where there was comfort in the convergence of ideas, the newly reimagined world seems like a whole new generation, and so much more than a competitor.

In software, one way to think of this is generational. The disruptors studied the incumbents in university, and then went on to use that knowledge to build a better mousetrap. Those in university while the new mousetrap was being built benefited from learning from both a legacy and new perspective, thus seeing again how to disrupt. It is often this fortuitous timing that defines generations in technologies.

Reimagining is important because the breakthroughs so clearly subsume all that came before. What characterizes a reimagination most is that it renders the criteria used to evaluate the previous products irrelevant. Often there are orders of magnitude difference in cost, performance, reliability, service and features. Things are just wildly better. That’s why some have referred to this as the innovator’s curse. There’s no time to bask in the glory of the previous success, as there’s a disruptor following right on your heels.

A recent example is cloud computing. Cloud computing is a reimagination of both the mini/mainframe and PC-server models. By some accounts, it is a hybrid of those two, taking the commodity hardware of the PC world and the thin client/data center view of the mainframe world. One would really have to squint in order to claim it is just that, however, as the fundamental innovation in cloud computing delivers entirely new scale, reliability and flexibility, at a cost that upends both of those models. Literally every assumption of mainframe and client/server computing was revisited, intentionally or not, in building modern cloud systems.

For the previous incumbent, it is too late. There’s no way to sprinkle some reimagination on your product. The logical path, and the one most frequently taken, is to “mine the installed base,” and work hard to keep those customers happy and minimize the mass defections from your product. The question then becomes one of building an entirely new product that meets these new criteria, but from within the existing enterprise. The number of times this has been successfully accomplished is diminishingly small, but there will always be exceptions to the rule.

For the previous disruptor and new leader, there is a decision point that is almost unexpected. One might consider the drastic — simply learn from what you previously did, and essentially abandon your work and start over using what you learned. Or you could be more incremental, and get straight to the convergence stage with the latest technologies. It feels like the ground is moving beneath you. Can you converge rapidly, perhaps revisiting more assumptions, and show more flexibility to abandon some things while doing new things? Will your product architecture and technologies sustain this type of rethinking? Your customer base is relatively new, and was just feeling pretty good about winning, so the pressure to keep winning will be high. Will you do more than try to recast your work in this new light?

The relentless march of technology change comes faster than you think.

So What Can You Do?

Some sincerely believe that products, and thus companies, disrupt and then are doomed to be disrupted. Like a Starship captain when the shields are down, you simply tell all hands to brace themselves, and then see what’s left after the attack. Business and product development, however, are social sciences. There are no laws of nature, and nothing is certain to happen. There are patterns, which can be helpful signposts, or can blind you to potential actions. This is what makes the technology industry, and the changes technology bring to other industries, so exciting and interesting.

The following table summarizes the stages of disruption and the typical actions and reactions at each stage:

Stage: Disruption of Incumbent
Disruptor: Introduces new product with a distinct point of view, knowing it does not solve all the needs of the entire existing market, but advances the state of the art in technology and/or business.
Incumbent: New product or service is not relevant to existing customers or market, a.k.a. “deny.”

Stage: Rapid Linear Evolution
Disruptor: Proceeds to rapidly add features/capabilities, filling out the value proposition after initial traction with select early adopters.
Incumbent: Begins to compare full-featured product to new product and show deficiencies, a.k.a. “validate.”

Stage: Appealing Convergence
Disruptor: Sees opportunity to acquire broader customer base by appealing to slow movers. Sees limitations of own new product and learns from what was done in the past, reflected in a new way. Potential risk is being leapfrogged by even newer technologies and business models as focus turns to “installed base” of incumbent.
Incumbent: Considers cramming some element of disruptive features into existing product line to sufficiently demonstrate attention to future trends while minimizing interruption of existing customers, a.k.a. “compete.” Potential risk is failing to see the true value or capabilities of disruptive products relative to the limitations of existing products.

Stage: Complete Reimagination
Disruptor: Approaches a decision point because new entrants to the market can benefit from all your product has demonstrated, without embracing the legacy customers as done previously. Embrace legacy market more, or keep pushing forward?
Incumbent: Arguably too late to respond, and begins to define the new product as part of a new market, and existing product as part of a larger, existing market, a.k.a. “retreat.”

Considering these stages and reactions, there are really two key decision points to be tuned in to:

When you’re the incumbent, your key decision is to choose carefully what you view as disruptive or not. It is to the benefit of every competitor to claim they are disrupting your products and business. Creating this sort of chaos is something that causes untold consternation in a large organization. Unfortunately, there are no magic answers for the incumbent.

The business team needs to develop a keen understanding of the dynamics of competitive offerings, and know when a new model can offer more to customers and partners in a different way. More importantly, it must avoid an excess attachment to today’s measures of success.

The technology and product team needs to maintain a clinical detachment from the existing body of work to evaluate if something new is better, while also avoiding the more common technology trap of being attracted to the next shiny object.

When you’re the disruptor, your key decision point is really when and if to embrace convergence. Once you make the choices — in terms of business model or product offering — to embrace the point of view of the incumbent, you stand to gain from the bridge to the existing base of customers.

Alternatively, you create the potential to lose big to the next disruptor who takes the best of what you offer and leapfrogs the convergence stage with a truly reimagined product. By bridging to the legacy, you also run the risk of focusing your business and product plans on the customers least likely to keep pushing you forward, or those least likely to be aggressive and growing organizations. You run the risk of looking backward more than forward.

For everyone, timing is everything. We often look at disruption in hindsight, and choose disruptive moments based on product availability (or lack thereof). In practice, products require time to conceive, iterate and execute, and different companies will work on these at different paces. Apple famously talked about the 10-year project that was the iPhone, with many gaps, and while the iPad appears a quick successor, it, too, was part of that odyssey. Sometimes a new product appears to be a response to a new entry, but in reality it was under development for perhaps the same amount of time as another entry.

There are many examples of this path to disruption in technology businesses. While many seem “classic” today, the players at the time more often than not exhibited the actions and reactions described here.

As a social science, business does not lend itself to provable operational rules. As appealing as disruption theory might be, the context and actions of many parties create unique circumstances each and every time. There is no guarantee that new technologies and products will disrupt incumbents, just as there is no certainty that existing companies must be disrupted. Instead, product leaders look to patterns, and model their choices in an effort to create a new path.

Stages of Disruption In Practice

  • Digital imaging. Mobile imaging reimagined a category that disrupted film (always available, low-quality versus film), while converging on the historic form factors and usage of film cameras. In parallel, there is a wave of reimagination of digital imaging taking place that fundamentally changes imaging using light field technology, setting the stage for a potential leapfrog scenario.

  • Retail purchasing. Web retailers disrupted physical retailers with selection, convenience, community, etc., ultimately converging on many elements of traditional retailers (physical retail presence, logistics, house brands).
  • Travel booking. Online travel booking is disrupting travel agents, then converging on historic models of aggregation and package deals.
  • Portable music. From the Sony Walkman as a disruptor to the iPod and MP3 players, to mobile phones subsuming this functionality, and now to streaming playback, portable music has seen the disruptors get disrupted and incumbents seemingly stopped in their tracks over several generations. The change in scenarios enabled by changing technology infrastructure (increased storage, increased bandwidth, mobile bandwidth and more) have made this a very dynamic space.
  • Urban transport. Ride sharing, car sharing, and more are disrupting traditional ownership of vehicles and taxi services, and are in the process of converging models (such as Uber adding UberX).
  • Productivity. Tools such as Quip, Box, Haiku Deck, Lucidchart, and more are being pulled by customers beyond early adopters to be compatible with existing tools and usage patterns. In practice, these tools are currently iterating very rapidly along their self-defined disruptive path. Some might suggest that previous disruptors in the space (OpenOffice, Zoho, perhaps even Google Docs) chose to converge with the existing market too soon, as a strategic misstep.
  • Movie viewing. Netflix and others are disrupting movie viewing as part of cord-cutting, with low-priced, all-you-can-consume on-demand plans and their own original content. Previous disruptors such as HBO are working to provide streaming and similar services, while constrained by existing business models and relationships.
  • Messaging/communications apps. SMS, which many consider disruptive to 2000-era IM, is being challenged by much richer interactions that disrupt the business models of carrier SMS and feature sets afforded by SMS.
  • Network infrastructure. Software-defined networking and cloud computing are reimagining the category of networking infrastructure, with incumbent players attempting to benefit from these shifts in the needs of customers. Incumbents at different levels are looking to adopt the model, while some providers see it as fundamentally questioning their assumptions.

— Steven Sinofsky (@stevesi). This story originally appeared on Recode.


January 7, 2014 at 9:00 pm

Avoiding mobile app bloat

Back in the pre-web days, acquiring software was difficult and expensive.  Learning a given program (app) was difficult and time consuming.  Within this context there was an amazing amount of innovation.  At least in part, these constraints also contributed to the oft-cited (though not well-defined) concept of bloatware.  Even though these constraints do not seem to hold on today’s modern mobile platforms, we are starting to see a rise in app bloat.  It is early, and with enough self-policing and through the power of reviews/ratings we might collectively avoid bloatware on our mobile devices.

Product managers have a big responsibility to develop feature lists/themes that make products easier to use, more functional, and better overall–with very finite resources.  Focusing these efforts in ways that deliberately deprioritize what could lead to bloatware is an opportunity to break from past industry cycles and do a better job for modern mobile platforms.  There are many forms of bloat across user experience, performance, resource usage and more.  This post looks at some forms of UX bloat.

This post was motivated by a conversation with a developer considering building an “all in one” app to manage many aspects of the system and files.  This interesting post by Benedict Evans, http://ben-evans.com/benedictevans/2013/9/21/atomisation-and-bundling about unbundling capability and Chris Dixon’s thoughtful post on “the internet is for snacking” http://cdixon.org/2013/09/14/the-internet-is-for-snacking/ serve as excellent motivators.

History

The first apps people used on PCs tended to be anchor apps–that is a given individual would use one app all day, every day.  This might have been a word processor or a spreadsheet, commonly.  These apps were fairly significant investments to acquire and to gain proficiency.

There was plenty of competition in these anchor apps.  This resulted in an explosion in features as apps competed for category leadership.  Software Digest used to evaluate all the entries in a category with lists of hundreds of features in a checklist format, for example.  This is all innovation goodness modulo whether any given individual valued any given feature. By and large, the ability for a single (difficult to acquire and gain proficiency) product to do so many things for so many people was what determined the top products in any given category.  

Two other forms of innovation would also take place as a direct result of this anchor status and need to continue to maintain such status.

First, the fact that people would be in an app all day created an incentive for an ISV to “pull into” the app any supporting functionality so the person would not need to leave the app (and enter the wild world of the OS or another app that might be completely different).  This led to an expansion of functionality like file management, for example.  This situation also led to a broad amount of duplication of OS capabilities from data access to security and even managing external devices such as printers or storage.

As you can imagine, over time the amount of duplication was significant, and the divergence of mechanisms to perform common tasks across different apps and the OS itself became obvious and troublesome.  As people used more and more programs, this began to strain the overall system and experience in terms of resources and cognitive load.

Second, because software was so hard to use in the early days of these new apps and paradigms, there was a great deal of innovation in user experience, evolving from command line to keyboard shortcuts to graphical interface, and then within the graphical interface from menus to toolbars to palettes to context menus and more.  Even within one phase such as early GUI there were many styles of controls and affordances.  At each innovation junction the new, presumably easier mechanism was added to the overall user experience.

At an extreme level this just created complete redundancy in various mechanisms. When toolbars were introduced there was a raging debate over whether a toolbar button should always be redundant with a menu command or never be redundant with a menu command.  Similarly, the same debate held for context menus (also called shortcut menus, which tells you where that landed).  Note a raging debate means that well-meaning people had opposing viewpoints that each asserted were completely true, each unable to definitively prove any point of view. I recall some of the earliest instrumented studies (special versions of apps that packaged up telemetry that could later be downloaded from a PC enlisted in the study) that showed before/after the addition of redundant toolbar buttons, keyboard shortcuts, and shortcut menus.  Each time a new affordance was added the existing usage patterns were split again–in other words every new way to access a command was used frequently by some set of people.  This provided a great deal of validation for redundancy as a feature.  It should be noted that the whole system surrounding a release of a new mechanism further validated the redundant approach–reviews, marketing, newsgroups, enthusiasts, as well as telemetry showed ample support for the added ways of doing tasks.
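The before/after analysis those instrumented studies performed can be pictured as a simple tally of command invocations grouped by entry point. This is a minimal sketch, not the actual telemetry pipeline; the event format and command names here are hypothetical:

```python
from collections import Counter

# Hypothetical telemetry events as an instrumented build might record
# them: (command, entry_point) pairs, one per invocation.
events = [
    ("copy", "menu"), ("copy", "toolbar"), ("copy", "shortcut_key"),
    ("copy", "toolbar"), ("copy", "context_menu"), ("paste", "menu"),
    ("paste", "shortcut_key"), ("paste", "toolbar"),
]

# Tally how each command's usage splits across its redundant entry points.
usage = Counter(events)
for (command, entry_point), count in sorted(usage.items()):
    print(f"{command:6s} via {entry_point:12s}: {count}")
```

A report like this is how a team would see that every new affordance picks up real usage rather than cannibalizing one existing path, which is exactly the validation for redundancy described above.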

As you can imagine, over time the UX became arguably bloated and decidedly redundant.  Ironically, for complex apps this made it even more difficult to add new features, since each brand new feature needed to have several entry points, and thus toolbars, palettes, menus, keyboard shortcuts, and more were rather overloaded.  Command location became an issue.  The development of the Office ribbon (see JensenH’s blog for tons of great history – http://blogs.msdn.com/b/jensenh/) started from the principle of flattening the command hierarchy and removing redundant access to commands in order to solve this real-estate problem.

By the time modern mobile apps came on the scene it was starting to look like we would have a new world of much simpler and more streamlined tools.  

Mobile apps and the potential for bloat

Mobile app platforms would seem to have the foundation upon which to prevent bloat from taking place, considering the two drivers of bloat previously discussed. Certainly one could argue that the platforms inherently intend for apps to be focused in purpose.

First, apps are easy to get and not so expensive.  If you have an app that takes photos and you want to do some photo editing, there are a plethora of available photo editing apps.  If you want to later tag or manage photos, there are specialized apps that do just that.  While there are many choices, the web provides a great way to search for and locate apps and the reviews and ratings provide a ton more guidance than ever before to help you make a choice.  The relative safety, security, and isolation of apps reduces the risk of trial.

Second, because the new mobile platforms operate at a higher level of abstraction, the time to learn an app is substantially reduced.  Where classic apps might feel like you’re debugging a document, new apps built on higher-level concepts get more done with fewer gestures, sometimes in a more focused domain (compare old-style photo editing to Instagram filters, for example).  Again, the safety afforded by the platforms makes it possible to try things out and undo operations (or even whole apps) as well.  State of the art software engineering means even destructive operations almost universally provide for undo/redo semantics (something missing from the early days).

Given these two realities, one might hope that modern mobile apps are on a path to stay streamlined.

While there is a ton of great work and the modern focus on design and simplicity abounds in many apps, it is also fair to say that common design patterns are arising that represent the seeds of bloat.  Yet the platforms provide capabilities today that can be used effectively by ISVs to put people in control of their apps to avoid redundancy.  Are these being used enough?  That isn’t clear.

One example that comes to mind is the “share to” verb that is commonly used.  Many apps can be both the source and the sink of sharing.

For example, a mail program might be able to attach a photo from within the program.  Or the photo viewer might be able to mail a photo.

It seems routine then that there should be an “attach” verb within the mail program along with the share verb from the photo viewer.  On most platforms this is the case, at least with third-party mail programs.  This seems fast, convenient, and efficient.

As you play this out over time, the mail program starts to need more than attaching a photo, but potentially whole lists of data types and objects.  As we move away from files, or as mobile platforms emphasize security, the ability for one app to enumerate data created by another becomes challenging, and thus the OS and apps need to implement namespaces or brokers.

The other side of this, “share to,” becomes an exceedingly long list of potential share targets.  It becomes another place for ISVs to work to gain visibility.  Some platforms allow ISVs to install multiple share targets per app, so apps show up more than once.  On Android there is even a quite popular third-party app that lets you manage this list of share targets.  Windows provides this natively, and apps can install only a single share-to target, to avoid this “spamming”.
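To make this concrete, consider how an app opts into the platform’s native sharing rather than building its own redundant entry points.  A minimal sketch of an Android manifest registration (the activity name here is hypothetical):

```xml
<!-- Hypothetical activity that receives photos via the system share sheet -->
<activity android:name=".ShareReceiverActivity">
  <intent-filter>
    <!-- Opt in to the platform's native "share to" verb -->
    <action android:name="android.intent.action.SEND" />
    <category android:name="android.intent.category.DEFAULT" />
    <!-- Restrict to the data types the app can actually handle -->
    <data android:mimeType="image/*" />
  </intent-filter>
</activity>
```

Registering once here, rather than scattering bespoke “attach” and “send” commands throughout an app, keeps the app on the platform’s native sharing paradigm; registering several targets just to gain visibility is exactly the “spamming” described above.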

As an app creator, the question is really how critical it is to provide circular access to your data types.  Can you allow the system to provide the right level of access and let people use the native paradigms for sharing?  This isn’t always possible, and platform limitations (and controls) can prevent it, so this is also a call to OS vendors to think through this “cycle” more completely.

In the meantime, limited screen real estate is being dedicated to commands redundant with the OS, and the OS itself might be overloaded with capabilities available elsewhere.

A second example comes from cross-platform app development.  This isn’t new and many early GUI apps had this same goal (cross-platform or cross-OS version).  When you need to be cross-platform you tend to create your own mechanisms for things that might be available in the platform.  This leads to inconsistencies or redundancies, which in turn make it difficult for people to use the models inherent in the platform.

In other words, your single-app view centered around making your app easier by putting everything in the context of your app drives the feature “weight” of your app up and the complexity of the overall system up as well.  This creates a situation where everyone is acting in the interest of their app, but in a world of people using many different apps the overall experience degrades.

Whether we’re talking about user/password management, notifications, sounds, permissions/rights, or more, the question for you as an ISV is whether your convenience, your ease of access, or your desire to do things once and work across platforms is making things locally easier at the expense of the overall platform.

Considering innovation

Every development team deals with finite resources and a business need to get the most bang for the buck.  The most critical need for any app is to provide innovative new features in the specific domain of the app: if you’re a photo editing app, then providing more editing capabilities seems more innovative than also being able to grab a picture from the camera directly (this is sort of a canonical example of redundancy; few folks start in the editor when taking a picture, yet almost all editors enable this because it is not a lot of extra code).

Thinking hard about what you’re using your finite resources to deliver is the job of product management.  Prioritizing domain additions over redundancy and bloat can really help to focus.  One might also look to reviewers (in the app stores or outside reviewers) to treat redundancy not as always more convenient but as a potential challenge down the road.

Ironically, just as with the GUI era it is enthusiasts who can often drive features of apps.  Enthusiasts love shortcuts and connections along with pulling functionality into their favorite apps.  You can see this in reviews and comments on apps.  Enthusiasts also tend to have the skills and global view of the platforms to navigate the redundancy without getting lost.  So this could also be a case of making sure not to listen too closely to the most engaged…and that’s always tricky.

Designers and product managers looking to measure the innovation across the set of features chosen for a release might consider a few things that don’t necessarily count as innovation for apps on modern mobile platforms:

  • Adding more access points to previously existing commands.  Commands should have one access point, especially on small-screen devices.  Multiple access points mean that over time you’ll be creating a screen real-estate challenge, and at some point some people will want everything everywhere, which won’t be possible.
  • Making it possible to invoke a command from both inside-out and outside-in.  When it comes to connecting apps with each other or apps to data, consider the most fluid and normal path and optimize for that: is it going from data to app or from app to data?  Is your app usually the source or the sink?  It is almost never the case that your app or app data is always the starting point and the finishing point for an operation.  Filling out this whole matrix leads to a level of bloat and redundancy across the system and a lack of predictability for customers.
  • Duplicating functionality that exists elsewhere for convenience.  It is tempting to pull commonly changed settings or verbs into your app as a point of efficiency.  The challenge with this is where does it end?  What do you do if something is no longer as common as it once was, or if the OS dramatically changes the way some functionality is accessed?  Whenever possible, rely on the native platform mechanisms, even when trying to be cross-platform.
  • Thinking your app is an anchor so it needs to provide access to everything from within your app.  Everyone building an app wants their app to be the one used all the time. No one builds an app thinking they are an edge case.  This drives apps to have more and more capability that might not be central to the raison d’être for your app.  In the modern mobile world, small tools dominate and the platforms are optimized for swiftly moving between tools.  Think about how to build your app to be part of an overall orchestra of apps.  You might even consider breaking up your app if the tasks themselves are discrete rather than overloading one app.
  • Reminding yourself it is your app, but the person’s device.  “Taking over” the device as though your app is the only thing people will use isn’t being fair to people.  Just because the OS might let you add entry points or gain visibility does not mean you should take advantage of every opportunity.

These all might be interesting features, and many might be low-cost ways to lengthen the change log.  The question for product managers is whether this is the best use of resources today and whether it builds an experience foundation for your app that scales down the road.

Where do you want your innovation energy to go–your domain or potential bloat?

–Steven Sinofsky

Written by Steven Sinofsky

September 24, 2013 at 6:00 pm

Posted in posts


Continuous Productivity: New tools and a new way of working for a new era


What happens when the tools and technologies we use every day become mainstream parts of the business world?  What happens when we stop leading separate “consumer” and “professional” lives when it comes to technology stacks?  The result is a dramatic change in the products we use at work and as a result an upending of the canon of management practices that define how work is done.

This paper says business must embrace the consumer world and see it not as different, less functional, or less enterprise-worthy, but as the new path forward for how people will use technology platforms, how businesses will organize and execute work, and how the roles of software and hardware will evolve in business. Our industry speaks volumes of the consumerization of IT, but maybe that is not going far enough given the incredible pace of innovation and depth of usage of the consumer software world.  New tools are appearing that radically alter the traditional definitions of productivity and work.  Businesses failing to embrace these changes will find their employees simply working around IT at levels we have not seen even during the earliest days of the PC.   Too many enterprises are either flat-out resisting these shifts or hoping for a “transition”—disruption is taking place, not only to every business, but within every business.

Paradigm shift

Continuous productivity is an era that fosters a seamless integration between consumer and business platforms.  Today, tools and platforms used broadly for our non-work activities are often used for work, but under the radar.  The cloud-powered smartphone and tablet, as productivity tools, are transforming the world around us, along with the implied changes in how we work to be mobile and more social.  We are in a new era, a paradigm shift, where there is evolutionary discontinuity, a step-function break from the past.  This constantly connected, social, and mobile generational shift is ushering in a time period on par with industrial production or the information society of the 20th century.  Together our industry is shaping a new way to learn, work, and live with the power of software and mobile computing—an era of continuous productivity.

Continuous productivity manifests itself as an environment where the evolving tools and culture make it possible to innovate more and faster than ever, with significantly improved execution. Continuous productivity shifts our efforts from the start/stop world of episodic work and work products to one that builds on the technologies that start to answer what happens when:

  • A generation of new employees has access to the collective knowledge of an entire profession and experts are easy to find and connect with.
  • Collaboration takes place across organization and company boundaries with everyone connected by a social fiber that rises above the boundaries of institutions.
  • Data, knowledge, analysis, and opinion are equally available to every member of a team in formats that are digital, sharable, and structured.
  • People have the ability to time slice, context switch, and proactively deal with situations as they arise, shifting from a world of start/stop productivity and decision-making to one that is continuous.

Today our tools force us to hurry up and wait, then react at all hours to that email or notification of available data.  Continuous productivity provides us a chance at a more balanced view of time management because we operate in a rhythm with tools to support that rhythm.  Rather than feeling like you’re on call all the time waiting for progress or waiting on some person or event, you can simply be more effective as an individual, team, and organization because there are new tools and platforms that enable a new level of sanity.

Some might say this is predicting the present and that the world has already made this shift.  In reality, the vast majority of organizations are facing challenges or even struggling right now with how the changes in the technology landscape will impact their efforts.  What is going on is nothing short of a broad disruption—even winning organizations face an innovator’s dilemma in how to develop new products and services, organize their efforts, and communicate with customers, partners, and even within their own organizations.  This disruption is driven by technology, and is not just about the products a company makes or services offered, but also about the very nature of companies.

Today’s socialplace

The starting point for this revolution in the workplace is the socialplace we all experience each and every day.

We carry out our non-work (digital) lives on our mobile devices.  We use global services like Facebook, Twitter, Gmail, and others to communicate.  In many places in the world, local services such as Weibo, MixIt, mail.ru, and dozens of others are used routinely by well over a billion people collectively.  Entertainment services from YouTube and Netflix to Spotify and Pandora dominate non-TV entertainment and the Internet itself.  Relatively new services such as Pinterest or Instagram enter the scene and are used deeply by tens of millions in a relatively short time.

While almost all of these services are available on traditional laptop and desktop PCs, the incredible growth in usage from smartphones and tablets has come to represent not just the leading edge of the scenario, but the expected norm.  Product design is done for these experiences first, if not exclusively. Most would say that designing for a modern OS first or exclusively is the expected way to start on a new software experience.  The browser experience (on a small screen or desktop device) is the backup to a richer, more integrated, more fluid app experience.

In short, the socialplace we are all familiar with is part of the fabric of life in much of the world and only growing in importance. The generation growing up today will of course only know this world and what follows. Around the world, the economies undergoing their first information revolutions will do so with these technologies as the baseline.

Historic workplace

Briefly, it is worth reflecting on and broadly characterizing some of the history of the workplace to help to place the dramatic changes into historic context.

Mechanized productivity

The industrial revolution that defined the first half of the 20th century marked the start of modern business, typified by high-volume, large-scale organizations.  Mechanization created a culture of business derived from the capabilities and needs of the time. The essence of mechanization was the factory which focused on ever-improving and repeatable output.  Factories were owned by those infusing capital into the system and the culture of owner, management, and labor grew out of this reality.  Management itself was very much about hierarchy. There was a clear separation between labor and management primarily focused on owners/ownership.

The information available to management was limited.  Supply chains and even assembly lines themselves were operated with little telemetry or understanding of the flow of raw materials through to sales of products. Even great companies ultimately fell because they lacked the ability to gather insights across this full spectrum of work.

Knowledge productivity

The problems created by the success of mechanized production were met with a solution—the introduction of the computer and the start of the information revolution.  The mid-20th century would kick off a revolution in business, one marked by global and connected organizations.  Knowledge created a new culture of business derived from the information gathering and analysis capabilities of first the mainframe and then the PC.

The essence of knowledge was the people-centric office which focused on ever-improving analysis and decision-making to allocate capital, develop products and services, and coordinate the work across the globe.  The modern organization model of a board of directors, executives, middle management, and employees grew out of these new capabilities.  Management of these knowledge-centric organizations happened through an ever-increasing network of middle-managers.  The definition of work changed and most employees were not directly involved in making things, but in analyzing, coordinating, or servicing the products and services a company delivered.

The information available to management grew exponentially.  Middle-management grew to spend their time researching, tabulating, reporting, and reconciling the information sources available.  Information spanned from quantitative to qualitative and the successful leaders were expert or well versed in not just navigating or validating information, but in using it to effectively influence the organization as a whole.  Knowledge is power in this environment.  Management took over the role of resource allocation from owners and focused on decision-making as the primary effort, using knowledge and the skills of middle management to inform those choices.

A symbol of knowledge productivity might be the meeting.  Meetings came to dominate the culture of organizations:  meetings to decide what to meet about, meetings to confirm that people were on the same page, meetings to follow up from other meetings, and so on.  Management became very good at justifying meetings and the work that went into preparing, having, and following up from meetings.  Power derived from holding meetings, creating follow-up items, and more.  The work products of meetings—the pre-reading memos, the presentations, the supporting analytics—began to take on epic proportions.  Staff organizations developed that shadowed the whole process.

The essence of these meetings was to execute on a strategy—a multi-year commitment to create value, defend against competition, and to execute.  Much of the headquarters mindset of this era was devoted to strategic analysis and planning.

The very best companies became differentiated by their use of information technologies in now legendary ways such as to manage supply chain or deliver services to customers.  Companies like Wal-Mart pioneered the use of technology to bring lower prices and better inventory management.  Companies like the old MCI developed whole new products based entirely on the ability to write software to provide new ways of offering existing services.

Even with the broad availability of knowledge and information, companies still became trapped in the old ways of doing things, unable to adapt and change.  The role of disruption as a function not just of technology development but as management decision-making showed the intricate relationship between the two. With this era of information technology came the notion of companies too big and too slow to react to changes in the marketplace even with information right there in front of collective eyes.

The impact of software, as we finished the first decade of the 21st century, is more profound than even the most optimistic software people would have predicted.  As the entrepreneur and venture capitalist Marc Andreessen wrote two years ago, “software is eating the world”.  Software is no longer just about the internal workings of business or a way to analyze information and execute more efficiently, but has come to define what products a business develops, offers, and serves.  Software is now the product, from cars to planes to entertainment to banking and more. Every product not only has a major software component but it is also viewed and evaluated through the role of software.  Software is ultimately the product, or at least a substantial part of differentiation, for every product and service.

Today’s workplace: Continuous Productivity

Today’s workplace is as different as the office was from the factory.

Today’s organizations are either themselves mobile or serving customers that are mobile, or likely both.  Mobility is everywhere we look—from apps for consumers to sales people in stores and the cash registers to plane tickets.  With mobility comes an unprecedented degree of freedom and flexibility—freedom from locality, limited information, and the desktop computer.

The knowledge-based organization spent much energy on connecting the dots between qualitative sampling and data sourced on what could be measured.  Much went into trying to get more sources of data and to seek the exact right answer to important management decisions.  Today’s workplace has access to more data than ever before, but along with that came the understanding that just because something came from a computer does not make it right.  Data is telemetry based on usage from all aspects of the system and goes beyond sampling and surveys.  The use of data today replaces algorithms seeking exact answers with heuristics informed by data, guessing the best answer using a moment’s worth of statistical evidence.  Today’s answers change over time as more usage generates more data.  We no longer spend countless hours debating causality because what is happening is right there before our eyes.

We see this all the time in the promotion of goods on commerce sites, the use of keyword search and SEO, even the way that search itself corrects spellings or maps use a vast array of data to narrow a potentially very large set of results from queries.  Technologies like speech or vision have gone from trying to compute the exact answer to using real-time data to provide contextually relevant and even more accurate guesses.
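The shift from computing the exact answer to guessing well from usage data can be sketched with a toy spelling corrector; the frequency counts below are made-up stand-ins for the real telemetry a production system would gather:

```python
from collections import Counter

# Made-up usage "telemetry": how often each word is actually typed by users.
usage = Counter({"search": 120, "result": 80, "query": 60, "quart": 2})

def edits1(word):
    """All strings one simple edit (delete, replace, or insert) from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [l + r[1:] for l, r in splits if r]
    replaces = [l + c + r[1:] for l, r in splits if r for c in letters]
    inserts = [l + c + r for l, r in splits for c in letters]
    return set(deletes + replaces + inserts)

def correct(word):
    """Guess the best answer: the known candidate seen most often in usage."""
    if word in usage:
        return word
    candidates = [w for w in edits1(word) if w in usage] or [word]
    return max(candidates, key=usage.get)

print(correct("querry"))  # prints: query
```

There is no exact rule here; as the usage counts drift with real behavior, the “right” correction drifts with them, which is precisely the heuristic, data-informed style described above.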

The availability of these information sources is moving from a hierarchical access model of the past to a much more collaborative and sharing-first approach.  Every member of an organization should have access to the raw “feeds” that could be material to their role.  Teams become the focus of collaborative work, empowered by the data to inform their decisions.  We see the increasing use of “crowds” and product usage telemetry able to guide improved service and products, based not on qualitative sampling plus “judgment” but on what amounts to a census of real-world usage.

Information technology is at the heart of all of these changes, just as it was in the knowledge era.  The technologies are vastly different.  The mainframe was about centralized information and control.  The PC era empowered people to first take mainframe data and make better use of it and later to create new, but inherently local or workgroup specific information sources.  Today’s cloud-based services serve entire organizations easily and can also span the globe, organizations, and devices.  This is such a fundamental shift in the availability of information that it changes everything in how information is collected, shared, and put to use. It changes everything about the tools used to create, analyze, synthesize, and share information.

Management using yesterday’s techniques can’t seem to keep up with this world.  Organizations are overwhelmed by the power of their customers wielding all this information (such as when social networks create a backlash about an important decision, or when we visit a car dealer armed with local pricing information).  Within organizations, managers are constantly trying to stay ahead of the curve.  The “young” employees seem to know more about what is going on because of Twitter and Facebook or just being constantly connected.  Even information about the company is no longer the sole domain of management, as the press are able to uncover or at least speculate about the workings of a company, and employees see this speculation long before management communicates with them.  Where people used to sit in important meetings and listen to important people guess about information, people now get real data from real sources in real-time while the meeting is taking place or even before.

This symbol of the knowledge era, the meeting, is under pressure because of the inefficiency of a meeting when compared to learning and communicating via the technology tools of today.  Why wait for a meeting when everyone has the information required to move forward available on their smartphones?  Why put all that work into preparing a perfect pitch for a meeting when the data is changing and is a guess anyway, likely to be further informed as the work progresses?  Why slow down when competitors are speeding up?

There’s a new role for management that builds on this new level of information and employees skilled in using it.  Much like those who grew up with the PC “natively” were quick to assume its use in the workplace (some might remember the novelty of managers first answering their own email), those who grow up with the socialplace are using it to do work, much to the chagrin of management.

Management must assume a new type of leadership that is focused on framing the outcome, the characteristics of decisions, and the culture of the organization and much less about specific decision-making or reviewing work.  The role of workplace technology has evolved significantly from theory to practice as a result of these tools. The following table contrasts the way we work between the historic norms and continuous productivity.

Then | Now, Continuous Productivity
Process | Exploration
Hierarchy, top down or middle out | Network, bottom up
Internal committees | Internal and external teams, crowds
Strategy-centric | Execution-centric
Presenting packaged and produced ideas, documents | Sharing ideas and perspectives continuously, service
Data based on snapshots at intervals, viewed statically | Data always real-time, viewed dynamically
Process-centric | Rhythm-centric
Exact answers | Approximation and iteration
More users | More usage

Today’s workplace technology, theory

Modern IT departments, fresh off the wave of PC standardization and broad homogenization of the IT infrastructure, developed the tools and techniques to maintain, nay contain, the overall IT infrastructure.

A significant part of the effort involved managing the devices that access the network, primarily the PC.  Management efforts ran the gamut from logon scripts, drive scanning, anti-virus software, standard (or only) software load, imaging, two-factor authentication and more.  Motivating this has been the longstanding reliability and security problems of the connected laptop—the architecture’s openness so responsible for the rise of the device also created this fragility.  We can see this expressed in two symbols of the challenges faced by IT: the corporate firewall and collaboration.  Both of these technologies offer good theories but somewhat backfire in practice in today’s context.

With the rise of the Internet, the corporate firewall occupied a significant amount of IT effort.  It also came to symbolize the barrier between employees and information resources.  At some extremes, companies would routinely block known “time wasters” such as social networks and free email.  Then over time as the popularity of some services grew, the firewall would be selectively opened up for business purposes.  YouTube and other streaming services are examples of consumer services that transitioned to an approved part of enterprise infrastructure given the value of information available.  While many companies might view Twitter as a time-wasting service, the PR departments routinely use it to track news and customer service might use it to understand problems with products so it too becomes an expected part of infrastructure.  These “cracks” in the notion of enterprise v. consumer software started to appear.

Traditionally the meeting came to symbolize collaboration.  The business meeting which occupied so much of the knowledge era has taken on new proportions with the spread of today’s technologies.  Businesses have gone to great lengths to automate meetings and enhance them with services.  In theory this works well and enables remote work and virtual teams across locations to collaborate.  In practical use, for many users the implementation was burdensome and did not support the wide variety of devices or cross-organization scenarios required.  The merger of meetings with the traditional tools of meetings (slides, analysis, memos) was also cumbersome as sharing these across the spectrum of devices and tools was also awkward. We are all familiar with the first 10 minutes of every meeting now turning into a technology timesink where people get connected in a variety of ways and then sync up with the “old tools” of meetings while they use new tools in the background.

Today’s workplace technology, practice

In practice, the ideal view that IT worked to achieve has been rapidly circumvented by the low-friction, high availability of a wide variety of faster-to-use, easier-to-use, more flexible, and very low-cost tools that address problems in need of solutions.  Even though this is somewhat of a repeat of the introduction of PCs in the early 1990’s, this time around securing or locking down the usage of these services is far more challenging than preventing network access and isolating a device.  The Internet works to make this so, by definition.

Today’s organizations face an onslaught of personally acquired tablets and smartphones that are becoming, or already are, the preferred device for accessing information and communication tools.  As anyone who uses a smartphone knows, accessing your inbox from your phone quickly becomes the preferred way to deal with the bulk of email.  How often do people use their phones to quickly check mail even while in front of their PC (even if the PC is not in standby or powered off)?  How much faster is it to triage email on a phone than it is on your PC?

These personal devices are seen in airports, hotels, and business centers around the world.  The long battery life, fast startup time, maintenance-free (relatively), and of course the wide selection of new apps for a wide array of services make these very attractive.

There is an ongoing debate about “productivity” on tablets.  In nearly all ways this debate was never a debate, but just a matter of time.  While many look at existing scenarios to be replicated on a tablet as a measure of success of tablets at achieving “professional productivity”, another measure is how many professionals use their tablets for their jobs and leave their laptops at home or work.  By that measure, most are quick to admit that tablets (and smartphones) are a smashing success.  The idea that tablets are used only for web browsing and light email seems as quaint as claiming PCs cannot do the work of mainframes—a common refrain in the 1980s.  In practice, far too many laptops have become literally desktops or hometops.

While tools such as AutoCAD, Creative Suite, or enterprise line-of-business software will be required and will require PCs for many years to come, the definition of professional productivity will come to include all the tasks that can be accomplished on smartphones and tablets.  The nature of work is changing, and so the reality of the tools in use is changing as well.

Perhaps the most pervasive services for work use are cloud-based storage products such as DropBox, Hightail (YouSendIt), or Box.  These products are acquired easily by consumers, have straightforward browser-based interfaces and apps on all devices, and most importantly solve real problems required by modern information sharing.  The basic scenario of sharing large files with customers or partners (or even fellow employees) across heterogeneous devices and networks is easily addressed by these tools.  As a result, expensive and elaborate (or often much richer) enterprise infrastructure goes unused for this most basic of business needs—sharing files.  Even the ubiquitous USB memory stick is used to get around the limitations of enterprise storage products, much to the chagrin of IT departments.

Tools beyond those approved for communication are routinely used by employees on their personal devices (except of course in regulated industries).  Tools such as WhatsApp or WeChat have hundreds of millions of users.  A quick look at Facebook or Twitter shows that for many of those actively engaged, the sharing of work information, especially news about products and companies, is a very real effort that goes beyond “the eggs I had for breakfast”, as social networks have sometimes been characterized.  LinkedIn has become the go-to place for salespeople learning about customers and partners and for recruiters seeking to hire (or headhunt), and it is increasingly becoming a primary source of editorial content about work and the workplace.  Leading strategists are routinely read by hundreds of thousands of people on LinkedIn, and their views are shared across the networks employees maintain with their fellow employees.  It has become challenging for management to “compete” with the level and volume of discourse among employees.

The list of devices and services routinely used by workers at every level is endless.  The reality appears to be that for many employees the number of hours of usage in front of approved enterprise apps on managed enterprise devices is on the decline, unless new tablets and phones have been approved.  The consumerization of IT appears to be very real, judging anecdotally by the devices in use on public transportation and in airports and hotels.  Certainly the conversation among people in suits over what to bring on trips is real and rapidly tilting towards “tablet for trips”, if not already there.

The frustration people have with IT to deliver or approve the use of services is readily apparent, just as the frustration IT has with people pushing to use insecure, unapproved, and hard-to-manage tools and devices.  Whenever IT puts in a barrier, it is just a big rock in the information river that is an organization, and information just flows around it.  Forward-looking IT is working diligently to get ahead of this challenge, but the models used to rein in control of PCs and servers on corporate premises will prove of limited utility.

A new approach is needed to deal with this reality.

Transition versus disruption

The biggest risk organizations face is in thinking the transition to a new way of working will be just that, a transition, rather than a disruption.  While individuals within an organization, particularly those in senior management, will seek to smoothly transition from one style of work to another, the bulk of employees will switch quickly. Interns, new hires, or employees looking for an edge see these changes as the new normal, or the only normal they’ve ever experienced.  Our own experience with PCs is proof of how quickly change can take place.

In Only the Paranoid Survive, Andy Grove discussed breaking the news to employees of a new strategy at Intel only to find out that employees had long ago concluded the need for change—much to the surprise of management.  The nature of a disruptive change in management is one in which management believes they are planning a smooth transition to new methods or technologies only to find out employees have already adopted them.

Today’s technology landscape is one undergoing a disruptive change in the enterprise—the shift to cloud-based services, social interaction, and mobility.  There is no smooth transition that will take place.  Businesses that believe people will gradually move from yesterday’s modalities of work to these new ways will be surprised to learn that people are already working in these new ways. Technologists seeking solutions that “combine the best of both worlds” or “technology bridge” solutions will only find themselves comfortably dipping a toe in the water, further solidifying an old approach while competitors race past them.  The nature of disruptive technologies is the relentless all-or-nothing that they impose as they charge forward.

While some might believe that continuing to focus on “the desktop” will enable a smoother transition to mobile (or consumer) while the rough edges are worked out or capabilities catch up to what we already have, this is precisely the innovator’s dilemma – hunkering down and hoping things will not take place as quickly as they seem to be for some.  In fact, to solidify this point of view many will point to a lack of precipitous decline or the mission critical nature in traditional ways of working.  The tail is very long, but innovation and competitive edge will not come from the tail.  Too much focus on the tail will risk being left behind or at the very least distract from where things are rapidly heading. Compatibility with existing systems has significant value, but is unlikely to bring about more competitive offerings, better products, or step-function improvements in execution.

Culture of continuous productivity

The culture of continuous productivity enabled by new tools is literally a rewrite of the past 30 years of management doctrine.  Hierarchy, top-down decision making, strategic plans, static competitors, single-sided markets, and more are almost quaint views in a world flattened by the presence of connectivity, mobility, and data. The impact of continuous productivity can be viewed through the organization, individuals and teams, and the role of data.

The social and mobile aspects of work finally gain the support of digital tools, and with those tools comes the realization of just how much of nearly all work processes is intrinsically social.  The existence and paramount importance of “document creation tools” as the nature of work appears, in hindsight, to have served as a slight detour of our collective focus.  Tools can now work more like we like to work, rather than forcing us to structure our work to suit the tools.  Every new generation of tools comes with promises of improvements, but we’ve already seen how the newest styles of work lead to improvements in our lives outside of work. Where it used to be novel for the person with a PC to use those tools to organize a sports team or school function, now we see the reverse: the tools for the rest of life are being used to improve our work.

This existence proof makes this revolution different.  We have already experienced the dramatic improvements in our social and non-work “processes”.  With the support and adoption of new tools, work will see the same improvements our non-work lives already have.

The cultural changes encouraged or enabled by continuous productivity include:

  • Innovate more and faster.  The bottom line is that by compressing the time between meaningful interactions between members of a team, we will go from problem to solution faster.  Whether solving a problem with an existing product or service or thinking up a new one, the continuous nature of communication speeds up the velocity and quality of work. We all experience the pace at which changes outside work take place compared to the slow pace of change within our workplaces.
  • Flatten hierarchy. The difficulty of broad communication, the formality of digital tools, and restrictions on the flow of information all fit perfectly with a strict hierarchical model of teams.  Managers “knew” more than others.  Information flowed down.  Management informed employees.  Equal access to tools and information, a continuous multi-way dialog, and the ease of bringing together relevant parties regardless of place in the organization flatten the hierarchy.  But more than that, they shine a light on the ineffectiveness and irrelevancy of a hierarchy as a command structure.
  • Improve execution.  Execution improves because members of teams have access to the interactions and data in real-time.  Gone are the days of the “game of telephone”, where information needed to “cascade” through an organization only to be reinterpreted or even filtered by each level of an organization.
  • Respond to changes using telemetry / data.  With the advent of continuous real-world usage telemetry, the debate and dialog move from deciding what the problems to be solved might be to solving the problem.  You don’t spend energy arguing over the problem, but debating the merits of various solutions.
  • Strengthen organization and partnerships.  Organizations that communicate openly and transparently leave much less room for politics and hidden agendas.  The transparency afforded by tools might introduce some rough and tumble in the early days as new “norms” are created, but over time the ability to collaborate will only improve given the shared context and information base everyone works from.
  • Focus on the destination, not the journey.  The real-time sharing of information forces organizations to operate in real-time. Problems are in the here and now and demand solutions in the present. The benefit of this “pressure” is that a focus on the internal systems, the steps along the way, or intermediate results is, out of necessity, de-emphasized.

Organization culture change

Continuously productive organizations look and feel different from traditional organizations. As a comparison, consider how different a reunion (college, family, etc.) is in the era of Facebook usage. When everyone gets together there is so much more that is known—the reunion starts from shared context and “intimacy”.  Organizations should be just as effective, no matter how big or how geographically dispersed.

Effective organizations were previously defined by rhythms of weekly, monthly, and quarterly updates.  These “episodic” connection points had high production values (and costs) and, ironically, relatively low retention and usage.  Management liked this approach because it placed a high value on, and required, active management as distinct from the work itself.  Tools were designed to run these meetings or email blasts, but over time these were far too often over-produced and tended to be used more for backward-looking pseudo-accountability.

Looking ahead, continuously productive organizations will be characterized by the following:

  • Execution-centric focus.  Rather than indexing on the process of getting work done, the focus will shift dramatically to execution. The management doctrine of the late 20th century was about strategy.  For decades we all knew that strategy took a short time to craft, but in practice the strategy process almost took on a life of its own. This often led to an ever-widening gap between strategy and execution, with execution being left to those of less seniority.  When everyone has the ability to know what can be known (which isn’t everything) and to know what needs to be done, execution reigns supreme.  The opportunity to improve or invent will be everywhere, and even with finite resources available, the biggest failure of an organization will be a failure to act.
  • Management framing context with teams deciding. Because information required discovery and flowed (deliberately) inefficiently, management tasked itself with deciding “things”. The entire process of meetings degenerated into a ritualized process of informing management so it could decide among options, while outside the meeting “everyone” always seemed to know what to do. The new role of management is to provide decision-making frameworks, not decisions.  Decisions need to be made where there is the most information. Framing the problem to be solved out of the myriad of problems, and communicating that efficiently, is the new role of management.
  • Outside is your friend.  Previously the prevailing view was that inside companies there was more information than there was outside and often the outside was viewed as being poorly informed or incomplete. The debate over just how much wisdom resides in the crowd will continue and certainly what distinguishes companies with competitive products will be just how they navigate the crowd and simultaneously serve both articulated and unarticulated needs.  For certain, the idea that the outside is an asset to the creation of value, not just the destination of value, is enabled by the tools and continuous flow of information.
  • Employees see management participate and learn; everyone has the tools of management.  It took practically 10 years from the introduction of the PC until management embraced it as a tool for everyday use.  The revolution of social tools is totally different because today management already uses social tools outside of work. Using Twitter for work is little different from using Facebook for family.  Employees expect management to participate directly and personally, whether the tool is a public cloud service or a private/controlled service. The idea of having an assistant participate on behalf of a manager with a social tool is as archaic as printing out email and typing in handwritten replies. Management no longer has separate tools or a different (more complete) set of books for the business; rather, information about projects and teams becomes readily accessible.
  • Individuals own devices, organizations develop and manage IP. PCs were first acquired by individual tech enthusiasts or leading-edge managers and only later by organizations.  Over time PCs became physical assets of organizations.  As organizations focused more on locking down and managing those assets, and as individuals more broadly had their own PCs, there was a decided shift to being able to just “use a computer” when needed.  The ubiquity of mobile devices, almost from the arrival of smartphones and certainly of tablets, has placed these devices squarely in the hands of individuals. The tablet is mine. And because it is so convenient for the rest of my life and I value doing a good job at work, I’m more than happy to do work on it “for free”.  In exchange, organizations are rapidly moving to tools and processes that more clearly identify the work products as organization IP, not the devices.  Cloud-based services become the repositories of IP, and devices access that IP through managed credentials.

Individuals and teams work differently

The new tools and techniques come together to improve upon the way individuals and teams interact.  Just as the first communication tools transformed business, the tools of mobile and continuous productivity change the way interactions happen between individuals and teams.

  • Sense and respond.  Organizations through the PC era were focused on cycles of planning and reacting.  “Normal” was characterized by the long lead time to plan, combined with the time to plan a reaction to events that were often themselves delayed measurements.  New tools are much more real-time, and the information presented represents the whole of the information at work, not just samples and surveys.  The way people work will focus much more on everyone being sensors for what is going on and responding in real-time.  Think of the difference between calling for a car or hailing a cab and using Uber or Lyft, from either the consumer perspective or the business perspective of load balancing cars and awareness of the assets at hand, as representative of sensing and responding rather than planning.
  • Bottom up and network centric.  The idea of management hierarchy or middle management as gatekeepers is being broken down by the presence of information and connectivity.  The modern organization working to be the most productive will foster an environment of bottom up—that is, people closest to the work are empowered with information and tools to respond to changes in the environment.  These “bottoms” of the organization will be highly networked with each other and connected to customers, partners, and even competitors.  The “bandwidth” of this network is seemingly instant, facilitated by information sharing tools.
  • Team and crowd spanning the internal and external.  The barriers of an organization will take on less and less meaning when it comes to the networks created by employees.  Nearly all businesses at scale are highly virtualized across vendors, partners, and customers.  Collaboration on product development, product implementation, and product support takes place spanning information networks as well as human networks.  The “crowd” is no longer a mob characterized by comments on a blog post or web site, but can be structured and systematically tapped with rich demographic information to inform decisions and choices.
  • Unstructured work rhythm.  The highly structured approach to work that characterized the 20th century was created out of a necessity for gathering, analyzing, and presenting information for “costly” gatherings of time constrained people and expensive computing.  With the pace of business and product change enabled by software, there is far less structure required in the overall work process.  The rhythm of work is much more like routine social interactions and much less like daily, weekly, monthly staff meetings.  Industries like news gathering have seen these radical transformations, as one example.

Data becomes pervasive (and big)

With software capabilities come ever-increasing data and information.  While the 20th century enabled the collection of data, and to a large degree the analysis of data, to yield ever-improving decisions in business, the prevalence of continuous data again transforms business.

  • Sharing data continuously.  First and foremost, data will now be shared continuously and broadly within organizations. The days when reports were something for management, and management waited until the end of the week or month to disseminate filtered information, are over.  Even though financial data has been relatively available, we’re now able to see how products are used, troubleshoot problems customers might be having, understand the impact of small changes, and try out alternative approaches.  Modern organizations will provide tools that enable the continuous sharing of data through mobile-first apps that don’t require connectivity to corporate networks or systems chained to desktop resources.
  • Always up to date.  The implication of continuously sharing information means that everyone is always up to date.  When having a discussion or meeting, the real-world numbers can be pulled up right then and there in the hallway or meeting room.  Members of teams don’t spend time figuring out if they agree on numbers, where they came from, or when they were “pulled”.  Rather, the tools define the numbers people are looking at, and the data in those tools is the one true set of facts.
  • Yielding best statistical approach informed by telemetry (induction).  The notion that there is a “right” answer is as antiquated as the printed report.  We can now all admit that going to a meeting with a printed-out copy of “the numbers” is not worth the debate over the validity or timeframe of those numbers (“the meeting was rescheduled, now we have to reprint the slides”).  Meetings now are informed by live data using tools such as Mixpanel or live reporting from Workday, Salesforce, and others.  We all know now that “right” is the enemy of “close enough”, given that the datasets we can work with are truly based on a census and not surveys.  This telemetry facilitates an inductive approach to decision-making.
  • Valuing more usage.  Because of the ability to truly understand the usage of products—movies watched, bank accounts used, limousines taken, rooms booked, products browsed and more—the value of having more people using products and services increases dramatically.  Share matters more in this world because with share comes the best understanding of potential growth areas and opportunities to develop for new scenarios and new business approaches.
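The census-not-surveys point above can be made concrete with a short sketch. The event names and fields here are hypothetical; real products stream far richer telemetry to services such as Mixpanel:

```python
from collections import Counter

# Hypothetical telemetry events; in a real product every client
# reports every event, so the dataset is a census, not a sample.
events = [
    {"user": "u1", "action": "share_file", "ts": "2013-08-01T09:12:00"},
    {"user": "u2", "action": "share_file", "ts": "2013-08-01T09:15:00"},
    {"user": "u1", "action": "edit_doc",   "ts": "2013-08-01T10:02:00"},
    {"user": "u3", "action": "share_file", "ts": "2013-08-01T11:47:00"},
]

def usage_report(events):
    """Aggregate raw events into the 'live numbers' a team can pull
    up in a hallway: total actions and distinct users per action."""
    totals = Counter(e["action"] for e in events)
    users = {}
    for e in events:
        users.setdefault(e["action"], set()).add(e["user"])
    return {action: {"count": n, "users": len(users[action])}
            for action, n in totals.items()}

report = usage_report(events)
# e.g. report["share_file"] == {"count": 3, "users": 3}
```

Because every client reports every event, a question like "how many distinct users shared a file today?" is answered from the full population rather than extrapolated from a sample, which is what makes "close enough" numbers trustworthy enough to act on.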

New generation of productivity tools, examples and checklist

Bringing together new technologies and new methods for management has implications that go beyond the obvious and immediate.  We will all certainly be bringing our own devices to work, accessing and contributing to work from a variety of platforms, and seeing our work take place across organization boundaries with greater ease.  We can look very specifically at how things will change across the tools we use, the way we communicate, how success is measured, and the structure of teams.

Tools will be quite different from those that grew up through the desktop PC era.  At the highest level the implications about how tools are used are profound.  New tools are being developed today—these are not “ports” of existing tools for mobile platforms, but ideas for new interpretations of tools or new combinations of technologies.  In the classic definition of innovator’s dilemma, these new tools are less functional than the current state-of-the-art desktop tools.  These new tools have features and capabilities that are either unavailable or suboptimal at an architectural level in today’s ubiquitous tools.  It will be some time, if ever, before new tools have all the capabilities of existing tools.  By now, this pattern of disruptive technologies is familiar (for example, digital cameras, online reading, online videos, digital music, etc.).

The user experience of this new generation of productivity tools takes on a number of attributes that contrast with existing tools, including:

  • Continuous v. episodic. Historically, work took place in peaks and valleys.  Rough drafts were created, then circulated, then distributed after much fanfare (and often watering down).  The inability to stay in contact led to a rhythm that was based on high-cost meetings taking place at infrequent times, often requiring significant devotion of time to catching up. Continuously productive tools keep teams connected through the whole process of creation and sharing.  This is not just the use of adjunct tools like email (and endless attachments) or change tracking used by a small number of specialists, but deep and instant collaboration, real-time editing, and a view that information is never perfect or done being assembled.
  • Online and shared information.  The old world of creating information was based on deliberate sharing at points in time.  Heavyweight sharing of attachments led to a world where each of us became “merge points” for work. We worked independently in silos, hoping not to step on each other, never sure where the true document of record might be or even who had permission to see a document.  New tools are online all the time and by default.  By default, information can be shared and everyone is up to date all the time.
  • Capture and continue.  The episodic nature of work products, along with the general pace of organizations, created an environment where the “final” output carried with it significant meaning (to some).  Yet how often do meetings take place where the presenter apologizes for data that is out of date relative to the image of a spreadsheet or org chart embedded in a presentation or memo?  Working continuously means capturing information quickly and in real-time, then moving on.  There are very few end points or final documents.  Working with customers and partners is a continuous process, and the information is continuous as well.
  • Low startup costs.  Implementing a new system used to be a time-consuming and elaborate process viewed as a multi-year investment and deployment project.  Tools came to define the work process and, more critically, to make it impossibly difficult to change the work process.  New tools are experienced the same way we experience everything on the Internet—we visit a site or download an app and give it a try.  The cost of starting up is a low-cost subscription or even a trial.  Over time more features can be purchased (more controls, more depth), but the key is the very low cost to begin to try out a new way to work.  Work needs change as market dynamics change, and the era of tools preventing change is over.
  • Sharing inside and outside.  We are all familiar with the challenges of sharing information beyond corporate boundaries.  Management and IT are, rightfully, protective of assets.  Individuals struggle with the basics of getting files through firewalls and email guards.  The results are solutions today that few are happy with.  Tools are rapidly evolving to use real identities to enable sharing when needed and cross-organization connections as desired.  Failing to adopt these approaches, IT will be left watching assets leak out and workarounds continue unabated.
  • Measured enterprise integration.  The PC era came to be defined at first by empowerment, as leading-edge technology adopters brought PCs to the workplace.  The mayhem this created was then controlled by IT, which became responsible for keeping PCs running, keeping information and networks secure, and enforcing consistency in organizations for the sake of sharing and collaboration.  Many might (perhaps wrongly) conclude that the consumerization wave defined here means IT has no role in these tasks.  Rather, the new era is defined by a measured approach to IT control and integration.  Tools for identity and device management will come to define how IT integrates and controls—customization or picking and choosing code is neither likely nor scalable across the plethora of devices and platforms that people will use to participate in work processes. The net is to control enterprise information flow, not enterprise information endpoints.
  • Mobile first.  As an example of a transition between the old and new, many see the ability to view email attachments on mobile devices as a way forward.  However, new tools imply this is merely a bridge solution, as mobility will come to trump most everything for a broad set of people.  Deep design for architects, spreadsheets for analysts, or computation for engineers are examples that will likely remain stationary, or at least require unique computing capabilities, for some time. We will all likely be surprised by the pace at which even these “power” scenarios transition, in part, to mobile.  The value of being able to make progress while close to the site, the client, or the problem will become a huge asset for those that approach their professions that way.
  • Devices in many sizes. Until there is a radical transformation of user-machine interaction (input, display), it is likely almost all of us will continue to routinely use devices of several sizes and those sizes will tend to gravitate towards different scenarios (see http://blog.flurry.com/bid/99859/The-Who-What-and-When-of-iPhone-and-iPad-Usage), though commonality in the platforms will allow for overlap.  This overlap will continue to be debated as “compromise” by some.  It is certain we will all have a device that we carry and use almost all the time, the “phone”.  A larger screen device will continue to better serve many scenarios or just provide a larger screen area upon which to operate.  Some will find a small tablet size meeting their needs almost all of the time.  Others will prefer a larger tablet, perhaps with a keyboard.  It is likely we will see somewhat larger tablets arise as people look to use modern operating systems as full-time replacements for existing computing devices.  The implications are that tools will be designed for different device sizes and input modalities.
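The “sharing inside and outside” and “measured enterprise integration” attributes above amount to policy applied to information flow, keyed to real identities, rather than lockdown of endpoints. A minimal sketch, with hypothetical domains and sensitivity levels:

```python
# Hypothetical sketch: IT controls the *flow* of information by
# identity and document sensitivity, not by locking down devices.
ALLOWED_PARTNER_DOMAINS = {"partner.example.com"}

def may_share(document, recipient_email):
    """Decide whether a document may be shared with a real identity.
    The sensitivity levels and domain rules are illustrative."""
    domain = recipient_email.split("@")[-1]
    internal = domain == "corp.example.com"
    if document["sensitivity"] == "public":
        return True
    if document["sensitivity"] == "internal":
        return internal
    if document["sensitivity"] == "partner":
        # Cross-organization sharing by declared identity, not VPN.
        return internal or domain in ALLOWED_PARTNER_DOMAINS
    return False  # restricted: no sharing by default

doc = {"name": "roadmap.key", "sensitivity": "partner"}
```

The point of the sketch is where the check lives: the same rule applies whatever device or platform the recipient uses, so IT governs the information rather than the endpoint, and workarounds like USB sticks lose their appeal.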

It is worth considering a few examples of these tools.  As an illustration, the following lists tools in a few generalized categories of work processes.  New tools are appearing almost every week, as the opportunity for innovation in the productivity space is at a unique inflection point.  These examples are just a few tools that I’ve personally had a chance to experience—I suspect (and hope) that many will want to expand these categories and suggest additional tools (or use this as a springboard for a dialog!).

The architecture and implementation of continuous productivity tools will also be quite different from the architecture of existing tools.  This starts by targeting a new generation of platforms, sealed-case platforms.

The PC era was defined by a level of openness in architecture that created the opportunity for innovation and creativity that led to the amazing revolution we all benefit from today.  An unintended side-effect of that openness was the inherent unreliability over time, security challenges, and general futzing that have come to define the experience many lament.  The new generation of sealed-case platforms—that is, hardware, software, and services that have different points of openness relative to previous norms in computing—provides for an experience that is more reliable over time, more secure and predictable, and less time-consuming to own and use.  The tradeoff seems dramatic (or draconian) to those versed in old platforms where tweaking and customizing came to dominate.  In practice the movement up the stack, so to speak, of the platform will free up enormous amounts of IT budget and resources to allow a much broader focus on the business.  In addition, choice, flexibility, simplicity in use, and ease of using multiple devices, along with a relative lack of futzing, will come to define this new computing experience for individuals.

The sealed-case platforms include iOS, Android, Chromebooks, Windows RT, and others.  These platforms are defined by characteristics such as minimizing APIs that manipulate the OS itself, APIs that enforce lower power utilization (defined background execution), cross-application security (sandboxing), relative assurances that apps do what they say they will do (permissions, App Stores), defined semantics for exchanging data between applications, and enforced controls on access to both user data and app state data.  These platforms are all relatively new, and the “rules” for just how sealed a platform might be and how this level of control will evolve are still being written by vendors.  In addition, devices themselves demonstrate the ideals of the sealed case by restricting the attachment of peripherals and reducing the reliance on kernel-mode software written outside the OS itself.  For many this evolution is as controversial as the transition automobiles made from “user-serviceable” to electronically controlled engines, but the benefits to the humans using the devices are clear.
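The contract these platforms enforce can be modeled abstractly. This is an illustrative toy, not any real OS API: permissions are declared up front (the App Store review moment) and every access is mediated by the platform:

```python
# Toy model of the sealed-case contract: an app may touch only the
# resources it declared at install time, and the platform mediates
# every access (names are hypothetical, not a real OS API).
class SealedPlatform:
    def __init__(self):
        self.apps = {}

    def install(self, app_id, declared_permissions):
        # Permissions are fixed at install time, visible to the user.
        self.apps[app_id] = set(declared_permissions)

    def access(self, app_id, resource):
        # Sandboxing: any access outside the declared set is refused,
        # so apps do (roughly) what they said they would do.
        if resource not in self.apps.get(app_id, set()):
            raise PermissionError(f"{app_id} may not access {resource}")
        return f"{resource} granted to {app_id}"

platform = SealedPlatform()
platform.install("photo_app", {"camera", "photos"})
platform.access("photo_app", "photos")       # allowed
# platform.access("photo_app", "contacts")   # would raise PermissionError
```

The reliability and security benefits described above fall out of this mediation: misbehavior is contained to the declared surface area, which is why these devices futz less and age better than open PCs.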

Building on the sealed case platform, a new generation of applications will exhibit a significant number of the following attributes at the architecture and implementation level.  As with all transitions, debates will rage over the relative strength or priority of one or more attributes for an app or scenario (“is something truly cloud” or historically “is this a native GUI”).  Over time, if history is any guide, the preferred tools will exhibit these and other attributes as a first or native priority, and de-prioritize the checklists that characterized the “best of” apps for the previous era.

The following is a checklist of attributes of tools for continuous productivity:

  • Mobile first. Information will be accessed and actions will be performed mobile first for a vast majority of both employees and customers.  Mobile first is about native apps, which is likely to create a set of choices for developers as they balance different platforms and different form factors.
  • Cloud first.  Information we create will be stored first in the cloud, and when needed (or possible) will sync back to devices.  The days of all of us focusing on the tasks of file management and thinking about physical storage have been replaced by essentially unlimited cloud storage.  With cloud storage comes multi-device access and instant collaboration that spans networks.  Search becomes an integral part of the user experience, along with labels and metadata, rather than a physical hierarchy presenting only a single dimension.  Export to broadly used interchange formats and printing remain as critical and archival steps, but not the primary way we share and collaborate.
  • User experience is platform native or browser exploitive.  Supporting mobile apps is a decision to fully use and integrate with a mobile platform.  While using a browser can and will be a choice for some, even then it will become increasingly important to exploit the features unique to a browser.  In all cases, the usage within a customer’s chosen environment encourages the full range of support for that platform environment.
  • Service is the product, product is the service.  Whether an internal IT or a consumer facing offering, there is no distinction where a product ends and a continuously operated and improving service begins.  This means that the operational view of a product is of paramount importance to the product itself and it means that almost every physical product can be improved by a software service element.
  • Tools are discrete, loosely coupled, limited surface area.  The tools used will span platforms and form factors.  When used this way, monolithic tools that require complex interactions will fall out of favor relative to tools more focused in their functionality.  Doing a smaller set of things with focus and alacrity will provide more utility, especially when these tools can be easily connected through standard data types or intermediate services such as sharing, storage, and identity.
  • Data contributed is data extractable.  Data that you add to a service as an end-user is easily extracted for further use and sharing.  A corollary to this is that data will be used more if it can also be extracted and shared.  Putting barriers in place to sharing data will drive the usage of the data (and tool) lower.
  • Metadata is as important as data.  In mobile scenarios the need to search and isolate information with a smaller user interface surface area and fewer “keystrokes” means that tools for organization become even more important.  The use of metadata implicit in the data, from location to author to extracted information from a directory of people will become increasingly important to mobile usage scenarios.
  • Files move from something you manage to something you use when needed.  Files (and by corollary mailboxes) will simply become tools and not obsessions.  We’re all seeing the advances in unlimited storage along with accurate search change the way we use mailboxes.  The same will happen with files.  In addition, the isolation and contract-based sharing that defines sealed platforms will alter the semantic level at which we deal with information.  The days of spending countless hours creating and managing hierarchies and physical storage structures are over—unlimited storage, device replication, and search make for far better alternatives.  
  • Identity is a choice.  Use of services, particularly consumer facing services, requires flexibility in identity.  Being able to use company credentials and/or company sign-on should be a choice but not a requirement.  This is especially true when considering use of tools that enable cross-organization collaboration. Inviting people to participate in the process should be as simple as sending them mail today.
  • User experience has a memory and is aware and predictive.  People expect their interactions with services to be smart—to remember choices, learn preferences, and predict what comes next.  As an example, location-based services are not restricted to just maps or specific services, but broadly to all mobile interactions where the value of location can improve the overall experience.
  • Telemetry is essential / privacy redefined.  Usage is what drives incremental product improvements along with the ability to deliver a continuously improving product/service.  This usage will be measured by anonymous, private, opt-in telemetry.  In addition, all of our experiences will improve because the experience will be tailored to our usage.  This implies a new level of trust with regard to the vendors we all use.  Privacy will no doubt undergo (or already has undergone) definitional changes as we become either comfortable or informed with respect to the opportunities for better products.   
  • Participation is a feature.  Nearly every service benefits from participation by those relevant to the work at hand.  New tools will not just enable, but encourage, collaboration and communication in real time and connected to the work products.  Working in one place (a document editor) and participating in another (an email inbox) has generally been suboptimal, and now we have alternatives.  Participation is a feature of creating a work product and is ideally seamless.
  • Business communication becomes indistinguishable from social.  The history of business communication having a distinct protocol from social goes back at least to learning the difference between a business letter and a friendly letter in typing class.  Today we use casual tools like SMS for business communication, and while we will certainly be more respectful and clear with customers, clients, and superiors, the reality is that the immediacy of tools that enable continuous productivity will also create a new set of norms for business communication.  The ability to do business communication from any device at any time, alongside social/personal communication on that same device, will also drive a convergence of communication styles.
  • Enterprise usage and control does not make things worse. For enterprises to manage and protect the intellectual property that defines them, including the contributions employees make to that IP, data will need to be managed.  This is distinctly different from managing tools—the days of trying to prevent or manage information leaks by controlling the tools themselves are likely behind us.  People have too many choices and will simply choose tools (often against policy and budgets) that provide for frictionless work with coworkers, partners, customers, and vendors.  The new generation of tools will enable the protection and management of information in ways that do not make using the tools worse or cause people to seek available alternatives.  The best tools will seamlessly integrate with enterprise identity while maintaining the consumerization attributes we all love.
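Several of the attributes above, in particular metadata replacing hierarchy and search replacing file management, can be illustrated with a small sketch.  This is purely illustrative (the class names and the in-memory store are invented for this example, not any particular product's API): a document carries implicit metadata, and retrieval is a query across those dimensions rather than a walk down a single folder tree.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    name: str
    body: str
    # Implicit metadata (author, label, location, ...) instead of a folder path.
    meta: dict = field(default_factory=dict)

class Store:
    """A toy stand-in for a cloud store: search over metadata, no hierarchy."""
    def __init__(self):
        self.docs = []

    def add(self, doc):
        self.docs.append(doc)

    def find(self, **criteria):
        # Query across any combination of metadata dimensions at once,
        # rather than navigating one physical hierarchy.
        return [d for d in self.docs
                if all(d.meta.get(k) == v for k, v in criteria.items())]

store = Store()
store.add(Document("q3-plan", "...", meta={"author": "pat", "label": "planning"}))
store.add(Document("q3-budget", "...", meta={"author": "sam", "label": "planning"}))
store.add(Document("notes", "...", meta={"author": "pat", "label": "personal"}))

print([d.name for d in store.find(label="planning")])
print([d.name for d in store.find(author="pat", label="planning")])
```

The point of the sketch is that adding a second search dimension costs nothing, whereas a folder hierarchy forces one primary dimension up front.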

What comes next?

Over the coming months and years, debates will continue over whether the new platforms and newly created tools will replace, augment, or see only occasional use relative to the tools with which we are all familiar.  Changes as significant as those we are experiencing right now happen two ways, at first gradually and then quickly, to paraphrase Hemingway. Some might find little need or incentive to change. Others have already embraced the changes.  Perhaps those right now on the cusp realize that the benefits of their new device and new apps are gradually taking over their most important work and information needs.  All of these will happen.  This makes for a healthy dialog.

It also makes for an amazing opportunity to transform how organizations make products, serve customers, and do the work of corporations.  We’re on the verge of seeing an entire rewrite of the management canon of the 20th century.  New ways of organizing, managing, working, collaborating are being enabled by the tools of the continuous productivity paradigm shift.

Above all, it makes for an incredible opportunity for developers and those creating new products and services.  We will all benefit from the innovations in technology that we will experience much sooner than we think.

–Steven Sinofsky

Written by Steven Sinofsky

August 20, 2013 at 7:00 am

Juggling multiple platforms and the bumpy road ahead

with 24 comments

Targeting multiple operating systems has been an industry goal, or non-goal depending on your perspective, since some of the earliest days of computing.  For both app developers and platform builders, the evolution of their work follows typical patterns—patterns where their goals might be aligned or manageable in the short term but become increasingly divergent over time.

While history does not always repeat itself, the ingredients for a repeat of cross-platform woes currently exist in the domain of mobile apps (mobile means apps developed for modern sealed-case platforms such as iOS, Android, Windows RT, Windows Phone, Blackberry, etc.)  The network effects of platforms and the “winner take all” state that many believe is reached (or perhaps desirable) influences the behavior and outcome of cross-platform app development as well as platform development.

Today app developers generally write apps targeting several of the mobile platforms.  If you look at the number of “sockets” over the past couple of years, there was an early dominance of iOS followed by large growth of Android.  Several other platforms currently compete for the next round of attention.  Based on the apps in their respective app stores, those two are the leaders among the new platforms. App developers today seeking the largest number of client sockets target at least iOS and Android, often simultaneously. It is too early to pick a winner.

Some would say that the role of cloud services or the browser makes app development less about the “client” socket.  The data, however, suggests that customers prefer the interaction approach and integration capability of apps, and platform builders touting the size of their app stores further evidences that perspective.  Even the smallest amount of “dependency” (for customer or technical reasons) on the client’s unique capabilities can provide benefits or dramatically improve the quality of the overall experience.

In discussions I have had with entrepreneurs, it is clear the approach to cross-platform is shifting from “obviously we will do multiple platforms” to thinking about which platform comes first, second, or third, and how many to do.  Chris Dixon recently had some thoughts about this in the context of modern app development in general (tablets and/or phones).  I would agree that tablets drive a different type of app over time simply because the scenarios can be quite different even with identically capable devices under the hood.  The cross-platform question only gets more difficult if apps take on unique capabilities or user experiences for different sized screens, which is almost certainly the case.

History

The history of cross-platform development is fairly well-known by app developers.

The goal of an app developer is to acquire as many customers as possible or to have the highest quality engagement with a specific set of customers.  In an environment where customers are all using one platform (by platform we mean set of APIs, tools, languages that are used to build an app) the choice for a developer is simple, which is to target the platform APIs in a fully exploitive manner.

The goal of being the “best” app for the platform is a goal shared by both app and platform developers.  The reason for this is that nearly any app will have app competitors and one approach to differentiation will be to be the app that is best on the platform—at the very least this will garner the attention of the platform builder and result in amplification of the marketing and outreach of a given app (for example, given n different banking apps, the one that is used in demonstrations or platform evangelism will be the one that touts the platform advantages).

Once developers are faced with two or more platforms to target, the discussion typically starts with attempting to measure the size of the customer base for each platform (hence the debate today about whether market share or revenue defines a more successful platform).  New apps (at startups or established companies) will start with a dialog that, depending on time or resources, jumps through incredible hoops to attempt to model the platform dynamics.  Questions such as which customers use which platforms, velocity of platform adoption, installed base, likelihood of reaching different customers on platforms, geography of usage, and pretty much every variable imaginable.  The goal is to attempt to define the market impact of either supporting multiple platforms or betting on one platform.  Of course none of these can be known.  Observer bias is inherent in the process, if only because this is all about forecasting a dynamic system based on the behavior of people. But basing a product plan on a rapidly evolving and hard-to-define “market share” metric is fraught with problems.

During this market sizing debate, the development team is also looking at how challenging cross platform support can be.  While mostly objective, just as with the market sizing studies, bias can sneak in.  For example, if the developers’ skills align with one platform or a platform makes certain architectural assumptions that are viewed favorably then different approaches to platform choices become easy or difficult.

Developers that are fluent in HTML might suggest that things be done in a browser or use a mobile browser solution.  Even the business might like this approach because it leverages an existing effort or business model (serving ads for example).  Some view the choices Facebook made for their mobile apps as being influenced by these variables.

As the dialog continues, developers will tend to start to see the inherent engineering costs in trying to do a great job across multiple platforms.  They will start to think about how hard it is to keep code bases in sync, where features will be easier or harder or appear first, or even just sim-shipping across platforms.  Very quickly developers will generally start to feel pulled in an impossible direction by having to be on multiple platforms, and to feel that it is just not viable to have a long-term cross-platform strategy.

The business view will generally continue to drive a view that the more sockets there are the better.  Some apps are inherently going to drive the desire or need for cross-platform support.  Anything that is about communications for example will generally argue for “going where the people are” or “our users don’t know the OS of their connected endpoints” and thus push for supporting multiple platforms.  Apps that are offered as free front ends for services (online banking, buying tickets, or signing up for yoga class) will also feel pressures to be where the customers are and to be device agnostic.  As you keep going through scenarios the business folks will become convinced that the only viable approach is to be on all the popular platforms.

That puts everyone in a very tense situation—everyone is feeling stressed about achieving success. Something has to give though.

We’ve all been there.

Pattern

The industry has seen this cross-platform movie several times.  It might not always be the same and each generation brings with it new challenges, new technologies, and potentially different approaches that could lead to different outcomes. Knowing the past is important.

Today’s cross-platform challenge can be viewed differently, primarily because of a few factors when looking at the challenge from the perspective of an app developer / ISV:

  • App Services.  Much of the functionality for today’s apps resides on software as a service infrastructure.  The apps themselves might be viewed as fairly lightweight front ends to these services, at least for some class of apps or some approaches to app building. This is especially true today as most apps are still fairly “first generation”.
  • Languages and tools.  Today’s platforms are more self-contained in that the languages and tools are also part of the platform.  In previous generations there were languages that could be used across different platforms (COBOL, C, C++) and standards for those languages even if there were platform-specific language extensions.  While there are ample opportunities for shared libraries of “engine” code in many of today’s platforms, most modern platforms are designed around a heavy tilt in favor of one language, and those are different across platforms.  Given the first point, it is fair to say that the bulk of the code (at least initially) on the device will be platform specific anyway.
  • Integration.  Much of what goes on in apps today is about integration with the platform.  Integration has been increasing in each generation of platform shifts.  For example, in the earliest days there was no cross-application sharing, then came the basics through files, then came clipboard sharing.  Today sharing is implicit in nearly every app in some way.

Even allowing for this new context, there is a cycle at work in how multiple, competing platforms evolve.

This is a cycle so you need to start somewhere.

Initially there is a critical mass around one platform. As far as modern platforms go, when iOS was introduced it was (and remains) unique in platform and device attributes, so mobile apps had one place to go and all the new activity was on that platform.  This is a typical first-mover scenario in a new market.

Over time new platforms emerge (with slightly different characteristics) creating a period of time where cross-platform work is the norm.  This period is supported by the fact that platforms are relatively new and are each building out the base infrastructure which tends to look similar across the new platforms.

There are solid technical reasons why cross-platform development seems to work in the early days of platform proliferation.  When new platforms begin to emerge they are often taking similar approaches to “reinventing” what it means to be a platform.  For example, when GUI interfaces first came about the basics of controls, menus, and windows were close enough that knowledge of one platform readily translated to other platforms.  It was technically not too difficult to create mapping layers that allowed the same code to be used to target different platforms.

During this phase of platform evolution the platforms are all relatively immature compared to each other.  Each is focused on putting in place the plumbing that approaches platform design in this new shared view.  In essence the emerging platforms tend to look more similar than different. The early days of web browsers–which many believed were themselves platforms–followed this pattern.  There was a degree of HTML that was readily shared and consistent across platforms.  At least this was the case for a while.

During this time there is often a lot of re-learning that takes place.  The problems solved in the previous generation of platforms become new again.  New solutions to old problems arise, sometimes frustrating developers.  But this “new growth” also brings with it a chance to revisit old assumptions and innovate in new ways, even if the problem space is the same.

Even with this early commonality, things can be a real challenge.  For example, there is a real desire for applications to look and feel “native”.  Sometimes this is about placement of functionality, such as where settings are located.  It could be about the look or style of graphical elements, or the way visual aspects of the platform are reflected in your app.  It could also be about highly marketed features and how well your app integrates with them as evidence of support for the platform.

Along the way things begin to change, and the platforms diverge because of two factors.  First, once the plumbing common to multiple early platforms is in place, platform builders begin to express their unique points of view on how platform services and experiences should evolve.  For example, Android is likely to focus on unique services and how the client interacts with and uses those services. To most, iOS has shown substantially more client-side innovation and first-party experiences. The resulting APIs exposed to developers start to diverge in capabilities, and new API surface areas no longer seem so common between platforms.

Second, competition begins to drive how innovation progresses.  While the first mover might have one point of view, the second (or third) mover might take the same idea of a service or API but evolve it slightly differently.  It might integrate with backends differently or it might have a very different architecture.  Voice input/recognition, maps, and cloud storage are examples of APIs that are appearing on multiple platforms, but the expression of those APIs and the capabilities they support are evolving in such different ways that there are no obvious mappings between them.

Challenges

As the platforms diverge, developers start to make choices about which APIs to support on each platform, or even which platforms to target.  With these choices come a few well-known challenges.

  • Tools and languages.  Initially the tools and languages might be different, but things seem manageable.  In particular, developers look to put as much code as possible in common languages (“platform-agnostic code”) or implement it as a web service (independent of the client device). This is a great strategy, but it does not allow for the reality that a good deal of code (and differentiation) will be platform-specific user experience or integration functionality.  Early on, tools are relatively immature and maybe even rudimentary, which makes it easier to build infrastructure around managing a cross-platform project.  Over time the tools themselves will become more sophisticated and diverge in capabilities.  New IDEs or tools will be required for each platform in order to be competitive, and developers will gravitate to one toolset, leaving them less able to context switch between platforms. At the very least, managing two diverging code bases using different tools becomes highly challenging–even if right now some devs think they have a handle on the situation.
  • User interaction and design (assets).  Early in platform evolution the basics of human interaction tend to be common, and the approaches to digital assets can be fairly straightforward.  As device capabilities diverge (DPI, sensors, screen sizes), the ability for the user interaction to be common also diverges.  What works on one platform doesn’t seem right on another.  Tablet-sized screens introduce a whole other level of divergence to this challenge. Alternate input mechanisms can really divide platform elements (voice, vision, sensors, touch metaphors).
  • Platform integration.  Integrating with a platform early on is usually fairly “trivial”.  Perhaps there are a few places you put preferences or settings, or connect to some platform services such as internationalization or accessibility.  As platforms evolve, where and how to integrate poses challenges for app developers.  Notifications, settings, printing, storage, and more are all places where finding what is “common” between platforms will become increasingly difficult, even impossible.  The platform services for identity, payment, and even integration with third-party services will become increasingly part of the platform API as well. When those APIs are used, other benefits will accrue to developers and/or end-users of apps—and these APIs will be substantially different across platforms.
  • More code in the service.  The new platforms definitely emphasize code in services to provide a way to be insulated from platform changes.  This is a perfect way to implement as much of your own domain as you can.  Keep in mind that the platforms themselves are evolving and growing and so you can expect services provided by the platform to be part of the core app API as well.  Storage is a great example of this challenge.  You might choose to implement storage on your own to avoid a platform dependency.  Such an approach puts you in the storage business though and probably not very competitively from a feature or cost perspective.  Using a third party API can pose the same challenge as any cross-platform library. At the same time, the platforms evolve and likely will implement storage APIs and those APIs will be rich and integrate with other services as well.
  • Cross-platform libraries.  One of the most common approaches developers attempt (and often provided by third parties as well) is to develop or use a library that abstracts away platform differences or claims to map a unique “meta API” to multiple platforms.  These cross-platform libraries are conceptually attractive but practically unworkable over time. Again, early on this can work. Over time the platform divergence is real.  There’s nothing you can do to make services that don’t exist on one platform magically appear on another, or to make APIs that are architecturally very different morph into similar architectures.  Worse, as an app developer you end up relying on essentially a “shadow” OS provided by a team that has a fraction of the resources for updates, tooling, documentation, etc., even if this team is your own dev team. As a counter-example, games commonly use engines across platforms, but they rely on a very narrow set of platform APIs and little integration. Nevertheless, there are those that believe this can be a path (as it is for games).  It is important to keep in mind that the platforms are evolving rapidly and that customers desire well-integrated apps (not just apps that run).
  • Multiple teams.  Absent the ability to share app client code (because of differing languages), keeping multiple teams in sync on the same app is extremely challenging.  Equally challenging is having one team time slice – not only is that mentally inefficient, but maintaining up-to-date skills and knowledge for multiple platforms is challenging.  Even beyond the basics of keeping the feature set the same, there are problems to overcome.  One example is just the timing of releases.  It might be hard enough to keep features in sync and sim ship, but imagine the demand spike for a new release of your app when the platform changes (and maybe even requires a change to your app).  You are then in a position of needing a release for one platform.  But if you are halfway done with new features for your app, you have a very tricky disclosure and/or code management challenge.  These challenges are compounded non-linearly as the number of platforms increases.
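The cross-platform library trap described above can be sketched in a few lines.  This is a deliberately contrived illustration (the platform classes and method names are invented): the moment one platform’s sharing model diverges from another’s, the “meta API” has to invent behavior, and every such divergence becomes another special case its owner must maintain.

```python
class PlatformA:
    def share(self, item):
        # Native model: a system share sheet with no explicit target.
        return f"A-share:{item}"

class PlatformB:
    def send_to(self, item, target):
        # Different model: sharing requires an explicit target.
        return f"B-send:{item}->{target}"

class MetaShareAPI:
    """The 'shadow OS': a thin layer pretending the platforms are the same."""
    def __init__(self, platform):
        self.platform = platform

    def share(self, item):
        if isinstance(self.platform, PlatformA):
            return self.platform.share(item)
        # PlatformB has no direct equivalent, so the layer invents behavior
        # (a hard-coded default target) that matches neither platform's intent.
        return self.platform.send_to(item, target="default")

print(MetaShareAPI(PlatformA()).share("photo"))
print(MetaShareAPI(PlatformB()).share("photo"))
```

As the platforms add richer, divergent capabilities, the branch inside `share` multiplies across every API surface, which is why the abstraction layer ends up needing the resources of a platform team.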

These are a few potential challenges.  Not every app will run into these, and some might not even be real challenges for a particular app domain.  By and large, these are the sorts of things that have dogged developers working on cross-platform approaches across clients, servers, and more over many generations.

What’s next?

The obvious question will continue to be debated, which is if there is a single platform winner or not.  Will developers be able to pick a platform and reach their own business and product goals by focusing on one platform as a way of avoiding the issues associated with supporting multiple platforms?

The only thing we know for sure is that the APIs, tools, and approaches of different platforms will continue to evolve and diverge.  Working across platforms will only get more difficult, not easier.

The new platforms have moved “up the stack” and make it more difficult for developers to isolate themselves from platform changes.  In the old days, developers could re-implement parts of the platform within the app and use that across platforms or even multiple apps.  Developers could hook the system and customize its behavior as they saw fit.  The more sealed nature of platforms (which delivers amazing benefits for end-users) makes it harder for developers to create their own experience and transport it across platforms.  This isn’t new.  In the DOS era, apps implemented their own printing subsystems and character-based user models, all of which got replaced by GUI APIs—to the advantage of developers willing to embrace the richer platforms and of customers who gained a new level of capabilities across apps.

The role of app stores and approval processes, the instant ability for the community to review apps, and the need to break through in the store will continue to drive the need to be great apps on the chosen platforms.

Some will undoubtedly call for standards or some homogenization of platforms.  POSIX in the OS world, Motif in the GUI world, and even HTML for browsers have all been attempts at this.  It is a reasonable goal given we all want our software investments to run on as many devices as possible (and this desire is nothing new).  But is it reasonable to expect vendors to pour billions into R&D to support an intentional strategy of commoditization or a committee design? Vendors believe we’re just getting started in delivering innovation, and so slowing things down this way seems counter-intuitive at best.

Ultimately, the best apps are going to break through, and I think they will be the ones that work with the platform rather than in spite of it, and that don’t duplicate code in the platform but work with it.

It means there are some interesting choices ahead for many players in these ecosystems.

–Steven

# # # # #

Written by Steven Sinofsky

July 8, 2013 at 5:00 am

Designing for scale and the tyranny of choice

with 16 comments

A post by Alex Limi, of Mozilla, Checkboxes that kill your product, is a fascinating read for anyone in the position to choose or implement the feature set of a software project.  What is fascinating is of course the transparency and the admission of the complexity of a modern software product.  Along with this is a bit of a realization that those making the choices in a product are in some ways the cause of the challenge.  Things are not quite so simple, but they are also not so difficult.

Simple

By now we are all familiar with the notion that the best designs are the simplest and most focused designs.  Personified by Apple and in particular the words of Steve Jobs, so much of what makes good products is distilling them down to their essence.  So much of what makes a good product line is only shipping the best products, the smallest set of products.  So much has been written, including even in Smithsonian Magazine, about the love of simplicity that inspired and is expressed in the design language of Apple’s products based on a long history of design.

It is exceedingly difficult to argue against a simply designed product…so long as it does what you want or when it does more than competitive products.

In fact it is so difficult to argue against simplicity that this post won’t even attempt to.  Let’s state emphatically that software should always do only what you need it to do, with the fewest number of steps, and least potential for errors due to complex choices and options.

On the other hand, good luck with that.

Anyone can look at any software product (or web site or hardware product) and remove things, decide things are not valuable to “anyone”, or simply find a new way to prioritize, sort, or display functionality, content, or capability.  That is really easy for anyone who can use a product to do.  It is laudable when designers look back at their own products and reflect on the choices and the rationale behind what, even with the best intentions, became undesired complexity, or paperclips.

The easiest type of simplicity is the kind that you place on a product after it is complete; hindsight is rather good when it comes to evaluating simplicity.  This is simplicity by editing.  You look at a product and point out the complexity, and assume that it is there because someone made some poor assumptions, could not decide, didn’t understand customers, or a whole host of other reasons.

In fact, many choices in products that result in complexity are there because of deliberate choices with a known cost.  Having options and checkboxes costs code, and code costs time in development and testing.  Adding buttons, hinges, or ports is expensive in materials, weight, or even battery life.  Yet designers add these anyway.  While data is not a substitute for strategy, looking at usage data and seeing that nearly every bit of surface area is exercised validates these choices (one could go through Limi’s post and reverse engineer the rationale and point to the reasons for the baggage).

It is enormously difficult in practice to design something with simplicity in mind and express that in a product.  It is an order of magnitude more difficult than that to maintain that over time as you hope for your asset to remain competitive and state of the art.

Difficult

Software is a unique product in that the cost of complexity is rarely carried by the customer.  The marginal cost of more code is effectively zero.  While you can have lots of options, you can also hide them all rather than presenting them front and center.  While you can have extra code, it is entirely possible to keep it out of the execution path if you do the work.  While you can inherit the combinatorics of a complex test matrix, you can use data and equivalence classing to make good engineering assumptions about what will really matter.  Because these mitigations make complexity so cheap to carry, software is especially difficult to design simply and to keep in a simple state even if you accomplish a simple design once.
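The equivalence-classing mitigation mentioned above can be made concrete.  A minimal sketch (the option names are invented for illustration): if one option is pure presentation and never changes which code runs, then configurations that differ only in that option collapse into a single test class.

```python
from itertools import product

# Hypothetical example: three boolean options (names invented for
# illustration).  The full matrix is every combination of settings.
OPTIONS = ["autosave", "legacy_mode", "compact_ui"]
matrix = [dict(zip(OPTIONS, bits)) for bits in product([False, True], repeat=3)]

def code_paths(config):
    # Equivalence classing: in this sketch "compact_ui" is pure
    # presentation and does not change which code runs, so
    # configurations differing only in that option fall into the
    # same class.
    return (config["autosave"], config["legacy_mode"])

# Keep one representative configuration per distinct class.
representatives = {}
for config in matrix:
    representatives.setdefault(code_paths(config), config)

print(len(matrix))           # full matrix: 2**3 = 8 configurations
print(len(representatives))  # only 4 classes worth testing separately
```

The full matrix has eight configurations, but only four classes need separate testing; real products apply the same reasoning at much larger scale to keep the test matrix tractable.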

Here are seven reasons why simplicity in software design is incredibly difficult:

  • New feature: enable/disable.  You add a new feature to your product but are worried about its acceptance.  Perhaps your new feature is an incredibly innovative, but different, way to do something everyone does, or perhaps it is based on a technology that you know has limits, so you decide to add the checkbox.  The easy thing to do is just to add a “do you want to use this?” prompt, or to offer a “keep doing this?” option the first time the feature appears in action.  Of course you also have to maintain a place to undo that choice or offer it again. Play this out over the next release and evolution of the feature and you can see where this leads.
  • New feature: can’t decide. You add a new feature and it clearly has a modality where some people think it should go left and others think it should go right (or scroll up or down, for example).  So of course the easy thing to do is just add an option to allow people to choose.  Play this out over time and imagine what happens if you decide to add a new way or you enhance one of left or right and you can see the combinatorics exploding right before your eyes.
  • New way of doing something: enable compatibility.  You add a new way to do something to your product as it evolves.  Just to be safe you think it would be best to also have the old way of doing something stick around so you add back that option—of course software makes this easy because you just leave the old code around.  But it isn’t so easy because you’re also adding new features that rely on the new foundation, so do you add those twice? Play this out as the new way of doing something evolves and people start to ask to evolve the old thing as well and the tyranny of options gets to you quickly.
  • Remove feature: re-enable. As your product evolves you realize that a feature is no longer valid, useful, or comes at too high a cost (in complexity, data center operations, etc.) to maintain so you decide to remove it.  Just to be safe you think it is a good idea (or customers require it to be a good idea) to leave in an option to re-enable that old feature.  No big deal.  Of course it is important to do this because telemetry shows that some people used the feature (no feature is used by zero people).  Play this out and you have to ask yourself if you can ever really remove a feature, even if there is a material cost to the overall system for it to be there.
  • Environmental choice: customize.  Your product is used in a wide variety of environments: consumer to enterprise, desktop to mobile, managed to unmanaged, private network to internet, first-time to experienced users, developers to end-users, and so on.  The remarkable thing about software is the ability to dynamically adjust itself to a different usage style with the simple addition of some code and customization.  The depth and breadth of this customization potential makes for a remarkably sticky and useful product, so adding these customizations seems like a significant asset.  Play this out over time and the combinatorics can overwhelm even the largest of IT administrators or test managers.  Even if you do the work to design the use of these customizations so they are simple, the ability to evolve your designs over time with these constraints in place itself becomes a constraint, since each customization is likely highly valued by some set of customers.
  • Personality: customize.  You design a product with a personality that reflects the design language across every aspect of the product from user interface, documentation, packaging, web site, branding and logos, and more.  Yet no matter what you do, a modern product should also reflect the personality of the owner or human using it.  You see no problem offering some set of options for this (setting some background or color choices), but of course over time as your product evolves there is a constant demand for more of these.  At some extremes you have requests to re-skin the entire product and yet no matter what you try to do it might never be enough customization. Play this out over time and you face challenges in evolving your own personality as it needs to incorporate customizations that might not make sense anymore.  Personality starts to look a lot like features with code not just data.
  • Competitive: just in case. The above design choices reflect complexity added during the development of the product.  It is also possible to take on complexity that arises not out of your own choices, but out of responding to the market.  Your main competitor takes a different approach to something you offer and markets the heck out of it.  You get a lot of pressure to offer the feature the same way.  The natural reaction is to put in a quick checkbox that renders some element of the UI your way as well as the competitor’s way.  You battle it out, but rest assured you have the objection-handler in place so sales and marketing don’t stress.  Play this out and you can see how these quick checkboxes turn into features you have to design around over time.
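Played out over releases, the common thread in most of these choices is combinatorics: each independent “just to be safe” checkbox doubles the number of product states to design, document, test, and support.  A back-of-the-envelope sketch:

```python
def product_states(num_options: int) -> int:
    # Each independent boolean option doubles the state space
    # the product can be in.
    return 2 ** num_options

for n in (1, 5, 10, 20):
    print(f"{n:>2} checkboxes -> {product_states(n):,} states")
```

Twenty accumulated checkboxes already imply over a million distinct configurations, which is why the “can’t decide” and “just in case” options above compound so quickly.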

Of course we all have our favorite illustrations of each of these.  You can imagine these at a very gross level or even at a very fine level.  The specifics don’t really matter because each of us can see immediately when we’re hitting up against a choice like this.  Play the design choice out over the evolution of the product/feature and see where it goes.

It is important to see that at the time these are not dumb motivations.  These are all legitimate product design approaches and tradeoffs.  The other side of simplicity is that while you’re designing something simple, you know it will definitely not appeal to some set of customers.  You can take a bet on convincing people, or you can play it a bit safer.  Product development is uncertain, and only hindsight is 20/20.  For every successful product that is simple, there are a lot of simplicity approaches that did not pan out over time. Minimal can mean simple, or just a minimal number of customers.

What can you do?

Evolution

Software is definitely in a new era.  The era of excess configurability, or even infinite customization, is behind us.  The desire for secure, robust devices with long battery life, along with incredible innovations in hardware that bring so many peripherals on board, means that designers can finally look at the full package of software+hardware through a different lens.

If you draw an analogy to the evolution of the automobile, then one might see where the software world is today.  And because we see software and hardware inextricably connected today, let’s say that this applies to the entire package of the device in your hand or bag.

In the golden era, as some would say, of automobiles it was the height of hip to know the insides of your car.  A fun after school project for a guy in high school would be to head home, pop the hood on the Chevy, and tune the engine.  Extra money earned on the side would go to custom parts, tools, and tweaking your wheels.  You expressed yourself through your car.

Then along came the innovations in quality and reliability from the car makers of the ’80s.  They took a different approach.

When I was 16 my father took me to look at cars.  We stopped by the dealer and during the pitch he asked the salesman to pop open the hood. I am sure the look on my face was priceless.  I had literally no idea what to look for or what to see.  Turns out my father didn’t either.  Electronic fuel injection, power steering, and a whole host of other things had replaced the analog cars he knew and loved (and currently drove).  Times had changed.

I have not looked under the hood of a car since.  My expectation of a car is that it just works.  I don’t open the hood.  I don’t service it myself.  I don’t replace parts myself.  I can adjust the seats, set the radio presets, and put an Om sticker on the back.  I want the car’s design to express my personality, but I don’t want to spend my time and energy worrying if I broke the car doing so.  Technology has advanced to the point where popping the hood on a car is no longer a hobby.  The reliability of being able to drive a 2002 Prius for over 100,000 miles without worrying comes with fewer options and customizations, but I got a car that cost less to operate, took less time as an owner to maintain, and was safer in every way.  Sold.

Today’s sealed Ultrabooks and tablets, app stores, and even signed drivers represent this evolution.  Parts that don’t wear out, peripherals that you don’t need to tune or adjust at the software level; thin, light, robust, reliable.  Sold.

Approach – Point of View

How can you approach this in the products you design? As you can imagine there is a balance.  The balance is between your point of view and making sure you truly meet customer needs.

A point of view has to be one of the best tools of design.  A point of view is the reason for being, the essence, the very nature of a product.  In a world where just about every product (but not all) is made of similar ingredients and solves problems that can kind-of, sort-of be solved in other ways, what distinguishes one product from another is a unique point of view that is followed through in the design.

A point of view says who the product is for and why.  A point of view says the benefits of a product.  A point of view says why this product is better, faster, and differentiated in the marketplace.

A point of view also guides you in deciding how to be simple.  Simplicity comes from adhering to your point of view.  If you have a clear point of view then simplicity follows from that.  Is something consistent with your point of view?  If so then it sounds like a candidate.  If not, then why are you considering it?  Is your point of view changing (it can, but be careful)?

But we don’t all have the luxury of declaring a point of view and sticking to it.  You can share your point of view with customers, or potential customers.  You can articulate your point of view to the market.  You can also adapt and change.  The market also adapts and changes.

That’s why product development is so exciting and interesting.  The answers are not so simple and the journey is complex, even if the goal is product simplicity.

–Steven

PS: Interested in a Harvard Business School teaching case study on this topic?  Then perhaps check out http://www.hbs.edu/faculty/Pages/item.aspx?num=34113 (Microsoft Office 2007).  The case study is paid content, but I receive no compensation from the link.

Written by Steven Sinofsky

March 19, 2013 at 11:30 am

Posted in posts

