I was talking with the founder/CEO of an enterprise startup about what it is like to disrupt a sizable incumbent. In the case we discussed, the disruptive technology was losing traction and the incumbent was regaining control of the situation, getting back off its heels, and generally feeling like it had fended off the “attack” on a core business. This causes a lot of consternation at the disrupting startup as deals aren’t won, reviews and analyst reports swing the wrong way, and folks start to question the direction. If there really is product/market fit, then hold on and persevere, because almost always the disruption is still going to happen. Let’s look at why.
The most important thing to realize about a large successful company reacting to a disruptive market entry is that every element of the company just wants to return to “normal” as quickly as possible. It is that simple.
Every action in response to being disrupted is dictated by a desire to avoid changing things and to maintain the status quo.
If the disruption is a product feature, the motion is figuring out how to tell customers the feature isn’t that important (best case) or how to quickly add something along the lines of the feature and move on (worst case). If the disruption is a pricing change then every effort is about how to “manage customers” without actually changing the price. If the disruption is a new and seemingly important adjacent product, then the actions focus on how to point out that such a product isn’t really necessary. Across the spectrum of potential activities, it is why the early competitive responses are often dismissive or outwardly ignore the challenger. Aside from the normal desire to avoid validating a new market entry by commenting, it takes a lot of time for a large enterprise to go through the work to formulate a response and gain consensus. Therefore an articulate way of changing very little has a lot of appeal.
Status quo is the ultimate goal of the incumbent.
Once a disruptive product gains enough traction that a more robust response is required, the course of action is almost always one that is designed to reduce changes to plans, minimize effort overall, and to do just enough to “tie”. Why is that? Because in a big company “versus” a small company, enterprise customers tend to see “a tie as a win to the incumbent”. Customers have similar views about having their infrastructure disrupted and wish to minimize change, so goals are aligned. The idea of being able to check off that a given scenario is handled by what you already own makes things much easier.
Keep in mind that in any organization, large or small, everyone is at or beyond capacity. There’s no bench, no free cycles. So any change in immediate work necessarily means something isn’t going to get done. In a large organization these challenges are multiplied by scale. People worry about their performance reviews; managers worry about the commitments to other groups; sales people worry about quarterly quotas. All of these worries are extremely difficult to mitigate because they cross layers of managers and functions.
As much as a large team or leader would like to “focus” or “wave a wand” to get folks to see the importance of a crisis, the reality of doing so is itself a massive change effort that takes a lot of time.
This means that the actions taken often follow a known pattern:
- Campaign. The first thing that takes place is a campaign of words and positioning: the checklist of features, the benefits of the existing product, the breadth of the incumbent’s offering compared to the new product, and so on. If the new product is cheaper, then the focus turns to value. Almost always the campaign emphasizes the depth, breadth, reliability, and comfort of the incumbent’s offer. A campaign might also be quite negative, focusing on fear, compatibility with existing infrastructure, or a conventional-wisdom weakness of the disruptor, or it might introduce a fairly big leap of repositioning for the incumbent product. A good example of this is how on-premises server products have competed with SaaS by highlighting the lack of flexibility or potential security issues around the cloud. This approach is quick to wind up and easy to wind down. Once it starts to work, you roll it out all over the world and execute. Once the deals are won back, the small tiger team that created the campaign goes back to articulating the product as originally intended, aka normal.
- Partnership. Quite often there can be a competitive response of best practices or a third-party tool/add-on that appears to provide some similar functionality. The basic idea is to use someone else to offer the benefit articulated by a disruptive product. Early in the SaaS competition, the on-premises companies were somewhat quick to partner with “hosting” companies who would simply build out a dedicated rack of servers and run the traditional software “as a service”. This repotting plants approach to SaaS has the benefit that once the immediate crisis is mitigated, either the need to actually offer and support the partnership ends or the company just becomes committed to this new sales channel for existing products. Again, everything else continues as it was.
- Special effort. Every once in a while the pressure to compete is so great internally that the engineering team signs up for a “one off” product change or special feature. Because the engineering team was already booked, a special effort is often something carefully negotiated and minimized in scope and effort. Engineering minimizes it internally to avoid messing up dependencies and other features. Sales will be specific in what they expect the result to do, because while the commitment is being made they will likely begin to articulate it to red-hot customer situations. At the extreme, it is not uncommon for the engineering team to suggest to the sales organization that a consultant or third party could use some form of extensibility in the product to implement something that looks like the missing work. The implication of doing enterprise work in a way that minimizes impact is that, well, the impact is minimized. Without the proper architecture or an implementation at the right level in the stack, the effort ultimately looks incomplete or like a one-off. Almost all the on-premises products attempting to morph into cloud products exhibit this in the form of features that used to be there simply not being available in the “SaaS version”. With enough wins, it is quite likely that the special-effort feature never gets used. Again, the customer is just as likely to be happy with the status quo.
All of these typical responses have the attribute that they can be ignored by the vast majority of resources on a business. Almost no one has to change what they are doing while the business is responding to a disruptive force.
Large incumbents love when they can fend off competitors with minimal change.
Once the initial wave of competitive wins settles in and the disruptive products lose, there is much rejoicing. The teams get back to what they were doing and declare victory. Since most of the team didn’t change anything, folks assume that this was just another competitor with inferior products, technology, or approaches that their superior product fended off. Existing customers are happy. All is good.
Or is it?
This is exactly where the biggest opportunity exists for a disruptive market entry. The level of complacency that settles into an incumbent after the first round of victories is astounding. There’s essentially a reinforcing feedback loop because there was little or no dip in revenue (in fact if revenue was growing before then it still is), product usage is still there, customers go back to asking for features the same as they were before, sales people are making quota, and so on. Things went back to normal for the incumbent.
In fact, just about every disruption happens this way–the first round or first approaches don’t quite take hold.
Why is this?
- Product readiness can improve. The most common reason is that the disruptive product simply isn’t ready. The feature set, scale, enterprise controls, or other attributes are deficient. A well-run new product will have done extensive early customer work, will know what is missing, and will balance launching with these deficiencies against the ability to continue to develop the product. In a startup environment, a single company rarely gets a second shot with customers, so calibrating readiness is critical. Relative to the broader category of disruption, the harsh reality is that if the disruptor’s idea or approach is the right one but the entry into the market was premature, the learning will apply to the next entry. That’s why the opportunity for disruption is still there. It is why time to market is not always an advantage, and being able to apply learning from failures (your own or another entry) can be so valuable.
- Missing ingredient gets added. Often a disruptive product makes a forward-looking bet on some level of enterprise infrastructure or capability as a requirement for the new product to take hold. The incumbent latches on to this missing ingredient and uses it to paint an overall picture of unreadiness. If there’s one thing that disruptors know, it is not to bet against Moore’s law. If your product takes more compute, more storage, or more bandwidth, these are most definitely short-term issues. Obviously there’s no room for sloppy work, but by and large time is on your side. Much of the disruption around mobile computing was slowed down by the enterprise issues of managing budgets and allocating “mobile phones”. Companies did not see it as likely that ever-better phones would become essential for life outside of work and overwhelm the managed phone process. Similarly, the lack of high-speed mobile networks was seen as a barrier, all the while the telcos were spending billions to build them out.
- Conventional wisdom will change. One of the most fragile elements of change is the mindset of those who need to change. This is even more true in enterprise computing. In a world where the average tenure of a CIO is constantly under pressure, where budgets are always viewed with skepticism, and where the immediate needs far exceed resources and time, making the wrong choice can be very costly. Thus conventional wisdom plays an important part in the timeline for a disruption taking hold. From the PC to the GUI to client/server to the web to the cloud to the acceptance of open source, each of these went through a period where conventional wisdom held that they were inappropriate for the enterprise. Then one day we all wake up to a world where the approach is required for the enterprise. The new products that are forward-looking and weather the negativity of those wishing to maintain the status quo get richly rewarded when the conventional wisdom changes.
- Legacy products can’t change. Ultimately the best reason to persevere is because the technology products you’re disrupting simply aren’t going to be suited to the new world (new approach, new scenarios, new technologies). When you re-imagine how something should be, you have an inherent advantage. The very foundation of technology disruption continues to point out that incumbents with the most to lose have the biggest challenges leading through generational changes. Many say the enterprise software world, broadly speaking, is testing these challenges today.
All of these are why disruption has the characteristic of seeming to take a much longer time to take hold than expected, but when it does take hold it happens very rapidly. One day a product is ready for primetime. One day a missing ingredient is ubiquitous. One day conventional wisdom just changes. And legacy products really struggle to change enough (sometimes in business or sometimes in technology) to be “all in” players in the new world.
Of course all this hinges on both a disruptive idea and the execution of it. All the academic theory and role-playing in the world cannot offer wisdom on knowing whether you’re on to something. That’s where the team’s and entrepreneur’s intuition, perseverance, and adaptability to new data are the most valuable assets.
The opportunity and ability to disrupt the enterprise takes patience and, more often than not, several attempts by one or more players learning and adjusting the overall approach. The intrinsic strengths of the incumbent mean that new products can usually be defended against for a short time. At the same time, the organization and operation of a large and successful company also mean that there is near certainty that a subsequent wave of disruption will be stronger, better, and more likely to take hold, simply because of the incumbent’s desire to get back to “normal”.
–Steven Sinofsky (@stevesi)
Innovation and disruption are the hallmarks of the technology world, and hardly a moment passes when we are not thinking, doing, or talking about these topics. While I was speaking with some entrepreneurs recently on the topic, the question kept coming up: “If we’re so aware of disruption, then why do successful products (or companies) keep getting disrupted?”
Good question, and here’s how I think about answering it.
As far back as 1962, Everett Rogers began his groundbreaking work defining the process and diffusion of innovation. Rogers defined the spread of innovation in the stages of knowledge, persuasion, decision, implementation and confirmation.
Those powerful concepts, however, do not fully describe disruptive technologies and products, and the impact on the existing technology base or companies that built it. Disruption is a critical element of the evolution of technology — from the positive and negative aspects of disruption a typical pattern emerges, as new technologies come to market and subsequently take hold.
A central question to disruption is whether it is inevitable or preventable. History would tend toward inevitable, but an engineer’s optimism might describe the disruption that a new technology can bring more as a problem to be solved.
Four Stages of Disruption
For incumbents, the stages of innovation for a technology product that ultimately disrupts follow a fairly well-known pattern. While that doesn’t grant us the predictive powers to know whether an innovation will ultimately disrupt, we can use a model to understand what design choices to prioritize, and when. In other words, the pattern is likely necessary, but not sufficient, to fend off disruption. Value exists in identifying the response and emotions surrounding each stage of the innovation pattern, because, as with disruption itself, the actions/reactions of incumbents and innovators play important roles in how parties progress through innovation. In some ways, the response and emotions to undergoing disruption are analogous to the classic stages of grieving.
Rather than the five stages of grief, we can describe four stages that comprise the innovation pattern for technology products: disruption of incumbent; rapid and linear evolution; appealing convergence; and complete reimagination. Any product line or technology can be placed in this sequence at a given time.
The pattern of disruption can be thought of as follows, keeping in mind that at any given time for any given category, different products and companies are likely at different stages relative to some local “end point” of innovation.
Stage One: Disruption of Incumbent
A moment of disruption is where the conversation about disruption often begins, even though determining that moment is entirely hindsight. (For example, when did BlackBerry get disrupted by the iPhone, film by digital imaging or bookstores by Amazon?) A new technology, product or service is available, and it seems to some to be a limited, but different, replacement for some existing, widely used and satisfactory solution. Most everyone is familiar with this stage of innovation. In fact, it could be argued that most are so familiar with this aspect that collectively our industry cries “disruption” far more often than is actually the case.
From a product development perspective, choosing whether a technology is disruptive at a potential moment is key. If you are making a new product, then you’re “betting the business” on a new technology — and doing so will be counterintuitive to many around you. If you have already built a business around a successful existing product, then your “bet the business” choice is whether or not to redirect efforts to a new technology. While difficult to prove, most would generally assert that new technologies that are subsequently disruptive are bet on by new companies first. The very nature of disruption is such that existing enterprises see more downside risk in betting the company than they see upside return in a new technology. This is the innovator’s dilemma.
The incumbent’s reactions to potential technology disruptions are practically cliche. New technologies are inferior. New products do not do all the things existing products do, or are inefficient. New services fail to address existing needs as well as what is already in place. Disruption can seem more expensive because the technologies have not yet scaled, or can seem cheaper because they simply do less. Of course, the new products are usually viewed as minimalist or as toys, and often unrelated to the core business. Additionally, business-model disruption has similar analogues relative to margins, channels, partners, revenue approaches and more.
The primary incumbent reaction during this stage is to essentially ignore the product or technology — not every individual in an organization, but the organization as a whole often enters this state of denial. One of the historical realities of disruption is uncovering the “told you so” evidence, which is always there, because no matter what happens, someone always said it would. The larger the organization, the more individuals probably sent mail or had potential early-stage work that could have avoided disruption, at least in their views (see “Disruption and Woulda, Coulda, Shoulda” and the case of BlackBerry). One of the key roles of a company is to make choices, and choosing a riskier course of change versus defending the current approach is exactly the kind of choice that hamstrings an organization.
There are dozens of examples of disruptive technologies and products. And the reactions (or inactions) of incumbents are legendary. One example that illustrates this point would be the introduction of the “PC as a server.” This has all of the hallmarks of disruption. The first customers to begin to use PCs as servers — for application workloads such as file sharing, or early client/server development — ran into incredible challenges relative to the mini/mainframe computing model. While new PCs were far more flexible and less expensive, they lacked the reliability, horsepower and tooling to supplant existing models. Those in the mini/mainframe world could remain comfortable observing the lack of those traits, almost dismissing PC servers as not “real servers,” while they continued on their path further distancing themselves from the capabilities of PC servers, refining their products and businesses for a growing base of customers. PCs as servers were simply toys.
At the same time, PC servers began to evolve and demonstrate richer models for application development (rich client front-ends), lower cost and scalable databases, and better economics for new application development. With the rapidly increasing demand for computing solutions to business problems, this wave of PC servers fit the bill. Soon the number of new applications written in this new way began to dwarf development on “real servers,” and the once-important servers became legacy relative to PC-based servers for those making the bet or shift. PC servers would soon begin to transition from disruption to broad adoption, but first the value proposition needed to be completed.
Stage Two: Rapid Linear Evolution
Once an innovative product or technology begins rapid adoption, the focus becomes “filling out” the product. In this stage, the product creators are still disruptors, innovating along the trajectory they set for themselves, with a strong focus on early-adopter customers, themselves disruptors. The disruptors are following their vision. The incumbents continue along their existing and successful trajectory, unknowingly sealing their fate.
This stage is critically important to understand from a product-development perspective. As a disruptive force, new products bring to the table a new way of looking at things — a counterculture, a revolution, an insurgency. The heroic efforts to bring a product or service to market (and the associated business models) leave a lot of room to improve, often referred to as “low-hanging fruit.” The path from where one is today to the next six, 12, 18 months is well understood. You draw from the cutting-room floor of ideas that got you to where you are. Moving forward might even mean fixing or redoing some of the earlier decisions made with less information, or out of urgency.
Generally, your business approach follows the initial plan as well, and has analogous elements of insurgency. Innovation proceeds rapidly at this point. Your focus is on the adopters of your product — your fellow disruptors (disrupting within their context). You are adding features critical to completing the scenario you set out to develop.
To the incumbent leaders, you look like you are digging in your heels for a losing battle. In their view, your vector points in the wrong direction, and you’re throwing good money after bad. This only further reinforces the view of disruptors that they are heading in the right direction. The previous generals are fighting the last war, and the disruptors have opened up a new front. And yet, the traction in the disruptor camp becomes undeniable. The incumbent begins to mount a response. That response is somewhere between dismissive and negative, and focuses on comparing the products by using the existing criteria established by the incumbent. The net effect of this effort is to validate the insurgency.
Stage Three: Appealing Convergence
As the market redefinition proceeds, the category of a new product starts to undergo a subtle redefinition. No longer is it enough to do new things well; the market begins to call for the replacement of the incumbent technology with the new technology. In this stage, the entire market begins to “wake up” to the capabilities of the new product.
As the disruptive product rapidly evolves, the initial vision becomes relatively complete (realizing that nothing is ever finished, but the scenarios overall start to fill in). The treadmill of rapidly evolving features begins to feel somewhat incremental, and relatively known to the team. The business starts to feel saturated. Overall, the early adopters are now a maturing group, and a sense of stability develops.
Looking broadly at the landscape, it is clear that the next battleground is to go after the incumbent customers who have not made the move. In other words, once you’ve conquered the greenfield you created, you check your rearview mirror and look to pick up the broad base of customers who did not see your product as market-ready or scenario-complete. To accomplish this, you look differently at your own product and see what is missing relative to the competition you just left in the dust. You begin to realize that all those things your competitor had that you don’t may not be such bad ideas after all. Maybe those folks you disrupted knew something, and had some insights that your market category could benefit from putting to work.
In looking at many disruptive technologies and disruptors, the pattern of looking back to move forward is typical. One can almost think of this as a natural maturing; you promise never to do some of the things your parents did, until one day you find yourself thinking, “Oh my, I’ve become my parents.” The reason that products are destined to converge along these lines is simply practical engineering. Even when technologies are disrupted, the older technologies evolved for a reason, and those reasons are often still valid. The disruptors have the advantage of looking at those problems and solving them in their newly defined context, which can often lead to improved solutions (easier to deploy, cheaper, etc.) At the same time, there is also a risk of second-system syndrome that must be carefully monitored. It is not uncommon for the renegade disruptors, fresh off the success they have been seeing, to come to believe in broader theories of unification or architecture and simply try to get too much done, or to lose the elegance of the newly defined solution.
Stage Four: Complete Reimagination
The last stage of technology disruption is when a category or technology is reimagined from the ground up. While one can consider this just another disruption, it is a unique stage in this taxonomy because of the responses from both the legacy incumbent and the disruptor.
Reimagining a technology or product is a return to first principles. It is about looking at the underlying assumptions and essentially rethinking all of them at once. What does it mean to capture an image, provide transportation, share computation, search the Web, and more? The reimagined technology often has little resemblance to the legacy, and often has the appearance of making even the previous disruptive technology appear to be legacy. The melding of old and new into a completely different solution often creates whole new categories of products and services, built upon a base of technology that appears completely different.
To those who have been through the first disruption, their knowledge or reference frame seems dated. There is also a feeling of being unable to keep up. The words are known, but the specifics seem like rocket science. Where there was comfort in the convergence of ideas, the newly reimagined world seems like a whole new generation, and so much more than a competitor.
In software, one way to think of this is generational. The disruptors studied the incumbents in university, and then went on to use that knowledge to build a better mousetrap. Those in university while the new mousetrap was being built benefited from learning from both a legacy and new perspective, thus seeing again how to disrupt. It is often this fortuitous timing that defines generations in technologies.
Reimagining is important because the breakthroughs so clearly subsume all that came before. What characterizes a reimagination most is that it renders the criteria used to evaluate the previous products irrelevant. Often there are orders of magnitude difference in cost, performance, reliability, service and features. Things are just wildly better. That’s why some have referred to this as the innovator’s curse. There’s no time to bask in the glory of the previous success, as there’s a disruptor following right up on your heels.
A recent example is cloud computing. Cloud computing is a reimagination of both the mini/mainframe and PC-server models. By some accounts, it is a hybrid of those two, taking the commodity hardware of the PC world and the thin client/data center view of the mainframe world. One would really have to squint in order to claim it is just that, however, as the fundamental innovation in cloud computing delivers entirely new scale, reliability and flexibility, at a cost that upends both of those models. Literally every assumption of mainframe and client/server computing was revisited, intentionally or not, in building modern cloud systems.
For the previous incumbent, it is too late. There’s no way to sprinkle some reimagination on your product. The logical path, and the one most frequently taken, is to “mine the installed base,” and work hard to keep those customers happy and minimize the mass defections from your product. The question then becomes one of building an entirely new product that meets these new criteria, but from within the existing enterprise. The number of times this has been successfully accomplished is diminishingly small, but there will always be exceptions to the rule.
For the previous disruptor and new leader, there is a decision point that is almost unexpected. One might consider the drastic — simply learn from what you previously did, and essentially abandon your work and start over using what you learned. Or you could be more incremental, and get straight to the convergence stage with the latest technologies. It feels like the ground is moving beneath you. Can you converge rapidly, perhaps revisiting more assumptions, and show more flexibility to abandon some things while doing new things? Will your product architecture and technologies sustain this type of rethinking? Your customer base is relatively new, and was just feeling pretty good about winning, so the pressure to keep winning will be high. Will you do more than try to recast your work in this new light?
The relentless march of technology change comes faster than you think.
So What Can You Do?
Some sincerely believe that products, and thus companies, disrupt and then are doomed to be disrupted. Like a Starship captain when the shields are down, you simply tell all hands to brace themselves, and then see what’s left after the attack. Business and product development, however, are social sciences. There are no laws of nature, and nothing is certain to happen. There are patterns, which can be helpful signposts, or can blind you to potential actions. This is what makes the technology industry, and the changes technology brings to other industries, so exciting and interesting.
The following table summarizes the stages of disruption and the typical actions and reactions at each stage:
| Stage | Disruptor’s actions | Incumbent’s reactions |
| --- | --- | --- |
| Disruption of incumbent | Introduces new product with a distinct point of view, knowing it does not solve all the needs of the entire existing market, but advances the state of the art in technology and/or business. | New product or service is not relevant to existing customers or market, a.k.a. “deny.” |
| Rapid linear evolution | Proceeds to rapidly add features/capabilities, filling out the value proposition after initial traction with select early adopters. | Begins to compare full-featured product to new product and show deficiencies, a.k.a. “validate.” |
| Appealing convergence | Sees opportunity to acquire broader customer base by appealing to slow movers. Sees limitations of own new product and learns from what was done in the past, reflected in a new way. Potential risk is being leapfrogged by even newer technologies and business models as focus turns to “installed base” of incumbent. | Considers cramming some element of disruptive features into existing product line to sufficiently demonstrate attention to future trends while minimizing interruption of existing customers, a.k.a. “compete.” Potential risk is failing to see the true value or capabilities of disruptive products relative to the limitations of existing products. |
| Complete reimagination | Approaches a decision point because new entrants to the market can benefit from all your product has demonstrated, without embracing the legacy customers as done previously. Embrace legacy market more, or keep pushing forward? | Arguably too late to respond, and begins to define the new product as part of a new market, and existing product part of a larger, existing market, a.k.a. “retreat.” |
Considering these stages and reactions, there are really two key decision points to be tuned in to:
When you’re the incumbent, your key decision is to choose carefully what you view as disruptive or not. It is to the benefit of every competitor to claim they are disrupting your products and business. Creating this sort of chaos is something that causes untold consternation in a large organization. Unfortunately, there are no magic answers for the incumbent.
The business team needs to develop a keen understanding of the dynamics of competitive offerings, and know when a new model can offer more to customers and partners in a different way. More importantly, it must avoid an excessive attachment to today’s measures of success.
The technology and product team needs to maintain a clinical detachment from the existing body of work to evaluate if something new is better, while also avoiding the more common technology trap of being attracted to the next shiny object.
When you’re the disruptor, your key decision point is really when and if to embrace convergence. Once you make the choices — in terms of business model or product offering — to embrace the point of view of the incumbent, you stand to gain from the bridge to the existing base of customers.
Alternatively, you create the potential to lose big to the next disruptor who takes the best of what you offer and leapfrogs the convergence stage with a truly reimagined product. By bridging to the legacy, you also run the risk of focusing your business and product plans on the customers least likely to keep pushing you forward, or those least likely to be aggressive and growing organizations. You run the risk of looking backward more than forward.
For everyone, timing is everything. We often look at disruption in hindsight, and choose disruptive moments based on product availability (or lack thereof). In practice, products require time to conceive, iterate and execute, and different companies will work on these at different paces. Apple famously talked about the 10-year project that was the iPhone, with many gaps, and while the iPad appears to be a quick successor, it, too, was part of that odyssey. Sometimes a new product appears to be a response to a new entry, but in reality it was under development for perhaps the same amount of time as that other entry.
There are many examples of this path to disruption in technology businesses. While many seem “classic” today, the players at the time more often than not exhibited the actions and reactions described here.
As a social science, business does not lend itself to provable operational rules. As appealing as disruption theory might be, the context and actions of many parties create unique circumstances each and every time. There is no guarantee that new technologies and products will disrupt incumbents, just as there is no certainty that existing companies must be disrupted. Instead, product leaders look to patterns, and model their choices in an effort to create a new path.
Stages of Disruption In Practice
- Digital imaging. Mobile imaging reimagined a category that disrupted film (always available, low quality versus film), while converging on the historic form factors and usage of film cameras. In parallel, there is a wave of reimagination of digital imaging taking place that fundamentally changes imaging using light field technology, setting the stage for a potential leapfrog scenario.
- Retail purchasing. Web retailers disrupted physical retailers with selection, convenience, community, etc., ultimately converging on many elements of traditional retailers (physical retail presence, logistics, house brands).
- Travel booking. Online travel booking disrupted travel agents, then converged on historic models of aggregation and package deals.
- Portable music. From the Sony Walkman as a disruptor to the iPod and MP3 players, to mobile phones subsuming this functionality, and now to streaming playback, portable music has seen the disruptors get disrupted and incumbents seemingly stopped in their tracks over several generations. The change in scenarios enabled by changing technology infrastructure (increased storage, increased bandwidth, mobile bandwidth and more) has made this a very dynamic space.
- Urban transport. Ride sharing, car sharing, and more are disrupting traditional ownership of vehicles and taxi services, and are in the process of converging models (such as Uber adding UberX).
- Productivity. Tools such as Quip, Box, Haiku Deck, Lucidchart, and more are being pulled by customers beyond early adopters to be compatible with existing tools and usage patterns. In practice, these tools are currently iterating very rapidly along their self-defined disruptive path. Some might suggest that previous disruptors in the space (OpenOffice, Zoho, perhaps even Google Docs) chose to converge with the existing market too soon, as a strategic misstep.
- Movie viewing. Netflix and others, as part of cord-cutting, are disrupting movie viewing with low-priced, all-you-can-consume on-demand plans and by producing their own content. Previous disruptors such as HBO are working to provide streaming and similar services, while constrained by existing business models and relationships.
- Messaging/communications apps. SMS, which many consider disruptive to 2000-era IM, is being challenged by much richer interactions that disrupt the business models of carrier SMS and feature sets afforded by SMS.
- Network infrastructure. Software-defined networking and cloud computing are reimagining the category of networking infrastructure, with incumbent players attempting to benefit from these shifts in the needs of customers. Incumbents at different levels are looking to adopt the model, while some providers see it as fundamentally questioning their assumptions.
With the latest pivot for Blackberry, much has been said about disruption and what it can do to companies. The story, Inside the fall of BlackBerry: How the smartphone inventor failed to adapt, by Sean Silcoff, Jacquie McNish and Steve Ladurantaye in The Globe and Mail is a wonderful account.
Disruption has a couple of characteristics that make it fun to talk about. While it is happening, even with a chorus of people claiming it is happening, it is actually very difficult to see. After it has happened, the chorus of “told you so” grows even louder and more matter-of-fact. After the fact, everyone has a view of what could have been done to “prevent” disruption. Finally, descriptions of disruption tend to lose all of the details leading up to the failure, as things get characterized at the broad company level or by a single attribute (keyboard v. touch) when the situation is far more complex. Those nuances are what product folks deal with day to day, and where all the learning can be found.
Like many challenges in business, there’s no easy solution and no pattern to follow. The decision moments, technology changes, and business realities are all happening to people who have the same skills and backgrounds as the chorus, but with the real-world constraints of actually doing something about them.
The case of Blackberry is interesting because the breadth of disruptive forces is so great. It is not likely that a case like this will be seen again for a while–a case where a company gained such an incredible position of strength in technology and business over a relatively short time, then saw it essentially erased just as quickly.
I loved my Blackberry. The first time I used one was before they were released: because there was integration with Outlook, I was lucky enough to be using one some time in 1998 (I even read the entire DOJ filing against Microsoft on one while stopped on the tarmac at JFK). Using the original 850 was a moment when you immediately felt propelled into the future. Using one felt like the first time I saw a graphical interface (Alto) or a GPS. Upon using one you just knew our technology lives would be different.
What went wrong is almost exactly the opposite of what went right and that’s what makes this such an interesting story and unbelievably difficult challenge for those involved. Even today I look at what went on and think of how galactic the challenges were for that amazing group of people that transported us all to the future with one product.
When you build a product you make a lot of assumptions about the state of the art of technology, the best business practices, and potential customer usage/behavior. Any new product that is even a little bit revolutionary makes these choices at an instinctual level–no matter what news stories you read about research or surveys or whatever, I think we all know that there’s a certain gut feeling that comes into play.
This is especially the case for products that change our collective world view.
Whether made deliberately or not, these assumptions play a crucial role in how a product evolves over time. I’ve never seen a new product developed where the folks wrote down a long list of assumptions. I wouldn’t even know where to start–so many of them are not even thought through and represent just an engineer’s or product manager’s “state of the art”, “best practice”, or “this is what I know”.
It turns out these assumptions, implicit or explicit, become your competitive advantage and allow you to take the market by storm.
But then along come technology advances, business model changes, or new customer behaviors and seemingly overnight your assumptions are invalidated.
In a relatively simple product (note, no product is simple to the folks making it) these assumptions might all be within the domain. Christensen famously studied the early days of the disk drive industry. To many of us these assumptions are all contained within one system or component and it is hard to see how disruption could take hold. Fast forward and we just assume solid-state storage, yet even this transition as obvious as it is to us, requires a whole new world view for people who engineer spinning disks.
In a complex product like the entirety of the Blackberry experience there are assumptions that cross hardware, software, communications networks, channel relationships, business models and more. When you bring all these together into a single picture one realizes the enormity of what was accomplished.
It is instructive to consider the many assumptions or ingredients of Blackberry’s success that go beyond the popular “keyboard v. touch”. In thinking about my own experience with the product, the following list captures just a few things that were essentially revisited by the iPhone, from the perspective of the Blackberry device/team:
- Keyboard to touch. The most visible difference and most easily debated is this change. From crackberry thumbs to contests over who could type faster, your keyboard was clearly a major innovation. The move to touch would challenge you in technology, behavior, and more.
- Small (b&w) screens to large color. Closely connected with the shift to touch was a change in perspective that consuming information on a bigger screen would trump the use of the real estate for (arguably) more efficient input. Your whole notion of industrial design, supply chain, OS, and more would be challenged. As an aside, the power consumption of large screens immediately seemed like a non-starter to a team insanely focused on battery life.
- GPRS to 3G then LTE. Your heritage in radios, starting with the pager network, placed a premium on using the lowest power/bandwidth radio and focusing on efficiency therein. The iPhone, while 2G early, quickly turned around a game changing 3G device. You had been almost dragged into using the newer higher powered radios because your focus had been to treat radio usage as a premium resource.
- Minimize bandwidth to assume bandwidth is free. Your focus on reducing bytes over the wire was met with a device that just assumed bytes would be “free” or at least easily purchased. Many of the early comments on the iPhone focused on this but few assumed the way the communications companies would respond to an appetite for bandwidth. Imagine thinking how sloppy the iPhone was with bandwidth usage and how fast the battery would drain. Assuming a specific resource is high cost is often a path to disruption when someone makes a different assumption.
- No general web support v. general web support. Despite demand, the Blackberry avoided offering generalized web browsing support. The partnership with carriers also precluded this, given their concern about network responsiveness and capacity. Again, few would have assumed a network buildout that would support mobile browsing the way it does today. The disruptor had the advantage of growing slowly (relatively) compared to flipping a switch on a giant installed base.
- WiFi as “present” to nearly ubiquitous. The physics of WiFi coverage (along with power consumption, chip surface area and more) assumed WiFi would be expensive and hard to find. Even with whole-city WiFi projects in the early 2000s, people didn’t see WiFi as a big part of the solution. Few thought about the presence of WiFi at home and the new usage scenarios it enabled, or that every urban setting, hotel, airport, and more would have WiFi. Even the carriers built out WiFi to offload traffic and include it for free in their plans. The elegant and seamless integration of WiFi on the iPhone became a quick advantage.
- Device update/mgmt by tethering to off air. Blackberry required tethering for some routine operations and for many the only way to integrate corporate mail was to keep a PC running all the time. The PC was an integral part of the Blackberry experience for many. While the iPhone was tethered for music and videos, the presence of WiFi and march towards PC-free experiences was an early assumption in the architecture that just took time to play out.
- Business to consumer. Your Blackberry was clearly a business device. Through much of the period of high success consumers flocked to devices like the SideKick. While there was some consumer success, you anchored in business scenarios from Exchange and Notes integration to network security. The iPhone comes along and out of the gate is aimed at consumers with a camera, MMS, and more. This disruption hits at the hardware, the software, the service integration, and even how the device is sold at carriers.
- Data center based service to broad set of cloud based services. Your connection to the enterprise was anchored in a server that businesses operated. This was a significant business upside as well as a key part of the value proposition for business. This server became a source for valuable business information propagated to the Blackberry (rather than use the web). The absence of an iPhone server seemed like a huge weakness, yet in fact it turned into an asset in terms of spreading the device. Instead, the iPhone relied on the web (and subsequently apps) to deliver services, rather than programmed and curated services.
- Deep channel partnership/revenue sharing to somewhat tense relationship. By most accounts, your Blackberry business was an incredible win-win with telcos around the world. Story after story talked of the amazing partnerships between carriers and Blackberry. At the same time, stories (and blame game) between Apple and AT&T in the US became somewhat legendary. Yet even with this tension, the iPhone was bringing very valuable customers to AT&T and unseating Blackberry customers.
- Ubiquitous channel presence to exclusives. Your global partnership strength was unmatched and yet disrupted. The iPhone launched with single carriers in limited markets, on purpose. Many viewed that as a liability, including Blackberry. Yet in hindsight this only increased the value to the selected partners and created demand from other potential partners (even with the tension).
- Revenue sharing to data plan. One of the main assets that was mostly invisible to consumers was the revenue to Blackberry for each device on the network. This was because Blackberry was running a secure email service as a major anchor of the offering. Most thought no one was going to give up this revenue, including the carriers’ ability to up-charge for your Blackberry. Few saw a transition to a heavily subsidized business model with high-priced data plans purchased by consumers.
These are just a few and any one of these is probably debatable. The point is really the breadth of changes the iPhone introduced to the Blackberry offering and roadmap. Some of these are assumptions about the technology, some about the business model, some about the ecosystem, some about physics even!
Imagine you’ve just changed the world and everything you did to change the world–your entire world view–has been upended by a new product. Now imagine that the new product is not universally applauded, and many folks say not only that your product is better and more useful, but that the new product is simply inferior.
Put yourself in those shoes…
Disruption happens when a new product comes along and changes the underlying assumptions of the incumbent, as we all know.
Incumbent products and businesses respond by often downplaying the impact of a particular feature or offering. And more often than folks might notice, disruption doesn’t happen so easily. In practice, established businesses and products can withstand a few perturbations to their offering. Products can be rearchitected. Prices can be changed. Features can be added.
What happens, though, when nearly every assumption is challenged? What you see is a complete redefinition of your entire company. Such a redefinition is both hard to see happening in real time and even harder to acknowledge. Even in the case of Blackberry there was a time window of perhaps two years to respond–is that really enough time to re-engineer everything about your product, company, and business?
One way to look at this case is that disruption rarely happens from a single vector or attribute, even though the chorus might claim X disrupts Y because of price or a single feature, for example. We can see this in the case of something like desktop Linux–being lower priced and open source are interesting attributes, but it is fair to say that disruption never really happened to the degree that might have been claimed early on.
However, if you look at Linux in the data center, the use of Linux for proprietary data center architectures and services, combined with the benefit of open source and low price, brought a much more powerful disruptive capability.
One might take away from this case and other examples that the disruption to watch out for the most is the one that combines multiple elements of the traditional marketing mix: product, price, place, promotion. When considering these dimensions, it is also worth understanding the full breadth of assumptions, both implicit and explicit, in your product and business when defending against disruption. Likewise, if you’re intending to disrupt, you want to consider the multiple dimensions of your approach in order to bypass the intrinsic defenses of incumbents.
It is not difficult to talk about disruption in our industry. As product and business leaders it is instructive to dive into a case of disruption and consider not just all the factors that contributed but how would you respond personally. Could you really lead a team through the process of creating a product that literally inverted almost every business and technology assumption that created $80B or so in market cap over a 10 year period?
In The Sun Also Rises, Hemingway wrote:
“How did you go bankrupt?” “Two ways. Gradually, then suddenly.”
That is how disruption happens.
Anyone worth their salt in product development knows that listening to customers through any and all means possible is the means to innovation. Wait a minute, anyone worth their salt in product development knows that listening to customers leads to a faster horse.
Deciding your own product choices within these varying perspectives is perhaps the seminal challenge in product development, tech products or otherwise. This truly is a tyranny of or, but one in which changing the rules of the game is the very objective.
This discussion is such a common dialog in the halls of HBS, as well as tech companies everywhere, that it should probably be a numbered conversation (for this blog, let’s call it Conversation #38 for shorthand—disrupt or die).
For a recent discussion about why it is so difficult for large companies to face changes in the marketplace, see this post Why Corporate Giants Fail to Change.
“Disrupt or die” or “disrupt and die”?
Failure to evolve a product as technologies change or as customer scenarios change is sure to lead to obsolescence or elimination from the marketplace. It is difficult to go a day in tech product development without hearing about technology disruption or “innovator’s dilemma”. The biggest fear we all have in tech is failing to keep up with the changing landscape of technologies and customers, and how those intersect.
At the same time, hopefully we all get to that lucky moment when our product is being used actively by customers who are paying. We’re in that feedback loop. We are improving the product, more is being sold, and we’re on a roll.
That’s when innovation over time looks like this:
In this case as time progresses the product improves in a fairly linear way. Listening to customers becomes a critical skill of the product team. Product improvements are touted as “listening to customers” and things seem to go well. This predictability is comforting for the business and for customers.
That is, until one day when needs change or perhaps in addition a new product from a competitor is released. Seemingly out of nowhere the great feedback loop we had looks like it won’t help. If we’re fortunate enough to be in tune to changing dynamics outside our core (and growing) customer base we have time to react and change our own product’s trajectory.
That’s when innovation looks like this:
This is a time when the market is receptive to a different point of view, and a different product — one that redefines, or reimagines, the category. Sometimes customers don’t even realize they are making a category choice, but all of a sudden they are working differently. People just have stuff to get done and find tools that help.
We’re faced with what seems like an obvious choice—adjust the product feature set and focus to keep up with the new needs of customers. Failing to do so risks losing out on new sales, depth usage, or even marginalization. Of course, “features/capabilities” covers a long list that can include price, performance, battery life, reliability, simplicity, APIs, different integration points or service connections, and any other attributes that might be used by a new entrant to deliver a unique point of view around a similar scenario.
Many folks will be quick to point out that such is only the case if a new product is a “substitute” for the product people are newly excited about. There is truth to this. But there is also a reality shown time and time again which gets to the heart of tech bets. It is almost always the case that a new product that is “adjacent” to your product has some elements of more expensive, more complex in some dimensions, less functional, or less than ideal. Then what seems like an obvious choice, which is to adjust your own product, quickly looks like a fool’s bet. Why would you chase an inferior product? Why go after something that can’t really replace you?
The examples of this are too numerous to count. The iPhone famously sucked at making phone calls (a case where the category of “mobile phone” was under reinvention and making calls turned out to be less important). Solid-state storage is famously more expensive and lower capacity than spindle drives (a case where low power, light weight, and small size are more valued in mobile devices). Of course tablets are famously unable to provide apps to replace some common professional PC experiences (a case where the value of mobility, all-day battery life, and always-on connectivity seems more valued than a set of platform capabilities). Even within a large organization we can see how limited-feature-set cloud storage products are being used actively by employees as “substitutes” for enterprise portals and file shares (a case where cross-organization sharing, internet availability, and mobile access are more valued than the full enterprise feature set). The list goes on and on.
As product managers we all wish it was such a simple choice when we face these situations. Simply leapfrog the limited feature set product with some features on our profitable product. Unfortunately, not every new product that might compete with us is going to disrupt us. So in addition to facing the challenges of evolving the product, we also have to decide which competitors to go after. Often it takes several different attempts by competitive products to offer just enough in the way of new / different approaches to begin to impact an established product.
Consider, for example, how much effort the Linux community put into desktop Linux. While this was going on, Android and iOS were developed and offered a completely different approach that brought new scenarios to life. A good lesson is that a head-on alternative will quite often struggle, and pursuing one might even result in missing other disruptive technologies. Having a unique point of view is pretty important.
The reality of this situation is that it is only apparent in hindsight. While it is going on the changes are so small, the product features so minimal, and the base of the customers choosing a new path so narrow that you don’t realize what is going on. In fact, the new product is also on an incremental innovation path, having attained a small amount of traction, and that incremental innovation rapidly accumulates. There is a tipping point.
That is what makes acting during such a “crisis” so urgent. Since no one is first all the time (almost by definition when you’re the leader), deciding when and how to enter a space is the critical decision point. The irony is that the urgency to act comes at a time when it appears from the inside to be the least urgent.
Choosing to innovate means accepting the challenges
We’ve looked at the landscape and we’ve decided as a team that our own product needs to change course. There is a real risk that our product (business) will be marginalized by a new entry adjacent to us.
We get together and we come up with the features and design to go after these new scenarios and capabilities.
The challenge is that some of what we need to do involves changing course—this is by definition what is going on. You’re Apple and you decide that making phone calls is not the number 1 feature of your new mobile phone or your new tablet won’t run OS X apps. Those are product challenges. You also might face all sorts of challenges in pricing, positioning, and all the things that come from having a stable business model. For example, your competitor offers a free substitute for what you are selling.
The problem is your existing customers have become conditioned to expect improvements along the path we were traveling together. Worse, they are by definition not expecting a “different” product in lieu of a new version of their favorite product. These customers have built up not just expectations, but workflows, extensions, and whole jobs around your product.
But this is not about your existing and best customers, no matter how many there are; it is about the foundation of your product shifting while you’re seeing new customers use a new product, or existing customers use your product less and less.
Moving forward, the product gets built and it is time to get it into market for some testing, or maybe you just release it.
All that work your marketing team has done over the years to establish what it means to “win” in the space that you were winning is now used against you. All the “criteria” you established against every competitor that came along are used to show that the new product is not a winning product. Except it is not winning in the old way. What you’ve done is become your own worst enemy.
But even then, the new way appears to be the less-than-optimal way—more expensive, fewer features, more clicks, or simply not the same at doing the things the product used to do.
The early adopters or influential users (an old term in the literature is “IEU”, or sometimes “lead user”) are immediately taken aback by the change in direction. The workflows, keystroke memory, add-ins, and more are just not the same or no longer optimal–the new scenarios or capabilities get no credit when the old ones work differently. Worse, they project their views across all customer segments. “I can’t figure this out, so imagine how hard it will be for my parents” or “this will never be acceptable in the enterprise” are common refrains in tech.
This happens no matter who a product is geared towards or how complex the product was in the first place. It is not about what the product does, but about the change in how it does the things people were familiar with. This could be in user experience, pricing, performance, platform requirements or more.
You’re clearly faced with a set of choices that just don’t look good. In The Lean Startup, Eric Ries talks in detail about the transition from early users of a new product to a wider audience. In this context, what happens is that the early users expect (or tolerate) a very different set of features and have very different expectations about what is difficult or easy. His conclusion is that it is painful to make the transition, but at some point your learning is complete and it is time to restart the process of learning by focusing on the broader set of customers.
In evolving an existing product, the usage of a pre-release is going to look a lot like the usage of the current release. The telemetry proves this for you, just to make this an even more brutal challenge. In addition, because of the years of effort the enthusiasts put into doing things a certain way and all that work establishing criteria for how a product should work, the obvious thing to do when testing a new release is to try everything the old release did, compare it to the old product (the one you are changing the course of), and then maybe try some new stuff. This looks a lot like what Eric describes for startups. For products in market, the moment is pretty much like the startup moment, since your new product is sort of a startup, but on a new trajectory.
Remember what brought us here, two things:
- The environment of usage or business around the product was changing, and the team made a bet that those changes were material. With enough activity in the market, someone will always argue that this change is different and that the old and new will coexist and not cannibalize each other (tell that to PalmPilot owners who swore phones would be separate from calendar and contacts, or GPS makers who believe in stand-alone units, or…).
- A reminder that if Henry Ford had asked customers what they wanted from a car, they would have said a faster horse. The market was conditioned to ask for and/or expect improvements along a certain trajectory, and no matter what, you are changing that trajectory.
All the data is flowing in that shows the new product is not the old product on the old path. Not every customer is interested in doing new things, especially the influential testers who generally focus on the existing ways of doing things, have domain expertise, and are often the most connected to the existing product and all that it encompasses. There is an irony in that for tech these customers are also the most tech-savvy.
Pretty quickly, listening to customers is looking exceedingly difficult.
If you listen to customers (and vector back to the previous path in some way: undo, product modes, multiple products/SKUs, etc.) you will probably cede the market to the new entrants or at least give them more precious time. If technology product history is any guide, pundits will declare you will be roadkill in fairly short order as you lack a strategic response. There’s a good chance your influential customers will rejoice as they can go back and do what they always did. You will then be left without an answer for what comes next for your declining usage patterns.
If you don’t listen to customers (and stick to your guns) you are going to “alienate” folks and cede the market to someone who listens. If technology product history is any guide, pundits will declare that your new product is not resonating with the core audience. Pundits will also declare that you are stubborn and not listening to customers.
All of this is monumentally difficult simply because you had a successful product. Such is the price of success. Disrupting is never easy, but it is easier if you have nothing to lose.
Many folks will be quick to say that new products are fine but they should just have the old product’s way of doing things. This can seem like asking for a Prius with a switch to turn off the battery (my 2002 Prius came with a training DVD, parking attendant reference card, and more!). There are many challenges with the “side by side” approach. The most apparent is that it only delays the change (meaning it delays your entry into the new market or your meeting of new scenarios). Perhaps in a world of cloud services, where you have less of a “choice” in the change, this is more routine, but the operational costs are real. In client code/apps the challenge quickly becomes doing everything twice. The more complex the changes are, the more costly this becomes. In software nothing is free.
Product development is a social science.
People and time
In this “disrupt or die” conversation there are a few factors that are not often discussed in detail when all the debates happen.
First, people adapt. The assumption, especially about complex tech products, is that people have difficulty changing or lack the desire to change. While you can always overshoot the learning people can or are willing to do, people are the most adaptable part of a system. One way to think about this is that every successful product in use today, those that we all take for granted, was introduced to a customer base that had to change behavior. We would not be where we are today without changing and adapting. If one reflects, the friction of change (whether for the people who are customers or the people running a business) is apparent in every transition we have made. Even today’s tablets are evidence of this. Some say they are still for “media consumption” and others say they are “productivity tools”. But behind the scenes, people (and developers) are rapidly and actively changing and adapting to the capabilities of tablets because the value proposition is so significantly improved in some dimensions.
Second, time matters. Change is only relative to knowledge people have at a moment in time and the customers you have at the moment. New people are entering the customer base all the time and there is a renewal in skills, scenarios, and usage patterns. Five years ago almost no one used a touch screen for very much. Today, touch is a universally accepted (and expected) input method. The customer base has adapted and also renewed around touch. Universities are the world’s experts at understanding this notion of renewal. They know that any change to policy at a university is met with student resistance (especially in the spring). They also know that next year, 25% of the “customer base” will be replaced. And in 3 summers all the students on campus will only know the new way. One could call that cynical. One could also call that practical.
Finally, time means that major product change, disruption, is always a multi-step process. Whether you make a bet to build a new product that disrupts the market dynamics or change an existing product that disrupts your own product, it rarely happens in one step. Phones added copy/paste and APIs and even got better at the basics. The pivot is the tool of the new endeavor until there is some traction. Feedback, refinement, and balancing the need to move to a new space with the need to satisfy the installed base are the tools of the established product “pivoting” in response to a changed world. It takes time and iteration–just the same way it took time and iteration to get to the first summit. Never lose sight of the fact that disrupting is also product development and all the challenges that come from that remain–just because you’re disrupting does not mean what you do will be perfect–but that’s a given we all work with all the time. We always operate knowing there is more change to come, improvements and fixes, as we all learn by shipping.
These factors almost always demonstrate, at least in the medium term, that disruption is not synonymous with elimination. Those championing disruption often over-estimate progress towards elimination in the short term, though history has shown the long term to be fairly predictable. Black cars are still popular. They just aren’t the only cars.
Product development choices are based on social science. There is never a right answer. Context is everything. You cannot A/B test your way to big bets or decisions about technology disruption. That’s what makes all of this so fun!!
Go change the rules of the game!
Note. I believe “disrupt or die” is the name of a highly-regarded management class at General Electric’s management school.
A post by Alex Limi of Mozilla, “Checkboxes that kill your product”, is a fascinating read for anyone in the position to choose or implement the feature set of a software project. What is fascinating is of course the transparency and admission of the complexity of a modern software product. Along with this is a bit of a realization that those making the choices in a product are in some ways the cause of the challenge. Things are not quite so simple, but they are also not so difficult.
By now we are all familiar with the notion that the best designs are the simplest and most focused designs. Personified by Apple and in particular the words of Steve Jobs, so much of what makes good products is distilling them down to their essence. So much of what makes a good product line is only shipping the best products, the smallest set of products. So much has been written, including even in Smithsonian Magazine, about the love of simplicity that inspired and is expressed in the design language of Apple’s products based on a long history of design.
It is exceedingly difficult to argue against a simply designed product…so long as it does what you want or when it does more than competitive products.
In fact it is so difficult to argue against simplicity that this post won’t even attempt to. Let’s state emphatically that software should always do only what you need it to do, with the fewest number of steps, and least potential for errors due to complex choices and options.
On the other hand, good luck with that.
Anyone can look at any software product (or web site or hardware product) and remove things, decide things are not valuable to “anyone” or simply find a new way to prioritize, sort, or display functionality, content, capability. That’s really easy for anyone who can use a product to do. It is laudable when designers look back at their own products and reflect on the choices and rationale behind what, even with the best intentions, became undesired complexity, or paperclips.
The easiest type of simplicity is the kind that you place on a product after it is complete; hindsight is rather good when it comes to evaluating simplicity. This is simplicity by editing. You look at a product and point out the complexity and assume that it is there because someone made some poor assumptions, could not decide, didn’t understand customers, or a whole host of other reasons.
In fact, many choices in products that result in complexity are there because of deliberate choices with a known cost. Having options and checkboxes costs code, and code costs time in development and testing. Adding buttons, hinges, or ports is expensive in materials, weight, or even battery life. Yet designers add these anyway. While data is not a substitute for strategy, looking at usage data and seeing that nearly every bit of surface area is exercised validates these choices (one could go through Limi’s post and reverse engineer the rationale and point to the reasons for baggage).
It is enormously difficult in practice to design something with simplicity in mind and express that in a product. It is an order of magnitude more difficult than that to maintain that over time as you hope for your asset to remain competitive and state of the art.
Software is a unique product in that the cost of complexity is rarely carried by the customer. The marginal cost for more code is effectively zero. While you can have lots of options, you can also effectively hide them all and not present them front and center. While you can have extra code, it is entirely possible to keep it out of the execution path if you do the work. While you can inherit the combinatorics of a complex test matrix, you can use data and equivalence classing to make good engineering assumptions about what will really matter. Because of these mitigations, software is especially difficult to design simply and maintain in a simple state even if you accomplish a simple design once.
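The test-matrix point can be made concrete with a small sketch. This is purely an illustration, not any team’s actual process; the option names and the assumption that the two groups of options do not interact are hypothetical:

```python
from itertools import product

# Hypothetical boolean options ("checkboxes") in a product.
OPTIONS = ["enable_new_feature", "legacy_compat", "autosave", "dark_mode"]

def full_test_matrix(options):
    """Naively, every checkbox doubles the matrix: 2**n configurations."""
    return list(product([False, True], repeat=len(options)))

def classed_test_matrix(classes):
    """Equivalence classing: if groups of options are known not to
    interact, test each group exhaustively while holding the others at
    defaults -- a sum of 2**k per class instead of 2**n overall."""
    return sum(2 ** len(group) for group in classes)

print(len(full_test_matrix(OPTIONS)))                   # 2**4 = 16
print(classed_test_matrix([OPTIONS[:2], OPTIONS[2:]]))  # 4 + 4 = 8
```

Each naive checkbox doubles the matrix; equivalence classing only helps to the extent the independence assumption actually holds, which is exactly the kind of engineering judgment described above.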
Here are seven reasons why simplicity in software design is incredibly difficult:
- New feature: enable/disable. You add a new feature to your product but are worried about the acceptance of the feature. Perhaps because your new feature is an incredibly innovative, but different, way to do something everyone does or perhaps because your new feature is based on a technology that you know has limits, you decide to add the checkbox. The easy thing to do is to just add a “do you want to use this” or the first time you see the feature in action you offer up an option to “keep doing this”. Of course you also have to maintain a place to undo that choice or offer it again. Play this out over the next release and evolution of the feature and you can see where this leads.
- New feature: can’t decide. You add a new feature and it clearly has a modality where some people think it should go left and others think it should go right (or scroll up or down, for example). So of course the easy thing to do is just add an option to allow people to choose. Play this out over time and imagine what happens if you decide to add a new way or you enhance one of left or right and you can see the combinatorics exploding right before your eyes.
- New way of doing something: enable compatibility. You add a new way to do something to your product as it evolves. Just to be safe you think it would be best to also have the old way of doing something stick around so you add back that option—of course software makes this easy because you just leave the old code around. But it isn’t so easy because you’re also adding new features that rely on the new foundation, so do you add those twice? Play this out as the new way of doing something evolves and people start to ask to evolve the old thing as well and the tyranny of options gets to you quickly.
- Remove feature: re-enable. As your product evolves you realize that a feature is no longer valid, useful, or comes at too high a cost (in complexity, data center operations, etc.) to maintain so you decide to remove it. Just to be safe you think it is a good idea (or customers require it to be a good idea) to leave in an option to re-enable that old feature. No big deal. Of course it is important to do this because telemetry shows that some people used the feature (no feature is used by zero people). Play this out and you have to ask yourself if you can ever really remove a feature, even if there is a material cost to the overall system for it to be there.
- Environmental choice: customize. Your product is used in a wide variety of environments from consumer to enterprise, desktop to mobile, managed to unmanaged, private network to internet, first time to experienced people, developers or end-users, and so on. The remarkable thing about software is the ability to dynamically adjust itself to a different usage style with the simple addition of some code and customization. The depth and breadth of this customization potential makes for a remarkably sticky and useful product so adding these customizations seems like a significant asset. Play this out over time and the combinatorics can overwhelm even the largest of IT administrators or test managers. Even if you do the work to design the use of these customizations so they are simple, the ability to evolve your designs over time with these constraints itself becomes a constraint—one that is likely highly valued by a set of customers.
- Personality: customize. You design a product with a personality that reflects the design language across every aspect of the product from user interface, documentation, packaging, web site, branding and logos, and more. Yet no matter what you do, a modern product should also reflect the personality of the owner or human using it. You see no problem offering some set of options for this (setting some background or color choices), but of course over time as your product evolves there is a constant demand for more of these. At some extremes you have requests to re-skin the entire product and yet no matter what you try to do it might never be enough customization. Play this out over time and you face challenges in evolving your own personality as it needs to incorporate customizations that might not make sense anymore. Personality starts to look a lot like features with code not just data.
- Competitive: just in case. The above design choices reflect complexity added during the development of the product. It is also possible to make choices that do not arise out of your own choices, but out of choices that come from responding to the market. Your main competitor takes a different approach to something you offer and markets the heck out of it. You get a lot of pressure to offer the feature that same way. The natural reaction is to put in a quick checkbox that renders some element of the UI your way as well as competitor’s way. You battle it out, but rest assured you have the objection-handler in place so sales and marketing don’t stress. Play this out and you can see how these quick checkboxes turn into features you have to design around over time.
Of course we all have our favorite illustrations of each of these. You can imagine these at a very gross level or even at a very fine level. The specifics don’t really matter because each of us can see immediately when we’re hitting up against a choice like this. Play the design choice out over the evolution of the product/feature and see where it goes.
It is important to see that at the time these are not dumb motivations. These are all legitimate product design approaches and tradeoffs. Another reality is that while you’re designing something simple, you often know it will definitely not appeal to some set of customers. You can take a bet on convincing people or you can be a bit safer. Product development is uncertain and only hindsight is 20/20. For every successful product that is simple, there are a lot of simplicity approaches that did not pan out over time. Minimal can be simple, or just a minimal number of customers.
What can you do?
Software is definitely in a new era. The era of excess configurability, or even infinite customization, is behind us. The desire for secure, robust products with long battery life, along with incredible innovations in hardware that bring so many peripherals on board, means that designers can finally look at the full package of software+hardware through a different lens.
If you draw an analogy to the evolution of the automobile, then one might see where the software world is today. And because we see software and hardware inextricably connected today, let’s say that this applies to the entire package of the device in your hand or bag.
In the golden era, as some would say, of automobiles it was the height of hip to know the insides of your car. A fun after school project for a guy in high school would be to head home, pop the hood on the Chevy, and tune the engine. Extra money earned on the side would go to custom parts, tools, and tweaking your wheels. You expressed yourself through your car.
Then along came the innovations in quality and reliability from car makers in the ’80s. They took a different approach.
When I was 16 my father took me to look at cars. We stopped by the dealer and during the pitch he asked the salesman to pop open the hood. I am sure the look on my face was priceless. I had literally no idea what to look for or what to see. Turns out my father didn’t either. Electronic fuel injection, power steering, and a whole host of other things had replaced the analog cars he knew and loved (and currently drove). Times had changed.
I have not looked under the hood of a car since. My expectation of a car is that it just works. I don’t open the hood. I don’t service it myself. I don’t replace parts myself. I can adjust the seats, set the radio presets, and put an Om sticker on the back. I want the car’s design to express my personality, but I don’t want to spend my time and energy worrying if I broke the car doing so. Technology has advanced to the point where popping the hood on a car is no longer a hobby. The reliability of being able to drive a 2002 Prius for over 100,000 miles without worrying comes with fewer options and customizations, but I got a car that cost less to operate, took less time as an owner to maintain, and was safer in every way. Sold.
Today’s sealed Ultrabooks and tablets, app stores, and even signed drivers represent this evolution. Parts that don’t wear out, peripherals that you don’t need to tune or adjust at the software level, thin, light, robust, reliable. Sold.
Approach – Point of View
How can you approach this in the products you design? As you can imagine there is a balance. The balance is between your point of view and making sure you truly meet customer needs.
A point of view has to be one of the best tools of design. A point of view is the reason for being, the essence, the very nature of a product. In a world where just about every product (but not all) is made of similar ingredients and solves problems that can kind-of, sort-of be solved in other ways, what distinguishes one product from another is a unique point of view that is followed through in the design.
A point of view says who the product is for and why. A point of view says the benefits of a product. A point of view says why this product is better, faster, and differentiated in the marketplace.
A point of view also guides you in deciding how to be simple. Simplicity comes from adhering to your point of view. If you have a clear point of view then simplicity follows from that. Is something consistent with your point of view? If so then it sounds like a candidate. If not, then why are you considering it? Is your point of view changing (it can, but be careful)?
But we don’t all have the luxury of declaring a point of view and sticking to it. You can share your point of view with customers, or potential customers. You can articulate your point of view to the market. You can also adapt and change. The market also adapts and changes.
That’s why product development is so exciting and interesting. The answers are not so simple and the journey is complex, even if the goal is product simplicity.
PS: If you’re interested in a Harvard Business School teaching case on this topic, perhaps check out http://www.hbs.edu/faculty/Pages/item.aspx?num=34113 (Microsoft Office 2007). The case itself is paid content; I receive no compensation from this link.
Access to rich usage data is a defining element of modern product development. From cars, to services, to communications, the presence of data showing how, why, and when products are used is informing how products are built and evolve. To those developing products, data is an essential ingredient to the process. But sometimes, choices informed by data cause a bit of an uproar when there isn’t uniform agreement.
The front page of the Financial Times juxtaposed two data-driven stories that show just how tricky the role of data can be. For Veronica Mars fans (myself included), this past week was an incredible success as a Kickstarter project raised millions to fund a full length movie. Talk about data speaking loud and clear.
This same week Google announced the end of life of Google Reader, and as you can see from the headline there was some controversy (it is noted with irony that the headline points out that the twittersphere is in a meltdown). For all the 50,000 or more folks happy with the VM movie, it seems at least that many were unhappy about Google Reader.
The role of data in product development is not without controversy. In today’s world with abundant information from product development teams and analysis of that data, there is ample room to debate and dissect choices. A few common arguments around the use of data include:
- Representation. No data can represent all people using (or who will use) a product. So who was represented in the data?
- Forward or backward looking. When looking at product usage, the data looks at how the product was used but not how it will be used down the road (assuming changes). Is the data justifying the choice or informing the choice?
- Contextual. The data depends on context in which it is collected, so if the user interface is sub-optimal or drives a certain pattern the data does not necessarily represent a valid conclusion. Did the data consider that the collection was itself flawed?
- Counter-intuitive. The data is viewed as counter-intuitive and does not follow the conventional wisdom, so something must be wrong. How could the data overlook what is obvious?
- Causation or correlation. With data you can see an end state, but it is not always clear what caused the end-state. If something is used a lot, crashes a lot, or is avoided there might be many reasons, most not readily apparent or at least open to debate, that cause that end-state. Is the end-state coincident with the some variables or do those variables cause the end-state?
When a product makes a seemingly unpopular change, such as Google did with Reader, one or more of these arguments are brought forward in the discussion of the choice.
In the case of Reader, the official blog stated “usage of Google Reader has declined”. While it does seem obvious that data informed the choice, if one does not agree with the choice there is ample opportunity to dispute the conclusion. Was the usage in absolute or relative decline? Were specific (presumably anonymous) users slowing their usage? What about the usage in a particular customer segment? The counter-intuitive aspect of the data showed as well: most dialog pointed out strong first-party usage among tech enthusiasts and reporters.
The candid disclosure of the use of data offered some transparency to the choice, but not complete transparency. Could more data be provided? Would that change how the product development choice was received?
Conversely, no one is disputing the success of the VM Kickstarter project (making a movie is similar to product development). The data is clear: there is a high level of demand for the movie. I know that fits my intuition as a fan of the series. The fact that this data came via a popular (and transparent) web service only seems to validate our intuition. In this case, the data is seemingly solid.
While these are just two examples, they happened in the same week and show the challenges of data, transparency, and product development choices. While data can inform choices, no one is saying it is the only way to make a choice or that those making products should only defer to data. Product development is a complex mix of science and intuition. Data represents part of that mix, but not the whole of it.
Data is not strategy
Ultimately, data contributes to product development but does not replace the very unscientific notion of what to build and why. That’s the art of product development and how it intersects with business strategy. This follows from the fact that products are developed today for use in the future. The future is uncertain for your own product’s evolution, but all around you is uncertainty. Data is an input to help you define a strategy or modify it, but cannot replace what is inherently the uncertain side of innovation.
In Eric Ries’ Lean Startup book (or http://en.wikipedia.org/wiki/Lean_Startup), there is an interesting discussion on how the role of data can contribute to an early stage product. While the anecdote and approach are described in the context of a project very close to Eric, I think we can all see parallels to other products as well. My intent is not to restate or recast the book, but to just reflect upon it a bit.
One part of developing a new product, as described, is to develop a minimum viable product (MVP) that does not reflect the final product but is just enough of the product to collect the maximum validated data about potential customers.
An interesting point in the description is how often the people that will use this early version of the product are enthusiasts or those especially motivated and forgiving about a product while under development. The tricky user interface, complex sign-up, or missing error conditions and things like that might not matter to these folks, for example.
Not every product ultimately targets those customers—they are not super large in number relative to the broad consumer world, for example. As you learn and collect validated data about your product strategy you will reach a critical point where you essentially abandon or turn away from the focus on enthusiasts and tilt towards a potentially larger customer base.
This is where your strategy comes into play. You have iterated and validated. Presumably these early users of your product have started to use or like what you have or at least pointed you in a direction. Then you’ll take a significant turn or change—maybe the business model will change, maybe there will be different features, maybe the user interface will be different. This is all part of taking the learning and turning it into a product and business. The data informed these choices, but you did not just follow it blindly. Your product will reflect but not implement your MVP, usually.
But with these choices there will probably not be universal agreement, because even with the best validated data there can be different approaches to implementing the learning.
The use of data is critical to modern product development. Every product of every kind should have mechanisms in place to learn from how the product is used in the real world (note, this is assuming very appropriate policies regarding the collection and usage of this data). This is not just about initial development, but evolution and maturing of the product as well.
If you’re going to use data to design and develop your product, and also talk about how the product was designed and developed, it is worth considering how you bring transparency to the process. Too often, both within an organization and outside, data is used conveniently to support or make a point. Why not provide some level of detail so that those who disagree with your approach can at least better understand the choice, especially those that follow along and care deeply. Some ideas:
- Provide absolute numbers for the size of the data set to avoid discussions about sample size.
- Provide a sense of statistical significance across customer types (was the data collected in one country, one type of device, etc.).
- Provide the opportunity for additional follow up discussion or other queries based on dialog.
- Overlay the strategic or social science choices you are making in addition to the data that informed the choices.
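The first two suggestions can be sketched together: report the usage rate alongside the absolute sample size and a normal-approximation confidence interval. The function and numbers below are hypothetical illustrations, not a description of any real telemetry system, and a real analysis would also have to account for sampling bias, not just sample size:

```python
import math

def proportion_ci(successes, n, z=1.96):
    """95% normal-approximation confidence interval for a usage rate."""
    p = successes / n
    margin = z * math.sqrt(p * (1 - p) / n)
    return (p - margin, p + margin)

# Hypothetical: 1,200 of 40,000 sampled users touched the feature.
low, high = proportion_ci(1200, 40000)
print(f"usage rate: 3.0% (n=40,000), 95% CI {low:.2%} to {high:.2%}")
```

Publishing the absolute n alongside the rate heads off the sample-size debate, and the interval makes clear how much (or how little) precision the data actually supports.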
Transparency might not remove controversies but might be a useful tool to have an open dialog with those that share your passion for the product.
Using data as part of product development will never be free of debate or even controversy. Products today are used across millions of people worldwide in millions of different ways, yet are often used through one software experience. What works in one context doesn’t always work in another. What works for one type of customer does not always work for another. Yet those that make products need to ship and need to get products to market.
As soft as software is, it is also enormously complex. The design, creation, and maintenance preclude, at today’s state of the art, an everything-for-everyone approach. To make those choices, a combination of data and intuition makes a good approach to the social science of product development.
PS: Please click the link on the first use of data for a discussion of data versus datum. :-)
Creating a product, whether totally new or an update, means deciding what’s in and what’s out. The main execution constraint you have is the time you are willing to spend developing your product (or the number of developers, roughly the same thing). In your planning you need to decide the right amount of work to do to create, or justify, the product—rightsizing your product plan. Executing a rightsized plan without compromising your vision is a core product development skill.
What is rightsizing?
While most all product development debates take place at a fairly granular level—having a specific feature, architecting an investment, choosing how to communicate the work—there are some broad topics that can have a profound impact on how the product evolves. The most critical first step of a project is to decide the “scope”. Deciding the scope of a project is an active conversation across all the stakeholders involved.
For software and service projects (note, project=product=service) the scope determines a whole host of choices, and even how you articulate the scope can open up or foreclose options. You sort of need to start by checking in with the realities of your foundation:
Entirely new product. An entirely new product is the opportunity to scope a product to a minimal set of features or needs. In other words you can literally pick the smallest set of features/investments to express your scenario or goals. It has become common to refer to this as a minimum viable product or MVP. Another aspect of “new” is whether the project is new to your company/organization or new to the industry as a whole. There’s a tendency to view scoping differently if something is entirely new to the world versus new to your organization. An MVP can take on one meaning for a startup where there are no expectations for “minimal”. For an existing company, this becomes increasingly challenging—things like globalization, accessibility, security, integration with existing account infrastructure, and more can set a significantly higher bar for “minimal”.
Evolving an existing product. Most all software is about evolving an existing product. In scoping the work to improve an existing product the main dimensions that determine the scope will be compatibility with the current product—in user experience (keystroke, flow), data (file formats, previously saved work or settings), features (what a product does), or platform (APIs, add-ins). In scoping a product plan for an existing product, deciding up front to maintain 100% of everything itself has a cost, which to the outside world or management, might be counter-intuitive. Regression testing, design constraints, and even what you choose to do differently with existing features determine the scope of the new work for the release. Sometimes a product can be new for the company even if it evolves an existing product, but these same constraints might apply from a competitive perspective.
Disrupting an existing product. Any project can evolve for some period of time and eventually requires a significant overhaul—scenarios change, scale limits reached, user experience ages, and so on. A project that begins knowing you will disrupt an existing product poses a different set of scoping challenges. First and foremost you need to be clear on what part of the project you are disrupting. For example, are you considering a full re-implementation of an existing product or are you re-architecting a portion of an existing product (again, say the UI, API, or features)? Sometimes a product can be new to your organization but disrupt a competitive product, which brings with it a potentially different view of constraints.
Side-by-side product. One type of project scoping is to decide up front that your project will coexist with a product that solves a similar problem for your customers/company/organization. This approach is quite typical for internal IT systems, where a new system is brought up and run in parallel with the old system during a switchover period. For a consumer product, side-by-side can be shorthand for "keep doing things the way you're doing them, but try out our system", and that can apply to a specific set of customers you are targeting early in development.
Each of these categories is more granular and real-world in an attempt to cover more of the software projects so many of us work on. Typically we look at projects as either "new product" or "update", but that tends to over-simplify the project's scope.
Many projects get off to a rocky start if the team is not clear on this one question of scope. Scoping a product is an engineering choice, not simply a way to position the product as it is introduced. For example, you might develop the product as an evolution of a current product but fail to get some of the baseline work done. Attempting to position the product as “totally new” or “just run it side by side” will probably backfire as many of the choices in the code do not reflect that notion–the seams will be readily apparent to customers (and reviewers). As with many challenges in product development, one can look back at the history of projects, both successful and not, and see some patterns to common challenges.
Pitfalls in scoping
Deciding and agreeing up front to the scope of your product is a critical first step. It is also easy to see how contentious this can be and can often generate the visceral and stereotypical reactions from different parts of a collective team.
If you develop an enterprise product and propose something that breaks compatibility, for example, you can expect the scoping efforts to be met with an immediate "requirement" from your enterprise sales force that compatibility be added back to the plan.
A consumer product in a space such as note taking or writing, as an example, can certainly be immediately overloaded with the basics of text processing and image handling. Or one can expect reviewers to immediately compare it to the currently most popular and mature product. We're all familiar with products whose first release received critical reviews for missing some "must have" core features (like copy/paste) even though that release was broadly disruptive.
The needs for a product to be global, accessible, or to plug into existing authentication mechanisms are examples that take a great deal of up front work to consider and clarify with the team (and management).
In fact the first pitfall of most scoping efforts might be the idea that disagreements up front, or just different points of view that have been “shelved”, will be easily resolved later on. One coping mechanism is for folks to think that the brilliance of the product’s innovation will be apparent to all parties and thus all the things “out of scope” will go from a disagreement to shared understanding once people see the product. My experience is that this isn’t always how it works out :-)
The most difficult challenge in scoping the project is that you actually considered all of these "obvious" things, yet when people see the product (or plans) for the first time, these choices come across to them as obvious misses or oversights. You probably know that you could add features that exist in the marketplace, that you're breaking compatibility, that you're going to need to run side-by-side, that you're not ready for complex character sets, and so on. Yet as the product is revealed to peers, management, or reviewers, or even as the team sees the whole thing coming together, there's always a chance that a bit of panic will set in. If you've gone through an effort to plan the scope, then none of this will be news, and you will have also prepared yourself to continue forward progress and the discussion of how and why choices were made.
Even with that preparation, there are a few common pitfalls of project rightsizing that one needs to consider as a project goes from planning through execution. These pitfalls can surface as the product comes together and the first customers see it, or they can be the reason the product isn't getting to a stage where others can see it:
- Backing into a different scope. The most critical failure of any project is thinking you can revisit the scope of the project in the middle and still stay on time. If you decide to break compatibility with the existing product and build out new features assuming this approach, then reversing course leaves you rearchitecting the new features, cutting them, or finding some decidedly tricky middle ground. Taking a step back, what you're really doing is revisiting the very approach of the whole product. While that is possible, it is not possible on the schedule and with the resources you have.
- Too much. Almost all of us have scoped a project to do more than we can get done in the time and with the resources we have. A robust product plan provides a great deal of flexibility to solve this if you were clear on the scope—in other words, a feature list that is too long is easy to trim. This is decidedly different from trying to change scope (changing from disrupting the product to evolving the product, for example). If all you have is too many features, but the intent of the release is consistent with that long list—I promise there are features to cut.
- Too little. In the current climate where MVP is a perfectly good way to develop innovative new products, you can still scope the product to do too little. In the new product space, this could be a sign that you have not yet zeroed in on the innovation or value-add of your product. Similarly, any project that involves a data center deployment (or resources) and a commitment from partners can also be scoped such that the collective investment is more than the collective return on that investment. In the evolution of existing products, such a release might be characterized as simply too conservative. It could also lack focus and just be a little bit of stuff everywhere.
- Wrong stuff. Often overlooked as a potential pitfall of product scoping is a choice to solve the wrong problems. In other words, the plan might be solid and achievable, but the up-front efforts scoped the project around the wrong work. This is simply picking wrong. The trap you can fall into is how you cope with it—by simply adding more work or rescoping the product on the fly to do more or different things. Wrong stuff is a common pitfall for evolving existing products—it happens when the scoping efforts lacked a coherent view of priorities.
- Local or global optimization. Scoping a product is essentially an optimization function. For an existing product that is evolving, there is a deliberate choice to pick an area and re-optimize it for a new generation. For a new product, the MVP is a way of choosing a place in the value chain to optimize. This scoping can be "off," and then the question is really whether the adapting that goes on during the project is optimizing the right plan or the wrong plan. This optimization challenge is essentially a downstream reaction to having picked the wrong stuff. You can A/B test or "re-position" the product, but that won't help if you're stuck on a part of the value curve that just isn't all that valuable. Is your optimizing truly producing an optimal product, or are you caught in a trough, optimizing something local that is not enough to change the product landscape?
Of course projects go wrong in so many ways, some major and some minor. In fact, part of product development is just dealing with the inevitable problems. Nothing goes smoothly. And just like Apollo 13, when that first glitch happens you can think to yourself “gentlemen, looks like we had our glitch” or you can stay alert knowing that more are on the way. That’s the nature of complex product development.
Approach to rightsizing
Rightsizing your project up front is a way to build in both constraints and flexibility.
Rightsizing is clarifying up front the bounding box of the project. If you’re clear about the mechanical and strategic constraints of a project then you’ve taken the first step to keep things on track and to make sure your commitment to your team, customers, and management to develop a product can be met. One way to think of these constraints is as the key variables for project scoping—you rightsize a project by choosing values for these variables up front.
Mechanical constraints are the pillars of a project from a project management perspective. You can think of these as the budget or the foundation, the starting point:
- People. How many people are going to work on the project? This is the simplest question and the easiest one to fix up front. A good rule of thumb is that a project plan should be made based on the number of people you have from day one. While many projects will add people over time, counting on them to do critical work (especially if the project is not one that lasts years) is almost certain to disappoint. Plus most every project will have some changes in staffing due to natural people transitions, so the best case is to assume new people can fill in for departing folks.
- Time. The next easiest scoping variable is how long your project will last. Whether it is weeks, months, or years you have to pick up front. Proponents of continuously shipping still need to pick how long from the time code is planned and written until that particular code is released to customers in some way—and of course that can’t be done in isolation if multiple groups have code to release. As with people, you can add more time but you don’t get proportionally more work. And as we all know, once the project starts just making things shorter usually fails to meet expectations :-) Many stakeholders will have a point of view on how long the project should last, but this cannot be viewed in isolation relative to what you can get done.
- Code and tools. For any project starting from an existing product, one should be deliberate about what code moves forward and what code will be replaced or re-architected. Starting from an existing product also determines a number of mechanical elements such as tools, languages, and cloud infrastructure. For a new product, picking these up front is an important rightsizing effort and not the sort of choice you can revisit on the fly, as these often impact not just the schedule but the expression of features (for example, native vs. HTML5 app, or what infrastructure you connect to for authentication). Choosing the code up front will bring in many stakeholders, as it impacts the scope of the project relative to compatibility, for example.
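The mechanical constraints above act like a budget: people and time are fixed first, and a too-long feature list is trimmed to fit that budget rather than stretching the schedule. Here is a minimal sketch in Python, with entirely hypothetical headcount, timeline, and feature costs:

```python
# Hypothetical illustration: treat people and time as a fixed bounding
# box, then trim the feature list (in priority order) to fit it, rather
# than letting the schedule grow to fit the feature list.

PEOPLE = 8                  # headcount on day one; don't plan on future hires
WEEKS = 26                  # project length, chosen up front
CAPACITY = PEOPLE * WEEKS   # person-weeks available

# (feature, priority, estimated cost in person-weeks); 1 = highest priority
features = [
    ("new sync engine", 1, 80),
    ("offline mode", 2, 60),
    ("admin console", 3, 50),
    ("theming", 4, 30),
]

def trim_to_fit(features, capacity):
    """Keep features in priority order until the budget is spent."""
    kept, spent = [], 0
    for name, _priority, cost in sorted(features, key=lambda f: f[1]):
        if spent + cost <= capacity:
            kept.append(name)
            spent += cost
    return kept, spent

kept, spent = trim_to_fit(features, CAPACITY)
print(f"{spent}/{CAPACITY} person-weeks: {kept}")
```

The point is not the arithmetic but the order of operations: capacity is decided up front and the feature list adjusts to it, which is the "too many features is easy to trim" case rather than a change of scope.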
While each of these mechanical attributes is relevant to the product strategy, they don't necessarily define the product strategy. Commonly, products talk about release cadence as a strategy, but in actuality that is an expression of the mechanical aspects of the project. Strategic constraints are the walls of your project that build on the foundation of your mechanical constraints. Your strategy is how you make choices about what to do or not do for the product. There are a few key strategic constraints to address up front:
- Big bets. Every project makes a small number of big bets, or even just one. For an existing product this might be a bet on a new user interface or a new business model. For a new product this might be the key IP or a brand new scenario. The big bet is the rallying cry for everyone on the team—it is the part everyone is going to sacrifice to make work.
- Customer. Every project needs to start off knowing who will be using the product. Of course that sounds easy, but when scoping a project it means accepting that no project can serve 100% of every potential customer's needs or wants. Knowing how you are delivering value to the relevant customers is a key rightsizing effort. If you're building on an existing product and breaking with the past, or building a new product, then by definition some folks will not see the product as one that meets their needs. That does not mean you will never meet their needs, nor does it mean every such customer will see things the same way.
- Long term. When rightsizing a project you want to know where you are heading. There are many ways to do this—some very heavy and some very lightweight. The context of your business determines how much effort to put into this work. If you know where you are heading over time, not just one release, then you can connect the dots from what is missing today to where you will be after one or more turns of the crank. A long term discussion is not the same as long term planning. Long term planning is a heavyweight way of making commitments you probably can’t deliver on—we all know how much changes in every aspect of the team, market, business, etc. But long term discussion allows everyone to get comfortable that “thinking” is happening. One way to think of this tool is to make sure the dialog is what the team is thinking about, not what the team is doing, so that the long term dialog does not morph into long term commitments.
The first step in building a product plan is to scope the product—rightsizing. It is common to fall into extremes of this step—being extremely minimal or being too broad. In practice, the context of your business contributes to determining what viable alternatives to rightsizing are. There are tools you can use to actively rightsize the project rather than to let the size of the project just sort of happen. Rightsizing the current project with a longer term view as to where you are heading allows projects to be scoped without compromising your vision. As with any aspect of product development, being prepared and knowing that these challenges are in front of you is the best way to manage them.
Balancing the needs of different types of customers within a single product is an incredibly difficult challenge in product design. Most every product faces this in the course of choosing features or implementation. Designing products in a changing world with changing definitions of success can be a real challenge, but there are a number of creative approaches that can be used.
Tradeoffs across different customers
I was lucky enough recently to spend some time with the CEO of a growing company (>150 developers). The company faces a constant struggle in its product line over how to balance the feature demands of end users "versus" IT professionals. This is an especially acute challenge in a growing company, where resources are limited and winning those early paying customers is critical.
The use of “versus” is intentional. Most of the time we view these trade-offs as binary, either/or. That’s the nature of the engineering view of a challenge like this. In stark contrast, the sales/marketing view is often an “and” where the most desired end-state is to meet the needs of every type of customer. In practice, the reality of what needs to be done and what can be done is much more subtle.
This tension is natural because IT pros and end-users are often viewed as working against each other. (By the way, I've never been a fan of the term user or end-user. A wise program manager I had the pleasure of working with once pointed out "only one other industry refers to customers as users, let's not follow their lead.") IT pros think end-users seem to exist to cause information leaks and network slow-downs. End-users (let's just call them people, or humans) think IT is there to prevent any work or progress from happening. Again, reality is less extreme.
But we face many tradeoffs in developing feature lists, and thus product plans, all the time. In fact for just about any software product/service these days you can easily list a variety of customer types:
- Humans. These are the broad set of people who will use your product. Generally you don't assume any extreme level of product-usage skill; these are typical customers. Of course, all the other customer types are human too :-) The challenge each of those types faces is representing their constituency and role beyond that of the typical customer.
- IT Pros. For most tech products, IT Pros are the folks that deploy, purchase, or manage the product. They might also provide infrastructure and hardware required to use a product or service. IT Pros also champion the people in the organizations they support.
- Developers. Developers contribute add-ins or consume APIs to develop customized solutions. For many products, developers form a critical part of the ecosystem and often create the stickiness associated with a successful product.
- Power users/enthusiasts. These folks know the ins and outs of a product. Often they teach others to use a product, staff the newsgroups or self-help forums, write blogs and articles about your product. These are your fans. Power users also have feature requests (demands!) for more control, more customization and so on. Notice right away how this could work against IT Pros who want less of those or Humans who might be perplexed by such features.
- Channel partners. Many products are sold direct to humans or IT. On the other hand there are a large number of products where there are intermediate partners who are required to sell, service, or otherwise transact with paying customers. In an ad-supported product these are your advertisers. This provides another tension point our industry commonly talks about—the aspects of ads in apps and web sites which are important to the channel but might not be valued as much by Humans, as an example.
- Markets. Many products aim to meet the needs of a global customer base. Even in a global economy, there are major differences in features and scenarios around the world.
Not every product has every potential customer type, and the above is not a complete list. Often there is overlap; for example, many IT Pros are also enthusiasts and/or power users. The term "persona" often gets used to represent a customer type. That is a good tool, but rather than focus on the details of a person, just defining a broad category is enough when planning and scoping the product. The persona comes later in the process.
In industries defined by a combination of physical goods and physical distribution channels, products are segmented and offered with different attributes at different prices for different customers. Software and hardware products that intermix, or don’t distinguish between, both work and personal life, and often switch between those many times throughout a day, pose a special challenge to product designers (and marketers). Working within this consumerization trend motivated this post.
Relative to designing for such a scenario, a product plan might find itself in a tough spot because of challenges in the plan or approach:
- Focus nearly exclusively on one target customer type. Sometimes the approach is to just draw a line in the sand and say “we’re all about end-users”. Often this is the default most products take. In some products you can see a clear view of who the target is and a clear strategy. There are then “some features” aimed at appeasing other potential customer types. You might rationalize this by combining customer types, by saying something like “Developers are just humans who can code” for example.
- Do a little bit everywhere. There might be a case where a product is not quite deliberate, and the organization or up-front resource allocations end up dictating how much each type of customer gets. For example, if you allocate a few developers to each segment and then let folks plan independently, the chances of features holding together well are reduced. More likely, features might end up competing or conflicting as they are developed.
- Have a plan (and some execution) and then realize late in the process you're missing a customer. We have talked about the need to bridge the engineering and marketing efforts. Some engineering plans never get buy-off, and as the product starts to come together the inevitable panic sets in: "there are no features for X," where X is a customer type not receiving enough love. Meetings. Panic. Last-minute changes. Doesn't work.
The key to resolving tradeoffs is to know you’re making them up front. Product development is inherently about tradeoffs in many dimensions—in fact product development can be viewed as a series of tradeoffs and the choices made relative to those tradeoffs.
Planning to make choices
A recurring theme with this blog will be to surface issues while you are planning and to acknowledge up front that the plan is going to make choices about what to do. Rather than think of this as a micro-management waterfall approach, the team needs to arrive at principles that guide decision making every day for the team. Principles and plans work better than budgets, organizations, or requirements—principles are what smart and creative people can use effectively as a tool to address the tradeoffs inherent in product development. Tools like budgets and requirements are more like weapons people can use against other parts of the team to prevent work from happening. Principles tell the team the starting point and the end point and offer guidelines for how to make choices as the path to the end point is developed.
As an example, consider a budget that you establish that says how many resources will go to Humans and how many will go to IT Pros. Sounds great on paper. It seems like the easy way up front to decide how to address the conflict or tension.
This can backfire because it is not a holistic plan for what the product will be. In fact, it almost prevents the holistic plan from happening, because the first choice you make is to partition resources. The leaders of those resources are then incented to just make a plan for their own efforts rather than think about the whole of the project. That isn't evil or malicious, just a natural outcome of resource allocation and accountability.
This same dynamic might occur if you partition the team into front end/back end or UI/service, for example. Such a structure is fine, but it should be put in place in the context of a holistic plan. Putting such a structure in place before there is a plan, and hoping the plan resolves difficult problems, rarely works.
Conflict and tension are actually created by a resource-budgeting approach, as people naturally defer choices and decisions to the management and resource-allocation tools, not to a collaborative product plan.
Building on the previous post about developing a framework, one can use these same tools as a way of cross-checking the plan against customer types.
We talked about a tool where you identify feature ideas and assign costs as a way to arrive at a holistic view of the product. This tool can be used with different folks in the org or even customers/partners.
Once you have arrived at a list you can take this a step further. A good idea is to refine the list of proposed ideas with your own knowledge of feature descriptions and granularity. What is helpful at this point is to develop a catalog of features that you feel could be effectively communicated to customers broadly.
For example for a very large product these would be the sessions you might give at a customer workshop describing the product. For a first generation product these features would be the product information page on a web site or even the table of contents of the product overview document you might give to a member of the press.
The way to gain an understanding of the tradeoffs you are making relative to customer types is to take the features and align them with the different customer types you have identified. This can easily be done by a spreadsheet or even a list on a whiteboard.
When you’re done every feature is listed once – you don’t give yourself credit in more than one place for a feature. This forces you to really decide who values a feature the most. Features will naturally fall into place. For example, management features will fall to IT Pros or ease of use to end-users.
Most products will show the inherent “tilt” towards a customer type during this phase. You then step back as a team and ask yourself if you’ve made the right tradeoffs or not.
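One way to sketch this cross-check is as a tally: each feature is credited to exactly one customer type, and counting per type surfaces the plan's tilt. The features and assignments below are purely hypothetical:

```python
# Hypothetical feature catalog: every feature is assigned to exactly
# ONE customer type (the one that values it most, no double counting),
# so the tally honestly shows where the plan tilts.
from collections import Counter

feature_owner = {
    "group policy controls": "IT Pro",
    "single sign-on":        "IT Pro",
    "audit logging":         "IT Pro",
    "drag-and-drop editing": "Human",
    "guided onboarding":     "Human",
    "keyboard macros":       "Power user",
    "REST API":              "Developer",
}

def tilt(feature_owner):
    """Count features per customer type to surface the plan's tilt."""
    return Counter(feature_owner.values())

for customer_type, count in tilt(feature_owner).most_common():
    print(customer_type, count)
```

With this hypothetical list the plan tilts toward IT Pros; the team can then decide whether that is the intended tradeoff before allocating resources.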
Then iterate and make sure you are really delivering the holistic plan. Once you get closer to a holistic plan you can allocate resources. Iteration doesn’t stop there of course. You can move forward with a more refined view of the plan and the resources. Implementation then progresses based on the resources at hand, which is better than ideation and planning based on resources. Unlike a characteristic waterfall approach, a good planning process is a process of iteration, convergence, and parallel efforts across disciplines.
The above all sounds good, probably. If you do the above you can solve the resource allocation and overall scope of the product relative to different types of customers. It doesn’t, however, get to the heart of one really hard problem. What to do when customer needs conflict?
One thing to do is bury your head in the sand and just say there is no conflict. That is saying, for example, that end-users will value features missing that IT removed or that IT will just get over themselves and not mind arbitrary extensions being loaded on the device. That doesn’t work of course.
Because product development does not end—a release is just a point in time, even more so in today's continuous product cycles—you need to get comfortable not doing everything for everyone in every release. There is always a next release. So resolving design tradeoffs needs to be about having a set of principles and a product architecture that you can build on.
This is where understanding where your own product is going is crucial, a longer term strategic view. Most products when they are new receive a combination of praise and criticism from power users, as one example. If the scenario or problem solved is compelling, power users will praise the product. They are, generally, quick to offer up suggestions and feedback for how to add flexibility or more features. That’s exciting.
In the process of designing the product, a key responsibility for the designer is to know where a design is heading. How will you add more power and control later? If you don't have ideas while designing the product in the first place, you might be designing yourself into a hole. Famously, copy/paste was missing from smartphones when they were first released. Even with a new touch design language, the designers clearly understood where the design would go. That was important, and it made the feature easy to introduce later without a major shift in the overall ease of use that was the hallmark of the design. This could easily have been a conflict between power and ease of use.
Code architecture or architectural approaches play a key part in how you think about where you are today relative to where you are going. Many of the architectural differences between the different OS platforms we see today compared to how an OS looked 10 years ago result from making tradeoffs in the architecture. App stores, sandboxing, APIs with brokers are all about tilting the architecture towards security, battery life, and end-user safety.
We look at these changes in architecture today and compare them to where we came from and can easily see the difference. But think about the debates and planning that took place—this was a big change in approach. It was not easy for those making platforms to make these architectural tradeoffs in such a new way. Creatively addressing new requirements is a key part of understanding your product evolution over time.
An important tool is having a set of product principles–design language, architectural framework, and customer value proposition. These principles not only guide the development team but make it easy to articulate the tradeoffs made to customers when you’re done. That doesn’t make the dialog easy or even get folks to agree with the choices you make. But you’re having a conversation informed by what your product is trying to do.
An amazing change is happening in our industry today with "BYOD," or bring your own device to work. This is a whole new level of design tradeoff the industry is facing. Since the late 1990s the focus of administration has been to "lock down" or "control" computing devices. That worked well given the choices and challenges faced at the time.
Consumers now can bring their own devices to work, work from home, or find ways of doing work outside the scope of the corporate network/software/device. This doesn’t change the security, IP, and safety needs of a corporation or government agency. It does change the decision framework of IT. Their internal clients have choices and how those choices overlay with the design tradeoffs in products is very interesting.
Just as APIs and OS capabilities are changing, and perhaps resetting the expectations of some customer types, the way devices and software are managed is changing architecturally. As you plan the product, developing an architectural approach that plays out in a forward-looking way is going to be a key part of resolving the design tradeoffs.
This is the engineer part of resolving tradeoffs—sometimes it is not just coming up with features, but new architectural approaches can put you on a new trajectory. That new trajectory defines new ways to design products for many types of customers.
The only thing you know for sure in addressing customer tradeoffs is that there is no right answer that always pleases all customers. That’s the nature of appealing to multiple customer types. That’s why product development is also a social science.
An Example: the enterprise challenge
These days, one of the most challenging product design debates centers on how to balance the needs of enterprise IT and the community of people, humans, using products within an enterprise. There is a major shift going on in our industry with the wave of consumer products able to fulfill scenarios outside of the control of IT.
The dramatic change is decidedly not in the enterprise need to secure the digital assets of an organization, to maintain the integrity of corporate networks, or even to manage the overall usage of corporate resources (balancing work and non-work). These requirements are not only still there; in a world where a single leak of customer or financial data can make international news or draw the interest of regulatory bodies, they are more intense than ever.
The dramatic change, however, is in the ability for people within an enterprise to easily acquire tools to accomplish the work they need. In another era, obtaining servers, licensing software, getting it on site and running were all tasks that required IT sponsorship and often resources. Today tools such as cloud storage, peer to peer communications, CRM, or even commerce are just a few clicks away and require nothing more than a corporate credit card, if that.
The only policing mechanism that can be deployed “against” these tools is a policy approach. While one can block TCP/IP ports or suspicious network traffic (and certainly block downloaded client code on managed PCs), even this is challenging. Network access itself is easily obtained via a WiFi/4G access point or phone as hotspot.
The traditional approaches of locking down a device simply don’t work. In a world of mobile devices and perhaps even multiple operating systems, there isn’t a clear focal point for these lock down efforts and certainly not even a single implementation that is possible.
Where some might see this as freedom, others might see it as chaos. Smart product design efforts see an opportunity: to design products that embrace this challenge, providing an architecture for IT to get done what is required of them while at the same time making it easy for people to discover tools that solve the business challenges they face and to spread those tools to coworkers.
The design challenge is about defining a new architecture that takes into account the reality that you can design yourself into a corner if you go too far in either direction. Focus too much on empowering people and you face either a policy choice from IT or, worse, an active campaign against your product from the enterprise analyst community. Focus too much on IT and your product might enable IT to turn a cool product into a locked down or customized experience that drives end-users to your competition, which is only a click away.
This is where design principles can come into play. What is the current implementation of management—what has been implemented by IT organizations that might have lowered the satisfaction of corporate computing relative to home computing or BYOD, for example? Efforts like excessive logon scripts or complex network access requirements that keep people off the network and favor the path of least resistance, perhaps? Or perhaps a customized user experience for a product that is also used on home PCs and thus “different” at home and work, driving more use of the home PC for work?
Given an environment like that what is an architecture that takes into account the downsides of existing approaches to enterprise management and creates a more favorable experience while maintaining security, knowing that options abound?
Can a device be managed without changing the user experience? Can a software product be locked down relative to critical functions to reduce support calls without IT getting between the work that needs to be done and the employee?
These are the sorts of questions one needs to raise in reconciling the apparently contradictory needs of IT and people using modern products and services.
Really diving into these questions as you design your product has the potential to become a real competitive advantage for your product and service.
Wheels up, returning from CES. Seems like a good time to reflect and share some of my observations.
Sharing raw data is an important part of building a strong cohesive team. Raw data allows everyone on a team to see the inputs and thus map from there to the conclusions, whether those are new plans or course corrections to existing plans.
This post shares some observations about CES but first provides some context and practices for the ins and outs of trip reports in the context of product development.
Why do trip reports?
From the earliest days of business travel, my managers have required a trip report in exchange for the privilege of taking a trip at company expense. Whether you are a manager or not, sharing your observations and learnings from a trip (site visit, customer roundtable, trade show, conference) is a way of contributing to the shared understanding of products and technologies.
A report is just that—a collection of words and artifacts—and by itself it does not represent the follow up actions as those need to come from a process of taking the data from multiple perspectives and spending time thinking through implications. In fact, a point of failure in product development is over-reacting to the immediacy of a single trip report or point of view (no matter who on the team wrote the report). Those are anecdotes and need more work to turn them into actions like changing plans or features.
There’s no right way to create a report. More often than not the format, structure, and detail of a report should follow from the type of event. Do you organize by type of technology or by vendor, by customer or customer theme, by conference session or by technical subsystem, and so on?
Reports also don’t need to be short, especially if the trip was filled with information. If you want to offer a distillation at the bullet point level then there are a couple of options. For a public event you can often cite a blog or article (or two) that seems to match your perspectives. For a confidential event you should still do a detailed report and distribute as appropriate, but consider an oral version for members of the team. Bullet points can be good on their own to make key points and also may serve as an outline for the body of the report. The downside of providing only bullet points is that it might not share enough of the raw data and folks might think of what you shared more as conclusions. Keep in mind that the time spent writing the report is also time spent thinking more deeply about what you experienced.
I personally value the use of pictures quite a bit. For site visits showing the artifacts (example app screens, paper based systems, use of devices in context, photos of the physical environment) can be super helpful to highlight what you saw. For conferences, if there is a great slide or graphic from a session, showing that can help a great deal. And for tradeshows, showing off products is super fun. Video of course can be cool but introduces complexity in sharing in some formats.
There’s often a discussion on how much hyperlinking to do in a report (links to presentation PPTs, videos, or product information for example). It really depends on the need the readers/target will have to seek even more information. Obviously you should always be prepared to provide more information, but I don’t think it is a best practice for your report to be filled with blue underlines or missing data because everything is a click away. If you’re tracking hardware, for example, and some spec (weight, MHz, watts) is important then just include that in the report.
There are two aspects to confidentiality / intellectual property to respect when writing a report. First, always be careful to report on things you are permitted to write down and report on. You should ask permission for any photos (at customer sites or conferences, and even some tradeshow booths). Second, when you distribute your report make sure you are working within company policy on the way information is shared.
Whether you use email, attachments, a file share, OneNote on SkyDrive (to share with a small group), or a blog (internally hosted) really depends on your org’s culture. The blog format is great because then you have one place with all your reports so you can always know where they are no matter what type of report. The key is to just make sure, without spamming people, that the data is available for folks on the team – or your audience.
As a manager or leader on the team, it is always a good practice to remind folks that a trip report (unless specifically noted otherwise) is just anecdotal information and not a change in plans, a call to action, or anything beyond sharing. With the data from you and other sources, the right folks who are accountable should act. If you do have feedback then separate it from the report as a good practice.
Leading up to this year’s CES show, one might have thought the CE industry was in a lull and devoid of activity, let alone innovation, by reading a few of the pre-show reports. Nothing could be further from the truth. CES 2013 was another year of amazing things to see. More importantly, CES highlights the optimism that drives our industry: the pursuit of new products and new businesses, and most importantly the combination of the two to come up with ways to simplify and improve life for people through electronics, hardware, and software.
It seemed to me that a good number of the early reports were a bit on the snarky side and reflected a view that there would not be any major disruptive or cool announcements. Folks were talking about a lull in innovation. I even read one blog post saying the industry was boring (I’ve met product people in most every industry and can honestly say they never think their own industry is boring).
Measuring innovation by what is new, shown, and/or announced for the first time at one of the world’s most massive tradeshows is not the right measure. Companies do not usually time their product development to coincide with tradeshows. Announcing a new product in a sea of thousands of other booths is not often the best communications strategy. In today’s world, announcements can be communicated broadly through a variety of channels and amplified socially at a time that fits the business.
Expecting a company to unveil something at the show is somewhat misplaced. On the other hand, a big part of CES, at least for me, is really being able to see any (large) company’s full product line “end to end” and to see how they are fitting pieces together to deliver on scenarios, value, or competition. Smaller companies have an opportunity to show off their products in a much more interactive fashion, often with very knowledgeable members of the team showing things off. Most importantly, CES lets you see “side by side” whole categories of products—you see the positioning, the details, and how companies present their products.
Unveiling a new product or technology that is a cross-industry effort, one involving many partners, does work particularly well at CES. Intel’s efforts around Ultrabooks, in 2012 and 2013, demonstrate this. While Intel’s booth and large scale presentations show off Windows 8 and Ultrabooks, the amplification that comes when they are seen on display at Sony, Samsung, LG, Toshiba, and more is where the whole is greater than the sum of the parts.
Many folks might not be aware, but along with the booths and all the semi-public displays, many companies conduct confidential briefings with press and partners at CES. These briefings might show off future products and strategies, but the reporters cannot write about what they see. In this case, CES is just convenient, especially for international press who don’t get to see US companies in person. It is an interesting approach because it can positively impact press coverage of already disclosed products when reporters know “phew, there’s more to come”. It could also frustrate, “hey I want to write about that”.
Writing a CES trip report is tricky for a non-reporter. Folks not there are following blogs and mainstream media and tens of thousands of stories are flowing out from LV. A tech blog might have 30-40 people on site and might file 300-400 stories from the floor—and that is just one outlet.
One person (me) can’t compete with that. But as a product development person, there’s a different lens—we’re looking at products and technologies as ingredients and competitors, not as things we’re looking to buy now. We are looking at trends and not necessarily the here and now. A way to think of it is that some go to CES as though it were a restaurant, looking for a complete meal. Others go to CES the way chefs go to a market, in search of ingredients for their ideal of a meal. The broad consumerization of CES sometimes leaves behind the notion that it is, in fact, an industry tradeshow.
CES 2013 was a fun show for me, spending about 15 hours on the show floor. I’ve always made sure in attending CES (or COMDEX or MWC or anything) to have time to see the show and experience the richness of the event. It is easy to lock yourself in a meeting room or go from private briefing to private briefing and convince yourself you saw CES, but CES is really on the floor. And the floor was buzzing. That’s also why this report is snark-free. There’s no such thing as an entirely objective report as every observer has a bias, but you can make a report free from snide remarks.
CES 2013 was definitely a year of refinement across many product lines. Pulling some themes across a broad set of products, there was refinement in many ways:
- Mobile. Stating the obvious, mobile is front and center for every product. Where CES used to think mobile was in the North Hall’s Auto section, now everything is mobile. Where cables, connectors, and wire used to occupy the LV Hilton (aka the Whyte House) there are now radios and antennas. Even power consumption is now focused on battery life rather than mains draw.
- Design language. The design language in use for both hardware and software is trending towards clarity and minimalism–turning over the screen to the app and the customer. There’s a lot less glowing and translucency. Navigation is clearer. Touch gestures are assumed on any device and often are not readily apparent (that is, designers are assuming you will figure out how to touch and tap to make stuff happen). And the use of the full screen for the task at hand is clearly dominant. Rather than gain “speed” or “power” via multitasking by arranging windows, widgets, picture-in-picture, and so on, the focus is on moving quickly between task-oriented screens. From program guides to elaborate settings on advanced A/V to apps for healthcare you can see this language. There is a new definition of productivity underway that’s sure to be the topic of a future post.
- Build quality. Across the board products are getting better. That’s not to say there isn’t a fair share of low-end and low-quality stuff, particularly tablets, on display in the South Hall as usual. There is, however, a rising tide of quality. This is a sign of further upstream integration of components as well as maturing manufacturing and assembly. It is also a reflection of consumer demand—when the difference in quality is represented by a 10-15% price delta on a sub-400 dollar purchase, quality is generally worth it. That doesn’t change the desire for high quality and low prices, but physics still dominates.
- Service integration. It was hard to find a product that did not integrate with the web and a back-end service in some way. While third party services have been a theme for several years, the role of first party services is up significantly. These services are now a big part of the value of a hardware product. Telemetry is a key service that is part of every product. While we might curse updates or think they encourage poor engineering, the reality is that the quality of what we experience is better than ever because of these updates.
- Social integration. The integration of products with social networks is technically an easy thing to do (these networks are motivated to have more updates flowing in), so it follows that many products now integrate with networks. You can hop on a scale and share your weight right away. You can share movies you have watched easily. You can even share how happy a meal made you.
- Broadening of Moore’s law. We all know how MIPS increased over time. We then learned how available storage increased over time. We’re now seeing this increase in bandwidth usage (UHD Netflix streaming, for example) and in the silicon-based nature of visuals (screens and camera sensors, for example). Even wireless networking is seeing a significant uptick in speed. There’s a lesson in not betting against these changes—ride the wave.
- Connected life. For sure, the connection of our lives to the internet continues as a trend. It is really amazing how many analog things are being digitized—door locks, luggage tags, mouth guards, and more.
One of the neat things about going to the same show every year (I think this is easily 18 or 20 for me and CES) is to compare year on year what comes and goes. It is just an “observation” or “feeling” and not a measure of square footage or number of products. CES 2013 saw some interesting changes in products that were very present last year and less so this year.
Looking back at CES 2012 there were a few things that made an impression as a trend or were visible and went the other direction this year:
- 3D. 3D was really big last year, and this year you had to work hard to even find a booth with glasses at all. I can’t recall something that had so many real products you could buy (and could buy that previous holiday) essentially vanish in one year. I’m still surprised by this a bit because the world is 3D—it seems that the technology approach wasn’t working, so I would not write off the concept just yet.
- Storage. There was a lot less in the way of storage technologies—hard drive cages, USB drives and sticks, media storage cabinets even. The cloud world we live in along with seemingly unlimited storage in the devices we use indicate this trend will continue. Kingston’s 1TB memory stick was cool (though maybe a bit bigger than you might expect before seeing it).
- Waterproof. Last year it seemed like every booth had a fish tank holding a phone or tablet. While there were plenty of waterproof cases and a few waterproof devices, it might be that people go rafting with their tablets less than product folks thought :-)
- Media boxes. There used to be a seemingly endless array of boxes that distribute photos, videos, and music around a home network. With Pandora/Netflix/etc. built into every TV and DVD player (and apps on every device), this type of device has probably been integrated. For the enthusiast, the capability of using privately ripped media (and those codecs) around the house still requires a solution but that might be heading towards the homegrown/open source approach rather than product.
- Digital cameras and video cameras. The ubiquity of high quality cameras in our smartphones makes it tough for most of us to carry a second discrete camera. One thing to always look out for at CES is when one product category will be subsumed by another, but also be on the lookout for when people might be trying too hard on the integration/combination front. There was definitely a focus on making discrete cameras take on characteristics of phone cameras with user interface, Wi-Fi integration, and post-processing (sepia and toaster from your camera).
- Gesture based TV. The excitement of gesture based control of TV was all but gone. Last year every TV had 10’ of space in front of it so the demo folks could control it by gesture. The demos didn’t work very well and so this year TVs were being controlled by apps on tablets and phones. This might be subsumed by voice or might return with a much better implementation.
Products and technologies that made an impression
While some products and technologies seemed to trend downward, there were quite a few exhibited that appear to be trending upward or remain at a very high level of interest and development. The areas for me that are worth looking into more as products are developed include some of the following (in no particular order).
UHD/4K. What’s not to like about 4K! The biggest crowds are always around the biggest screens and this year was no exception. Seeing the 100” and greater 4K screens is breathtaking. It is incredible to think that it was just two years ago we were ogling 60” LED 1080p screens. Moore’s law applies to screens. Every major TV/screen company was showing 4K screens and these will be products soon enough for sure, and then the prices will come down. Folks were debating the value of 4K on different screen sizes or in different room sizes. Even though the physics can prove your eye can’t resolve the difference, the physics of manufacturing screens will make it cost effective to use high-density pixel counts almost everywhere. Obviously, as we have seen with devices, there’s work for software (content) to just work at 4K—each company was showing native 4K and upscaled HD content to show off their technology for future and present content. Can has?
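The "can your eye resolve it" debate is really just trigonometry. Here is a rough back-of-the-envelope sketch (an illustration, not show-floor data, assuming a 16:9 panel and the common rule of thumb that 20/20 vision resolves roughly 60 pixels per degree of visual angle):

```python
import math

def pixels_per_degree(diagonal_in, width_px, height_px, distance_in):
    """Angular pixel density for a 16:9 screen viewed head-on."""
    # Physical width derived from the diagonal and a 16:9 aspect ratio.
    width_in = diagonal_in * 16 / math.sqrt(16**2 + 9**2)
    ppi = width_px / width_in
    # Width of one degree of visual angle at this viewing distance, in inches.
    inch_per_degree = 2 * distance_in * math.tan(math.radians(0.5))
    return ppi * inch_per_degree

# A 100" screen viewed from 10 feet, at 4K and at 1080p.
print(round(pixels_per_degree(100, 3840, 2160, 120)))
print(round(pixels_per_degree(100, 1920, 1080, 120)))
```

At ten feet, the 100” panel delivers about 92 pixels per degree at 4K versus about 46 at 1080p; by the 60 ppd rule of thumb, the 1080p pixels are visible and the 4K ones are not, which is the crux of the room-size debate.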
Display technology. The technology behind UHD displays is also on the move. This year saw a significant amount of credible innovation in the area of screen technology. Flexible displays seem more realistic than past years because they were in more than one booth. OLED made a strong reappearance with an amazing 4K OLED screen. Curved screens that match what we see in movie theaters showed up. I loved the wide aspect ratio screen from LG. Touch is being integrated into large panels for use for broadcast, meeting rooms and signage. Even the distribution of HDMI signals for digital signage saw innovation with single CAT5 systems at commodity prices. Samsung had a very cool transparent display that allows a physical product to be “enclosed” in the screen.
Multi-screen. There’s an incredible desire for the ability to get what I am seeing on a phone or tablet on to a bigger screen (the flipside, getting what streams over cable/sat onto my phone/tablet, is a different problem). Really solving this well (respecting digital rights, getting everything on the screen) calls for a low-level connection like “wireless HDMI,” but the power, bit rate, and complexity of that has not lent itself to a solution (below is a photo of HDMI test equipment if you ever wanted an idea of the complexity of the signal, or just cut an HDMI cable and look inside). Software solutions turn out to have equal complexity when you consider the decoding required in a TV (where component pricing is critical). DLNA holds out hope but is suited to photos/videos. AirPlay has the presence of iOS devices but needs Apple TV connected over wires. Sony, LG, Samsung, and others are starting to show solutions based on Miracast. This has some real potential if screens and projectors start building this in (and today you can get the aux box via third parties such as Netgear). The other part of multi-screen is the aux screen scenario–the tablet screen shows auxiliary content or is a remote control. There was somewhat less of this in 2013 compared to last year. This seems to be struggling with scenarios and responsiveness right now but seems like it will be figured out—after all, how many of us watch a movie and look things up on IMDB on our phones, or watch sports while tracking another game or stats on our tablet? The scenarios last year were focused on Facebook/Twitter on your TV or news/weather while you watch, and that is what seems to have lost excitement (those always seemed a bit awkward to me for a family room).
Cameras. The first consumer digital cameras were shown at CES back in the early ’90s. It was so interesting because prior to that cameras had their own show. Fast forward 20+ years and cameras are 100% electronics. While discrete consumer cameras are struggling a bit to find a place in a world of phones, digital SLRs are seeing a rebirth at a level of flexibility and sophistication that is mind blowing. The role of the DSLR for video has spawned a whole industry of accessories to morph a still camera into a motion camera in terms of form factor (the sensor and lens are the real value). Image stabilization, critically important on tiny form factors, is becoming incredibly good. Tablet sized devices are becoming reasonable for cameras (last year I thought it looked really goofy and this year it seems to make sense). One has to think, though, that there is a digitalization of “lenses” yet to happen. The physics of optics is due for a rebirth – the improvement curve on lenses and the SLR model seems to have reached the limits of physics. The new Canon 200-400 f4 with integrated 1.4x converter lens is super cool, but so heavy and costly. The next generation of cameras that go beyond using silicon to duplicate the resolution of film will break through at a future CES. Often you see products go through a “use electronics to imitate the analog world” phase for a while before they find their digitally authentic expression.
The following is a 10 second video showing image stabilization from Sony. The image is a live image from two cameras mounted on a shaking platform, above the large screens.
Phablet. The made-up word that was used more than it seems like it should have been was “phablet”—a device that is bigger than a phone and smaller than a tablet. Given the size of phones this might mean 5-6.9” or so. It seems that there are two views. There’s the view that a phone is a phone and should be “less than” some size, and a tablet is a tablet which is 7-8” unless it is a big tablet (9.7”) or a PC/tablet. The other view is that consumers will be selecting from a wide variety of sizes and the industry will meet many needs. I like this second view. While I might choose a more routine phone size, too many people like larger sizes. Whether a larger screen is the one device someone uses or not is a tricky question. More than size, the pixel density is something to consider because apps won’t scale arbitrarily, and how to scale at certain combinations of diagonal size and ppi has real impact on the quality of interaction with apps. I would not discount the consumer demand for a sustainable market of a variety of sizes of portable devices.
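To make the density point concrete, here is a hypothetical back-of-the-envelope script: the same 48-pixel touch target shrinks physically as ppi climbs, which is why apps have to scale by density rather than raw pixels (the device numbers below are illustrative, not from the show floor):

```python
import math

def ppi(diagonal_in, width_px, height_px):
    """Pixels per inch from resolution and diagonal size."""
    return math.hypot(width_px, height_px) / diagonal_in

def target_size_in(target_px, device_ppi):
    """Physical size of an on-screen touch target, in inches."""
    return target_px / device_ppi

# Illustrative devices: a dense 5" 720p phablet vs. a 9.7" 1024x768 tablet.
for name, diag, w, h in [("5-inch 720p", 5.0, 1280, 720),
                         ("9.7-inch XGA", 9.7, 1024, 768)]:
    d = ppi(diag, w, h)
    print(name, round(d), round(target_size_in(48, d), 2))
```

The same 48-pixel target comes out to roughly 0.16” on the dense phablet and 0.36” on the low-density tablet, exactly the kind of combination that breaks an app that assumes one pixel size fits all.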
DISH/Directv/Comcast. The companies that distribute “real TV” to consumers (especially live events and original programming) seemed especially innovative this year. DISH is particularly innovative in bringing together a very nice and high quality multi-room and multi-device scenario. One thing that really struck me was the new ability to flag a program for transcoding to your mobile device and easily download it. To date this has been mostly impossible to do. Unless you want to wait for DVD or streaming, this is the only way to time and location shift first-run programming. Programming guides are getting much better and faster to interact with, and the integration of fun data (related programs, background info) is great to see. Getting to a place where you have one tuner box and then much smaller, fanless, storageless, settop boxes in other rooms is really close.
Health. CES hosted a separate exhibit area for health related products/services (exhibitors are encouraged to be part of this themed area). These products are truly modern products—empowering consumers with technology to literally improve their lives, and connecting them to the internet and other resources. There are all sorts of sensors for well-known human telemetry: weight, blood pressure, pulse oximetry, glucose level, air quality, distance traveled, and more. There are also sensors for fuzzier (computed not measured) areas such as concentration, sleep quality, mood, and so on. The common element for all of these is measurement with a device that connects to the internet (or directly to your mobile device via Bluetooth) so that on your device you can view trends, track, and analyze the data. For many people this is literally life and death (tracking blood pressure, glucose). For many it is a way to maintain fitness levels or achieve a better level of fitness. Two things really struck me. First, there is a real responsibility these companies will need to shoulder to separate medically actionable data from telemetry that will simply drive you crazy and drive up medical costs for society (tracking pulse oximetry for a normal healthy person is dubious). Second, these are really a unique set of products/services/businesses that are essentially mobile-only, profiting by either the device sale or a device + service subscription. Some are not even bothering with web-browser based viewing.
New PCs. While Dell, HP, and Microsoft were not showing in their own booths, there were plenty of new PCs. This was newsworthy and clearly showed a focus on “designed for Windows 8,” which is exciting. Intel pulled together many of these PCs under the Ultrabook moniker and announced specs for the next generation of Ultrabook logo PCs (including touch). Samsung, LG, Toshiba, Sony, Panasonic, and Lenovo all had very nice PCs with high-quality touch, nice trackpads, great screens, thin and light, and in a variety of screen sizes from 11” to 15”. The All-In-One PCs with touch were quite nice as well, especially Sony and Samsung’s models. The Vizio lineup continues to evolve and show unique designs and good value. Razer was showing a Core i7 based tablet designed for gaming with some awesome gaming attachments. Panasonic showed a 20” 4K tablet that blew me away—seeing the quality of AutoCAD drawings showed a real value to the full “stack” of hardware and software. There were a number of hybrid PCs (tablets with removable/hideable/flippable keyboards) that are becoming clearer and more refined—I especially liked the Samsung and Lenovo entries. These PCs are really great for developers and designers—they let you work directly with the code and a client/designer at the same time in both coding and tablet usage styles. As with “phablet,” it seems that the variety of tablets enabled by Windows will be something that continues to bring innovative ideas to consumers.
Green. There continues to be a push to make sure devices are green—while that lacks a concrete definition, most devices are touted as low(er) power than they used to be. With the focus on mobile, most devices are already much lower power than a tower PC of 2 or 3 years ago, and even the 27” all-in-ones are running low-power chipsets and using aggressive OS power profiles. There are numerous power strips that reduce draw and drive “standby” behavior through certain outlets. There were a number of power strips that said they were greener because there was one integrated DC converter for charging USB devices. I loved the case/bag companies using recycled materials to make bags (though it still isn’t clear if this is carbon neutral or not, but for sure the developing world figured out this form of reuse long ago, making carry bags out of rice and grain bags). The most interesting challenge is that really reducing power consumption (and extending battery life) requires hardware and software working together. Hardware companies announce the power draw of the hardware independent of the software platform; devices advertise battery life independent of radio signal strength or app load; manufacturers can create a software profile (drivers and more) that is not optimal for the advertised hardware number. There’s a lot of work involved in getting this right.
Wireless communications. Obviously wireless mobile communications are everywhere, literally. One product due for a revolution in this regard is LifeAlert (“help, I’ve fallen and I can’t get up”). Lifecomm is a Verizon product that houses a full 4G “phone” in a bracelet or dongle. Push a button and your location and information generate an assistance call right to you, and a dialog can also take place—no matter where you are in Verizon’s service area. Super cool. GreatCall has a similar product in a keychain form factor. There were related products for pets and luggage as well, but the one for humans seems to be particularly valuable.
Neat new companies
Everyone who goes to CES always tells you that the smaller companies have the coolest innovations. It takes a lot of energy and some luck to really find one of these. Even if you’re the press and get all the requests to meet, you still have to pick from a thousand choices. I happened to stumble across a couple I thought I’d mention.
Qubeey. This is a startup from the Los Angeles area. That already makes them different, as they are not a “tech” company but a company that uses tech. They think a lot about how to connect entertainment to the audience that cares about it. If you’re a self-professed fan/follower/friend of a talent, then Qubeey provides a way for that person/band/brand to “push” highly interactive content to you that overlays the context of what you’re doing on your mobile device/win/mac. They have cool overlay video technology and even interactive SMS games. It is a unique approach to what amounts to advertising but doesn’t seem like that because you signed up to interact with something you care about.
“Secret”. I had a chance to see a briefing for a top secret gesture based technology that is very nice. This is a technology out of university labs about to be a product for TV/device makers. It uses a single off the shelf camera (like in an iPhone) and then uses CPU/GPU to compute the tracking of your hands for gesture based UI. This is cheaper, smaller, and less complex than other solutions out there. I saw it in action and it works remarkably well—there’s almost no latency between hand movements and tracking. It works at the driver level so it can use gesture to emulate touch with existing games and software. It can be easily trained to track an object as well (like a wand, sword or saber) for games.
Tablet cases. There were a lot of cases for tablets. Seriously there were a lot of cases for tablets from companies big and small, new and old! It is clear that tablets have a need for more protection, keyboards, and stands. I tried to capture photos of some of the variety of cases/add-ons that add style, keyboards, and protection, but also add significant size and weight to what are otherwise sleek and light tablets. Many seem to hinder the ergonomics of the device, unfortunately. I really don’t understand why someone hasn’t built a tablet yet that has a really strong case, built in stand, and a cover that also allows typing. I said free of snark, not free of sarcasm :-)
Here is a great example of the work Panasonic consistently does for universal access. Their voice control TV won an innovation award for universal design.
IEEE was running a poll to determine views on what gadgets are no longer in use (“Gadget Graveyard”). I love the irony–a gadget graveyard from the engineers that brought you the gadgets.
Phew. Another CES. Every year the new products energize me and show just how much creativity is going on in our industry.
PS: Please see the Disclosure page that has been added. The link is on the right rail.