Learning by Shipping

products, development, management…

Why The Heck Can’t We Change Our Product?

I drove by the fork in the road and went straight.
— Jay-Z (see author’s endnote)

One of the most vexing product challenges is evolving the UX (user experience and/or UI) over long periods of time, particularly when advancing a successful product with a supportive and passionate community.

If you are early and still traveling the idea maze in search of product-market fit, then most change is good change. Even in the early days of traction, most all changes are positive because they address obvious shortcomings.

Once your product is woven into the fabric of the lives of people (aka customers) then change becomes extraordinarily difficult. Actually that is probably an understatement as change might even become impossible, at least in the eyes of your very best customers.

The arguments are well-worn and well-known. “People don’t like change”…“muscle memory”…“takes more time”…“doesn’t take into account how I use the product”…“these changes are bad”…“makes it harder to do X”…“breaks the fundamental law of Y”…“what about advanced users”…“what about new users”…and so on. If you’re lucky, then the debate stays civil. But the bigger the product and the more ardent the “best” (or most vocal?) customers, well then the more things tilt to the personal and/or emotional.

Just this past week, our feeds were filled with Twitter rumored to make a big change (or even changing from Favorite to Like), Uber changing a logo, and even Apple failing to change enough. It turns out that every UI/UX change is fiercely monitored and debated. All too often this is a stressful and unpleasant experience for product designers and an extremely frustrating experience for the customers closest to the product. Even when changes are incredibly well received, often the initial response is extremely challenging.

Yet change, even of a core user experience, is an essential part of the evolution of a product. For all of the debates, a product that fails to dramatically change is one that will certainly be bypassed by the relentless change in how technology is used. We rarely consider that most new products (and services) we enjoy are quite similar to previously successful products, but with a new user experience.

Consider that the graphical interface for spreadsheets and word processors replaced whole companies built around predominantly similar capabilities in character-mode interfaces. The competitive landscape for browsers was framed by having minimal interface. Today’s SaaS tools often lead with capabilities similar to legacy products expressed through consumerized experiences and cloud architecture.

Technology platform disruptions are just that, disruptive, and there’s no reason to think that the user experience should be able to smooth out a transition. There’s every reason to think that trying to make a UX transition go smoothly might be a counter-productive or even a losing strategy.

The reality is that if you are not doing more in your product you are doing less, and doing more will eventually require a redesign and rethinking over time. The corollary is that if you are only doing what you’ve always done, but a little better every time, then as a practical matter you are also doing less relative to always-emerging competitive alternatives.

The biggest risk in product design is assuming a static world view where your winning product will continue to win with the same experience improving incrementally along the same path that got you success in the first place.

There are dozens of amazing books that tell you how to design a great user experience. There are seemingly endless books and posts that are critical of existing user experiences. There are countless mock-ups that say how to do a better job at a redesign or how to fix something that is in beta or already changed (see The Homer). Few and far between, however, are the resources (or people) that guide you through a major change to user experience.

No one tells you that you’ll likely face your most difficult product design choices when your product is incredibly successful but facing existential competitive challenges — competitive challenges that your most engaged customers won’t even care about.

This essay is presented in the following sections:

  • User Experience Is Empowerment
  • Everyone’s A Critic
  • Pressure To Change
  • 5 Ways To Prepare
  • 5 Approaches To Avoid
  • Reality

User Experience Is Empowerment

At the most basic human level, the mastery of a tool (a user interface or experience) is about empowerment. Being able to command and control a tool feeds a need many of us share to be in control of our environment and work.

Historically we (all) seek to be the master of our “life tools” whether a shovel, a horse, a car, a PC, or the arcane commands of a modern social network. Something magical happens for a product (and company) when it is so compelling that people spend the time and effort required to master it. At that moment, your product becomes an essential part of the lives of customers, customers who come to believe their world view is shared by “everyone”.

Those customers become your very best customers who see your tool as a path to mastering some of the ever more complex aspects of life or work. Those same people also become your harshest critics when you try to change anything.

A Story

Permit me to share an example of how this empowerment can work, deliberately using an historic example few will remember.

Sometime in the mid-1990s I was a product manager (program manager in Microsoft lingo) on Office and we were trying to figure out how to transition from a consumer app (yes, we used the phrase consumer app back then) to an enterprise platform (that word was new to us). There were many elements to this, but one in particular was the setup and deployment of the Office apps.

As is often the case, customers were ahead of us on the product team when it came to figuring out the most efficient way to copy the Office bits (all 50MB of them) from floppies to a file server to desktops and laptops. Customers had all sorts of things they wanted to do when “installing” Office onto a PC — changing default settings, removing unneeded files like clipart, and even choosing which drive to use for the bits. Many of those could be controlled by the setup program that was itself an app requiring human interaction, save for a few select capabilities. PC admins wanted to automate this process.

As you might imagine, admins cleverly reverse-engineered the setup script file that was used to drive the process. This was a file that did for the comma what LISP did for parentheses. It was a giant text file filled with record after record of setup information and actions, with an absurd number of columns delimited by commas. In fact, and this is a little-known embarrassment, the file was so unwieldy that editing it required a non-Microsoft editor that could handle both the file length and the line width. Crazy as that was, a very large number of PC admins became experts in how to “deploy” Office by tweaking this SETUP.INF. I should mention, one missing comma or unmatched quote and the whole system went haywire since it was never designed to be used by anyone but the half dozen people at Microsoft who understood the system and were backed by a large number of test engineers.
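
To give a flavor of the fragility, here is an invented fragment in the spirit of that file; the real SETUP.INF schema, column names, and values are long gone, so everything below is illustrative only:

    ; one positional, comma-delimited record per action; every column counts
    CopyFile, CLIPART1.WMF, , SHARED, , , , 28672, , , "Clip Art", , , 1, ,
    CopyFile, CLIPART2.WMF, , SHARED, , , , 31744, , , "Clip Art", , , 1, ,
    AddRegKey, , "Software\...", , , , , , , , , , , , ,

Drop a single comma from any record and every value after it silently shifts one column, which is exactly the class of failure that a database plus a graphical tool was meant to eliminate.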

We believed we had a clever idea to solve a broad range of customization, security, and even engineering challenges which was to replace the fragile text file and third party text editor with a robust database and graphical interface. Admins could then push a customized installation to a PC without being in front of it and the PC could even repair an installation if it somehow got corrupted or munged. We really put a lot of effort into solving this with a modern, thoughtful, and enterprise friendly approach. Consumers would see almost no changes.

Then came the first beta test and what could best be described as a complete disaster. Of course the customers in the beta were precisely the same small set of customers that had figured out how to hack the previous system. We had little understanding that these customers had become heroes within the world of PC admins because they could get a new PC up and running with Office in no time flat. Many had become the “go-to person” for Office deployment in an era when that was a big deal. Deploying Office had become a profession.

That should have been success for us on the Office team except we changed the product so much that everything special the admins had mastered had become irrelevant. They had lost their sense of mastery over their environment.

The pain was felt by our team as well. Our goal was to make things better for admins by replacing the tedious and error prone work with a platform and slick new tools. As an added bonus the new platform greatly improved robustness and reliability and added whole new features such as install on demand. All the admins saw, however, was a big change and subsequent loss of empowerment. Looked at through another lens, implicit in the change is a devaluation of the acquired knowledge and expertise of these customers.

It would have been easy to see how to take their very specific and actionable feedback and roll the changes back to what we had before, or to introduce some bridge technology (or other “solution” discussed below). On the other hand, the competitive forces that drove these choices including web-based and browser-based tools were increasingly real. The technology shift was underway. It was clear if we did not change the product and disrupt the established processes we would have only hastened a potential disruption of our business. The market was clear about the failures of the architecture we had in place.

Everyone’s A Critic

If you ever want to go for a record on the world’s longest comment thread then author a post on user interface suggesting how a product should be improved, fixed, re-done, or just rescued. On the internet, no one knows you’re a dog but everyone is a user interface critic and/or designer. Certainly well-intentioned, each person is genuinely responding to challenges they have with a product. Some of the debates over the past few months in Windows circles over the hamburger menu or the discussions around iOS’s 3D Touch show the many sides to how this plays out from friends and fans alike. As I type this, Twitter even has a trending hashtag on rumored changes to the product.

Some of my favorite posts have always been the ones where someone takes the time and effort to do a rendering to improve upon an experience I worked on/managed — “this is what it should look like”. The internet and tools make it easy to turn a complex and dynamic system design into a debate about static pixels and an image or two (or twenty). The ensuing debate might be narrowly focused on specific affordances such as “the hamburger menu” or whole themes such as “skeuomorphic versus flat” design.

The presence of renderings or “design alternatives” only adds stress and uncertainty to what is already an emotionally charged and highly uncertain process, while at the same time creating a sense of authority or even viability. The larger the project the more such uncertainty brings trouble to the design, especially as people pile on saying “why not do that?”. It is worth noting that the internet can also be right in this regard, such as this post on iOS keyboards.

There’s quite a challenge in the tech dialogs around UX. First, those that participate in the dialog are on the whole representative of power users and the technology elite. More often than not, UX design is seeking to include a broader audience with a wide range of skills or even interest in using deeper functionality.

Second, the techies (to use @waltmossberg’s favorite phrase) often prefer innovations that enable shortcuts or options to existing features they see flaws with rather than doing whole new things. Techies tend to want to fix the shortcomings in what they see, which is also not always aligned with solving either broader usage challenges or even business problems. The irony is that you’re far more likely to participate in a UX discussion/debate with people with very different starting and end points than those for whom your design is intended.

While every person can be a critic (often even for products that they do not routinely use), it is also the case that every UI can become old or at least static and open to criticism. Interfaces age both internally (the team) and externally (the market). Internally, you might hit a wall on where to evolve. Customers fall into routines and usage patterns. Nothing new you do is recognized or used.

Externally, your shiny new experience that replaces some old experience (for doing slightly different or totally unrelated things) will eventually become the type of experience that gets replaced. While some think of this in terms of stylistic trends like transparency or gradients, the truth is there are functional aspects to aging as well such as interacting with touch or gestures, bundling of different feature sets, or macro trends in visual design.

When you put together the large number of critics and the certainty of your experience aging, you’re in for a challenging time evolving your interface. Whether you have a consumer app or site with a billion users, a commerce site, a productivity tool, or a line-of-business app, the challenges are all the same. While the scale or direct economic impact might differ, to those designers and product managers working the problem the challenges and decisions are the same, and to their customers the frustration is just as real.

Pressure To Change

Changing a user experience should in theory be no more or less difficult than a major re-plumbing or even creating the first experience. Some things make changes more difficult, or perhaps at least more open to direct criticism. With a backend change, the most visible changes might be performance or uptime (to be fair, the debates about change within product engineering are just as contested). With UX changes (additions, subtractions, reworking) everyone sees them through a more complex lens.

Changing something that people have an emotional connection to is difficult. An emotional connection creates expectations or even norms, and the natural human reaction is to defend the status quo and maintain control. The discussions of change rapidly deteriorate into preference, taste, argument by analogy, or assertion, all of which are very difficult to counter when compared to facts, stopwatches, or physics.

In my experience there are several key pressure points that drive change, beyond the most obvious of fixing what is broken. You can accept or reject these and advocate for change yourself, or leave room for your competitors to capture the leadership and change. Depending on context, wisdom could be found from many perspectives.

Pressure to change confronts a successful and engaging product because of:

  • Evolving use cases
  • Locating new capabilities
  • Discovering features
  • Increasing tolerance of complexity
  • Isolating change leads to complex analysis of benefits
  • Competing products and/or changing expectations

Evolving use cases. You might design your product to solve a specific key scenario, but over time you find a different set of use cases coming to dominate. This in turn might require rethinking the flow through the product or the features that are surfaced. In a sense this can be seen as evolving the product to meet real world usage versus theoretical usage, except it will still be a change. We see this in the amount of UX real estate legacy tools devote to print-based formatting or layout design compared to the next generation of tools that surface collaboration and communication features as primary use cases. The changes in Facebook and Facebook Messenger demonstrate this driver. A relatively minor scenario saw increased usage and strategic value driving a significant, and hotly debated, experience change.

Locating new capabilities. As new capabilities are added, most all of the time those features require some UX affordance. Almost never is there room in the existing product for the new feature. In evolving Office, we reached a point where we literally ran out of room on menus and then on toolbars (originally the goal of toolbars was to be a shortcut for things on the menus or dialog boxes, but soon features on toolbars had no counterparts in menus and dialog boxes). This challenge is even more acute in today’s mobile apps that are often “filled” from the very first release. This design challenge can be likened to trying to add a new major appliance to an existing kitchen — there is almost never room to add one until the next major redesign when flows are reconsidered. As a result, when it comes time to add UX for entirely new scenarios your product will change significantly. Recently, LinkedIn chose to invest heavily in content authoring and sharing but the core experience was aimed at jobs and resumes. The new mobile app, which was reviewed positively in many cases, changed the focus substantially to these new scenarios.

Discovering features. It is easy to want common features to be easy to use and new features to be discoverable, but those are increasingly at odds as a product evolves. The first challenge is just in finding the screen real estate for new capabilities as discussed above. More likely, however, is that new capabilities will be subordinate to existing ones in terms of surface area. This leads to affordances such as first-run overlays to explain what all the product might do or what gestures are available, which is itself added complexity (and engineering!). One also sees new capabilities with a disproportionate “front and center” placement in an effort to increase discoverability. Often this results from a “marketing” need to drive awareness of the very features being used in outbound marketing efforts.

Over time, A/B testing or usage data will then drive additional change as features are rotated out to make room for new. This all seems quite natural, but also clearly drives complexity or even confusion. This in turn raises the challenge of even changing a product in the first place. Most Google productivity tools we use experience this challenge. Gmail and Apps are increasingly complex and it is getting more difficult to discover capabilities. Historically Google had Labs features to explore new areas and now even Inbox is a whole new experience for mail.

Increasing tolerance of complexity. Everyone loves simplicity and certainly every designer’s goal in creating a system is to maintain the highest level of simplicity while providing the right functionality. Over time there is no way to remain simple: as more features are added, the ability for someone new to the system to command it necessarily decreases, and the usage of the system’s breadth decreases. Nevertheless, people become accustomed to this growing complexity. It creates a moat relative to new entrants and a barrier to change. People loved the ironically named Chrome browser when it arrived because it was so clean and simple. Few would argue that level of simplicity remains today, yet the complexity is embraced and there’s little opening for a browser that provides less functionality.

For all the criticisms directed at the complexity of Microsoft Office, few switched away to products that do less simply because they were simpler. If you ever doubt the ability for people to tolerate more complexity, then just look at the old version of any famous site, app, or program. You’d be amazed at how sparse it is. The pressure to reduce and simplify comes from everywhere with technology products, but sometimes a failure to embrace a level of complexity can prevent important and strategic change. The most adaptable part of the entire technology stack is the human being at the very top.

Isolating change leads to complex analysis of benefits. A user experience, new or changed, is almost always viewed in isolation. New UX is viewed relative to the small number of initial capabilities and the ease with which those are done compared to existing solutions (i.e. make a voice call on the original iPhone). Changes to products are viewed through the lens of “deltas” as we see in reviews time and time again — reviews look at the merits of the delta, not the merits of the product overall relative to new scenarios that might be more important and old scenarios that might be less important now (as user needs evolve). When viewed in isolation, change is amplified, which then makes change more difficult to execute, absorb, or even accept.

Isolation results in intense levels of discussion among the technologists as alternatives are proposed (after the fact) even for very small changes. More importantly, this dialog amplifies the value of small changes which in the scheme of things will do nothing to improve the business and everything to prevent larger and more strategic changes from happening. Platforms providing horizontal capabilities to broad audiences are notorious for these debates in isolation. Consider the transition iOS made to a new visual design, which is now a distant memory.

Competing products and/or changing expectations. The biggest and most important drivers for change are the external market forces of competition. The previous drivers are all within your own world view — these are changes you are driving for products you control with inputs and feedback you can monitor. The competitors you view as strategic are incredibly important inputs relative to the longer-term viability of the business.

The fascinating thing is that your best customers are the least likely to be worried about your longer term strategy, especially if they have bet their jobs and are empowered by your product. In fact, they will be just as “dismissive” of competing products or new approaches or solutions as your highly paid sales people that are continuing to close deals or the self-taught expert who can’t wait to join the product team. As a technologist you know that your product will be replaced or superseded by a new product and/or technology. It is just a matter of time.

The most important thing to consider is that it is almost never the case that your direct competitor will serve as motivation for changing expectations. The pressure to change will come from unexpected substitutes or newly crafted combinations of a subset of existing capabilities. Ironically, most all of your inputs will come from people and members of the team/company focused on your direct competitor (and the bigger your presence the more likely this will be).

5 Ways To Prepare

The first rules of product design relative to change are to expect it and to prepare for it. It is commonplace to remind ourselves in product design that the enemy of the good is the perfect, but relative to evolving experience the enemy of the good is the past.

Assuming that today’s user experience encompasses the value of your product tomorrow is certain to get you in trouble (just as assuming some specific code or API is the core of your value). A comforting way to approach this is to remind yourself that before your current successful user experience there was a successful experience that was widely used. People gave up that product to learn and use your new and different product.

The following are five ways to prepare your design for a future that will require you to change:

  • Solve the n+1 problem up front
  • Design for the choices you know about
  • Optimize only to a point
  • Decide your app strategy early on
  • Flat is your friend

Solve the n+1 problem up front. One of the most common times a new feature causes UX churn is when you’re adding something you knew about but didn’t have time to engineer. I call this n+1 because across the product there are places where your experience (and code) assumed a finite number of choices and then down the road you find you need one more choice. Commonly this is seen in choices like photo filters, email accounts, teams/channels, formatting options, and so on. These changes are recognizable when you go from no choice to a choice, or need to switch from binary/ternary to some list.

The warning signs for this potential change come very early on because you either cut the feature or the feedback is everywhere. It is almost always the case that these are core flows in the experience so designing up front can be a big help. Incidentally, this also holds for engineering and the product architecture where the highest cost additions are often when you need to go back and engineer in a level of indirection to solve for choice where there was no architecture. Some might see this as counter to MVP approaches but nothing comes without a cost.
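
A minimal sketch of the n+1 trap, using invented names (Python here purely for illustration): the first version hard-codes the assumption of exactly one choice, and everything downstream quietly depends on it.

    # Version 1 assumes exactly one email account; the assumption leaks into
    # every screen, preference, and sync path that touches it.
    class SettingsV1:
        def __init__(self, account):
            self.account = account

    # The n+1 version: the same concept as a list. Retrofitting this forces
    # new UX (account pickers, per-account options) plus a sweep of every
    # place that assumed "exactly one".
    class SettingsV2:
        def __init__(self, accounts):
            self.accounts = list(accounts)
            self.default = self.accounts[0] if self.accounts else None

Designing in the list (the level of indirection) up front costs little; retrofitting it after release is the expensive change described above.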

Design for the choices you know about. As a corollary to above, there is a design approach that says to leave room for the unknown, “just in case”. Such a design often leaves open space in the interface that stares at you like an unused parking spot at the mall. At first this seems practical, but over time this space turns into an obstacle you must work around because nothing ever seems to meet the bar as belonging in the space.

On the other hand, this also serves as a place where everyone on the team is battling to elevate their feature to this valuable “real estate”. Even more challenging, your best fans will have a million ideas for how to fill up this space (and renderings to demonstrate those ideas), and too often that amounts to using it to provide shortcuts to existing functionality. Preparing for what you don’t know by compromising the current does little to postpone the inevitable redesign and does a lot to make the current design suboptimal.

Optimize only to a point. Optimize to a point and recognize that you will change, and assume that the vast majority of input will be focused on areas you were not really expecting (someone on the team probably was expecting them, but not everyone). In preparing for a future of change, one of the most difficult things to do in design is to recognize that where you are at a given point in time in a development cycle is good enough to ship. Stop too soon and the risk of missing is high. Stop too late and the reluctance on the team to change down the road is only increased because of sunk costs and too much historical baggage.

The most critical rule of thumb in product design is that a product releases “as is” and does not come with all the designs you considered or could consider. When it comes time to disrupt yourself with significant changes, do not underestimate the amount of institutional inertia that will come from a few years of researching and testing every possible alternative to a design. The expression often used, “peacetime generals are always fighting the last war” applies to design and product choices as well.

Decide your app strategy early on. A strategic question facing any broad-based product will be how many mobile apps you need. In the enterprise, if you’re building a full ERP system there’s no way to have a single app, but it could also be very easy to create a sort of app shrapnel and replicate the 1,500 legacy web sites that the average large corporation maintains. If your product has either a desktop or web solution and apps are being added, you have to decide early on if your app is a scenario-based companion or the primary/only way you expect people to use a service — you might be considering a mobile capture app to go with a web-based analysis app, or keeping the admin tools on the web to accompany mobile workers in apps.

It is very difficult to switch mindsets down the road so this choice is key. A valuable lesson (in disruption) was learned during the transition from desktop to web. The prevailing broad view that web apps would be supersets of desktop apps proved to be true as many believed, but it just took about three times as long as people thought. If you believe your mobile app is a companion to your site, just be prepared for a large number of customers that only want to access over mobile even if they are not doing so today.

Flat is your friend. Programmers and designers often love hierarchy — hierarchy helps our computer brains to organize and deal with complexity and most techies have no problem navigating hierarchy. Unfortunately most people long ago failed to grasp the Dewey Decimal system and search seems to win out over hierarchical organization in most every instance. Aside from that, the most frustrating changes to experience come when you reorganize a hierarchy (trust me on this one).

Hierarchy is the source of muscle memory and also where much of a sense of mastery comes from. The power users are the people that know where features are hidden or how to drill through panels to find things. Hide and Seek or Concentration are great games for the right people, but a poor way to do user experience. A solid approach to avoiding a future reorganization is to see how flat you can keep your experience. SEO or A/B testing (or marketing) will always push to keep things above the fold, oddly motivating hierarchy, rather than favoring scrolling, which most everyone understands. The alternative of click/tap to a new place is way more disruptive both today and down the road.
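
A toy illustration of the difference, with hypothetical feature names (Python for brevity): a hierarchy demands that the user memorize a path, while a flat, searchable structure demands only a word.

    # Hierarchy: the user must know the path, and any reorganization breaks it.
    nested = {"Format": {"Paragraph": {"Spacing": "spacing_dialog"}}}
    feature = nested["Format"]["Paragraph"]["Spacing"]

    # Flat: one namespace, findable by search, stable under reorganization.
    flat = {"spacing": "spacing_dialog", "margins": "margins_dialog", "share": "share_sheet"}

    def find(query):
        """Return the names of commands matching a partial query."""
        return [name for name in flat if query.lower() in name]

    print(find("spac"))  # ['spacing']

Reorganizing the nested version invalidates every memorized path; reorganizing the flat version changes nothing the user relies on.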

We all wish we could be fully informed about the way we will evolve our product and the way competitors will provide a unique view into the space we are going after. That is never as easy as it could be. The above are just a few ideas to consider if you start from the mindset that once you achieve success you will end up going through a user experience change.

5 Approaches To Avoid

With a successful and deeply used product, when you do make a significant change to your experience the feedback is often swift and clear, and universal praise is exceedingly rare. As a result, the product team discussion will move from expressing frustration to proposing solutions very quickly. Even if you were expecting some pushback, it is never pleasant. At that moment, there is a limited design vocabulary available to make unplanned adjustments, perhaps even more limited than the engineering time you have to execute (assuming that as with most projects you are under pressure to complete all the new stuff).

Similarly, you might anticipate some pushback and consider a proactive approach with objection handlers or scheduled time for feedback. Regretfully, the solution set is the same since the problem is the change itself, not the way you are changing. The only difference is that the more you engage in defensive engineering efforts the less time you have to get the new work done. More importantly, time spent on salves or bridges only takes away from the existential competitive dynamic that is motivating the need for change.

These potential solutions all arise from the same place, which is that your early adopters, best customers, and front-line sales are all successful with your product and resisting a big change. The resistance is natural — the feeling of empowerment and familiarity. Remember, the reason you are making changes is because your successful product, in your best judgement, is facing an existential threat. This threat is not coming from these early voices, but from the customers you have failed to acquire or are not likely to ever acquire. You are making changes to support future growth, not to incrementally improve things for your existing customers.

While the 5 approaches outlined below are typical, they often backfire in predictable ways which is why they should be avoided:

  • Add a new mode
  • Offer customization
  • Solve with a UI level of indirection
  • Downplay the changes
  • Redesign quickly

Add a new mode. Enthusiasts, marketing, and enterprise customers have no problem with change so long as you add an option they can use to get back to the old way of doing things. This feedback can be pretty sneaky. They will say that the option can be hidden and hard to get to because it is really only for power users or admins, or just an objection handler for the sales process. They might even tell you that you can take away the option after some time to adjust. By the way, another variant of this request is to just provide an option to “hide the new stuff”. You see how this is a sneaky ask — eventually there will be something in the new stuff that even these customers will want but without interfering with the existing “old” way. It certainly seems lightweight enough.

The challenge is twofold. First, once you can get back to the old way of doing things then everyone will want to know why that option exists. “Is the new way not good enough?” will be a common refrain. Second, once you have such an option you are designing for two experiences all the time. Everything you add needs to consider the old way and the new way of getting to new product areas. Not only is this super difficult, it is expensive and it takes away from forward-looking strategic needs. In general, modes, whether user-directed or contextual, are a way to postpone making a strategic choice about the future of your product and advertise your own indecisiveness.

By the way, technology enthusiasts love modes because modality (I suppose dating back to vi) implies hierarchy, control, choice, and a priori knowledge of where you are heading in the product that most people don’t have. Almost all choice and modality is ultimately ignored by customers, and when the product magically switches modes there is almost always a level of frustration that comes from the unexpected behavior changes (even think of the cleverness of having views defined by portrait or landscape, which tend to be confusing in practice).

Offer customization. Customization permits you to make big changes with three mitigations.

First, you can allow people to customize your functionality one setting at a time until it returns to where it was. Often this is how a product evolves as it tries to automate previously multi-step processes. For example, if you used to manually turn off the lights at home via some IoT app but then add machine learning to guess, the first time the lights are wrong the answer will be to disable the new automation (AutoCorrect in Word was like this). You need to get something right or handle it gracefully when you’re wrong, but turning it off means it will never be successful.

Second, customization is often used to rearrange the user interface to get it back to where it was before, when it was good. Maybe you took away a share button to use the one provided by the OS, or maybe you added a few tabs to the top-level UI — well then, just add a switch to move things around.

Third, customization can be used when you want to add something and the team can’t even decide whether it is a good idea, so you add your own way to turn it off or hide it. All of these have the same downstream problems: setting a risky precedent that can’t be maintained (i.e. everything new comes with a way to change it) and adding combinatorics that can’t be managed (the testing matrix).
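
Back-of-the-envelope math on that testing matrix (the numbers are illustrative): every independent on/off customization doubles the number of configurations to verify.

    # Each independent on/off setting doubles the test matrix.
    options = 10
    print(2 ** options)   # 1024 configurations from ten "harmless" switches

    options = 20
    print(2 ** options)   # 1048576 configurations; no team can test this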

Making up your mind is the best approach. Longer term, the disruptive innovation will come when the new product subsumes your product, and at that time customizing how your product works will prove to be extraordinarily low-level, almost like a debugger. Think this will never happen? Then look at all the options in Office that we totally sweated over. Again, technologists love customizations, so you are almost certain to get positive feedback and strong encouragement to provide customization.

Solve with a UI level of indirection. The hamburger, Tools|Options, right click, sub-menus, and more are all ways of adding things without adding things or hiding things you add but don’t want to add. There’s no magic answer to where to squeeze all the things you need in a design into a (very) finite space, but for sure if you find yourself putting something new behind a level of indirection then think twice.

Once you think you can get away with “change” by putting it behind a level of indirection then you might as well not do it. Sometimes this type of approach takes place in enterprise products where you are responding to a competitive dynamic but you don’t agree with the competitor or the competitor’s approach is at odds with your overall design. The theory is to add the checkbox but not break your overall model or experience. Only you can judge whether you are seeking credit with reviewers and analysts or actual humans but be careful thinking that you have solved the problem and not just created a future problem.

The discoverability of your work hidden behind a level of indirection is minimal so always ask if you’re doing the work for customers or to make yourself feel like you’re addressing a business or customer need.

Downplay the changes. If you go through the work to understand why you are going to make a big change, then design and engineer a change, the very worst strategy is to downplay the effort when it comes time to communicate what you did.

If you make a big change and talk about it like it is a little change then many will wonder if you are confused and/or lack empathy. The challenge is that this one is the easiest of all responses since it is simply a different tone or wording in a post describing what is changing. You choose the right screen shots or feature names and things can look more familiar. When customers who were expecting the same or only incremental changes see what you’ve done, this tends to increase the backlash, and the internet loves a good dose of corporate “wool over the eyes of customers”.

Since significant strategic challenges are driving this change, backing off is a worst of all worlds reaction — you send the message to the market that you’re fighting the last war, thus not engaged in the future, and customers come to expect that as well.

Redesign quickly. If how you communicate the change is the short term response, then the medium term response is to quickly redesign what you just spent a lot of time designing. How quickly can you back out the most egregious changes? How can you undo things with as little engineering work as possible? What if you just added a couple of old things back front and center? These are all things that will be rapidly floated within the team.

The most fascinating aspect of this response is that this is what the internet will do for you, both quickly and broadly. The reason is that once a new design is put forth, incremental changes to that design along the lines of “do this instead” will be offered by the community that is empowered by your product. There will almost certainly be good ideas amongst all of these, and even more likely they will be alternatives you considered (we never really learned why Apple so steadfastly refused to highlight the shift state on the iOS keyboard but it is impossible to believe this was not discussed).

Design and engineering are difficult and we all know that the likelihood of mistakes increases with the pace of reactionary change.

Reality

As this post keeps saying, the reason it is so painful and frustrating to change user experience is because right at the moment that your product has successfully reached the point of being empowering and critical to the jobs and lives of your customers it is also facing the most existential competitive and marketplace challenges.

The reality is you have to respond to the marketplace. You can choose to continue to iterate on the same path with the same customers. In the technology world that is, with as high a certainty as you can count on, a focus on a shrinking market. Disruption is real and so far it is proving to be much more of a law than a theory.

Is the time to change right? Is the design you chose the right one? Are you focused on the right strategic competitors? The other reality of technology change is that most often the forces that keep a product and company from transitioning from one generation to the next are not an understanding or ability to debate these choices, but the ability to execute across product, engineering, marketing, and sales.

The really good news about all of this is that if you can create the product change and go-to-market execution, the reality is that short-term memory is a real thing, especially in a growing market. If you can make changes that secure new customers and grow or that your typical customers can adjust to without “incident” then there’s a really good chance that memory will be short term.

Those customers that chose to stick with character interfaces, or would not move off a web app, got left behind by graphical and mobile users. This happens with every technology platform shift and happens within every category. Growth is the friend of change, and if you’re not growing you are by definition shrinking.

I’d like to add one last reality for everyone who both made it this far and is out there critiquing new designs for products they use and love. The people working on products you love are on average as good as you, as thoughtful as you, and as informed as you. They are all open to feedback and good two-way discussion. Treat them the way you would like to be treated in the same situation.

Steven Sinofsky (@stevesi)

Author’s note. I’ve never used a music lyric quote and don’t mean to steal Ben’s intro, but this quote from this song has special meaning to me in this particular situation.

This post originally appeared on Medium.

Written by Steven Sinofsky

February 8, 2016 at 12:00 pm

5 Ways to Compete With [Big] Incumbents

In The Stack Fallacy: Why Big Companies Keep Failing, Anshu Sharma writes about how difficult it is for a [big] company to move up the stack to adjacent businesses/product categories by building on their successful base. If you are competing against one or more incumbents, even if you believe they will ultimately fail because of this fallacy, it is still an incredibly challenging competitive situation. Using some typical weaknesses as your competitive strengths can increase your potential for success when you are the next part of the stack a big company takes on.

In a competitive environment, often a “checklist” battle dominates. This is especially true if you are competing with an enterprise incumbent. There are many ways to compete with a company that has more resources, existing customers, and access to broad communications channels. You can be systematic in product choices and communication approaches and strengthen your overall competitive approach.

You can think of these as the Jiu-Jitsu of the Stack Fallacy — using the reasons competitors can fail as your strengths:

  • Avoid a “tie is a win”
  • Land between offerings or orgs
  • Know about strategy tax
  • Build out depth
  • Create a job-defining solution

This post is mostly from an enterprise competitive perspective, but the consumer and hardware dynamics are very much the same. While some of this might seem a bit cynical, that is only the case if you think about one side of this battle being better than another — in practice this is much more about a culture, context, and operational model than a value judgment.

Avoid a “tie is a win”

The first reaction of an incumbent (after ignoring then insulting the competition) is to build out some response, almost always piggy-backed on an existing product in an effort to score a “tie” with reviews and product experts (in the enterprise this means places like Gartner). The favorite tool is the “partner” or “services” approach, followed by a quick and dirty integration or add-in. Almost never do you see first party engineering work to compete with you, at least not for 12–24 months following “first sighting”.

Their basic idea is to clear the customer objection to missing some feature and then “get back to work”. In enterprise incumbent-speak, “a tie is a win”.

The best way to compete with this behavior is to go head to head with the idea that a checkbox or add-in does away with the need for your service; worse, such an implementation approach will almost always be insufficient over time and hamstrung by the need for integration.

Don’t worry about your competitor pointing out the high cost of your solution or the burden of something new being brought into the enterprise. Both of those will become your strengths over time as we will see.

Land between offerings or orgs

The incumbent’s org chart is almost always the strongest ally of a new competitor. The first step is to understand not only where your competitor is building out a response, but the other product groups that are studying your product and getting “worried”. Keep in mind that big companies have a lot of people that can analyze and create worry about potentially competitive products.

You can bet, for example, that if you have any sort of messaging, data storage, data analysis, API, or visualization product and compete with the likes of Oracle, Salesforce, Tableau, or another big company, then several groups are going to start thinking about how to incorporate your product in their competitive dialog.

You can almost declare success when you hear from your customers that your product has come up in multiple briefings from a single company. Perhaps the biggest loss in a large company is when a rep loses a deal to a competitor, and news of that travels very fast and drives tactical solutions equally fast — tactical because they are often not coordinated across organizations.

When you find yourself in this position, two things work in your favor. First, there’s a good chance you will soon find yourself competing with two “tie is a win” solutions, one from each org — white papers talking about partners who can “fill in the gaps” or add-ins that “do everything you need”, for example. No P&L or organization wants to lose a deal to a competitor.

Second, you will have time to continue to build out depth because the organizations will begin the process of a coordinated response. This just takes a long time.

The best thing that can happen at this point is if you have a product that competes with two larger companies. In that situation you can bet that you are the thing those companies care the least about, and what they care the most about is each other. You might find yourself effectively landing between many organizations, and that spot in the middle is your whitespace for product design and development — go for it!

Know about strategy tax

Once an organization grows and becomes successful, one of the key things it needs to do is define a reason for the whole to be greater than the sum of the parts. The standard way incumbents do this is to have some sort of connection, go-to-market, feature, or common thread that runs through all the offerings. This defines the company strategy and the reason why a given product or service is better when it comes from a particular company (and also the reasoning behind a company being in multiple businesses).

In practice, the internal view of these efforts quickly becomes known as a strategy tax. From a competitive perspective these efforts are like gifts in that they make it clear how to compete. For example, your product might have integrated photos but your competitor needs to point customers to another app to deal with photos. Your product might be supported by channel partners but your competitor will only sell direct (or vice versa). This can go to an API level, particularly if you compete with a platform provider who is strategically wedded to a specific platform API.

A classic example for me was the Sony Memory Stick. If you were making any device that used removable storage then you were clearly going to use CF or SD. But there was Sony, marching forcefully onward with Memory Stick. It was superior. It had encryption. It had higher capacity (in theory). At one point after a trade show I left thinking they are going to add Memory Sticks to televisions and phones, and sure enough they did. What an awesome opening if you needed to compete with a Sony product.

A strategy tax can be like a boat anchor for a competitor. Even when a competitor tries to break out of the format, it will likely be half-hearted. Any time you can use that constraint to your advantage you’ll have a unique opportunity.

Build out depth

The enemy of “tie is a win” is product depth. Nothing frustrates an incumbent more than an increasingly deep feature set. Your job is to find the right place to add depth and to push the incumbent beyond what can be done by bolting capabilities into an existing product via add-ins, partners, or third parties.

Depth is your strength because your competitor is focused on a checkbox or a tie, figuring out the internal organizational dynamics of a response, or strategizing how to break from the corporate strategy. While you might be out-resourced, you are also maniacally focused on delivering on a company-defining scenario or approach.

The best approach to building out depth is to remain focused on the core scenario you brought to market in the first place. For example, if you are doing data visualization then you want to have the richest and most varied visualizations. If you have an API then your API should expose more capabilities and your use of the API should show off more opportunities for developers.

There’s a tendency to believe that you need to build out a solution that is broad and to do that early on. The challenge here is that this takes you into the incumbent’s turf where you need to build not only your product but the existing product as well. So early on, push the depth of your service and become extremely good at that — so good that your competitor simply can’t keep up by using superficial means to compete. This example from Slack crossed my feed today and shows the depth one can go to when there is a clear focus on doing what you do better than anyone else.

Your goal is to expand the checkbox and to move your one line of the checkbox to several lines. This is how you change the “tie is a win” dynamic — with depth and ultimately defining a whole category, rather than one item.

Create a job-defining solution

When building a new product and company, one of the most significant signs of success is when your product becomes so important it is literally someone’s job. Once you become a job then you are in an incredible feedback loop that makes your product better; you have an opportunity to land and expand to other parts of a big company; and you have an advocate who has bet a career on your product.

New products have a magic opportunity to become job-defining. That’s because they enter a company to solve a specific problem, and if that gets solved then you have not only an advocate but also a hero within the company. Pretty soon everyone is asking that person how they get their job done so much better or more efficiently and your product spreads.

The amazing thing about this dynamic is that it often goes unnoticed because rarely are you replacing entirely something that is already in use, but simply augmenting the tools already in place. In other words, the incumbent simply goes about their business thinking that your product just complements their existing business.

This obviously sounds like a big leap to accomplish, but it speaks to the product management decisions and how you view both the product and customer. With enterprise products it is almost always a two step process. First you solve the specific user’s problem and then you solve the problems the IT team has in using the product as part of a business process (i.e. authentication, encryption, mobile, management).

This works particularly well because your incumbent’s product has already achieved this milestone, and it is their product that (a) is not working and (b) is almost certainly some other function’s job-defining software. It is another way of landing in the whitespace of the organization. The owner of your competitor’s job is not looking for more to do, especially not someone else’s job, so you have some clear road ahead.

The challenge of existing winners breaking into new or adjacent businesses is real and difficult. Very rarely does this happen. Because of the inherent obstacles, both technical and cultural, new products have specific entry points from which to compete.

Steven Sinofsky (@stevesi)

This post originally appeared on Medium.

# # # # #

Written by Steven Sinofsky

January 29, 2016 at 9:44 am


CES 2016—Observations for Product People

Photo: This is not me.

CES is the best place to go to see and learn about making products. In one place you can see the technology ingredients available to product makers along with how those ingredients are being put together and how they are interacted with and connected to customers.

I love going to CES and walking the show floor north to south, convention center to Sands and seeing and touching the products, including the way random show-goers perceive and question what is out there.

As much as I love attending, I also love taking a step back and thinking (and writing) about what I learned. Doing so provides great context for me in working with startups on their products, talking with enterprise customers about their needs, and partnering with bigger companies to enhance their go-to-market efforts.

As a reminder, CES is not a big electronics store nor is it a research lab. It is somewhere in between. While there are many ready-to-buy products on display, most are not yet ready to use. Many of the most interesting technologies are not yet in products. Most companies are working to put forth their best vision for where things are heading. It has always worked best for me to think about the show directionally and not as a post-holiday shopping excursion. Equally important is keeping in mind that I’m not the customer for everything one can see.

This is a long post. The breadth of CES is unprecedented. The show is not “consumer electronics” or even “home entertainment” but every industry. Where else would you see booths from car companies, delivery services, film studios, computer makers, electronics component makers, cable TV companies, mobile phone carriers, microprocessor and chip makers, home improvement superstores, and so on? From startups to mega-caps, from every country, from supply chain components to complete products, everything is represented. The opportunity is unique.

CES has become a software show. Even the interesting hardware is dominated by firmware, cloud services, and connectivity. It is increasingly clear that if you’re interested in software you have to be interested in pretty much every booth. I’ve heard software is eating the world and that’s on display in Las Vegas.

The major observations impacting product makers and technology decision makers on display at CES 2016 include:

  • Invisible finally making a clear showing (almost)
  • Capable infrastructure is clearly functional (almost)
  • Residential working now, but expectations high and software not there
  • Wearable computing focusing on fitness
  • Flyable is taking off
  • Drivable is the battle between incremental and leapfrog
  • Screens keep getting better
  • Image capture is ubiquitous
  • Small computers better and cheaper for everyone
  • Big computers better but not game changing

Invisible finally making a clear showing (almost)

For many years much of the show floor was dedicated to the problems of where to store bytes, how to move those bytes around a network, how to type, or even how to convert bytes from one format or device to another. What’s most amazing is just how much of all of this is now simply invisible. The whole industry has moved up the stack.

If you go through all the winners of CES “best of show” (note, wow there are a lot of winners!) most all of them have a few things in common:

  • No local storage (for customers to deal with)—everything is cached from the cloud or streamed (i.e. no media servers, no hard drives, no formats to worry about, no backups to do). Yay!
  • No wires—everything is wireless. Even better, most everything is WWAN (mobile), Wi-Fi, or Bluetooth. This is infrastructure that is now normal—meaning not a point of differentiation or confusion—the mobile ecosystem and supply chain all but guarantee this connectivity and capability. Almost nothing has an RJ-45 network jack and anything that might require one has some sort of wire-closet hub to separate the actual device from the wired connection. Most everything easily tunnels to your smartphone via cloud services. Yay!
  • No buttons—everything has a touch screen and there are few buttons to deal with. When a complex user experience is needed, it is almost always done with a mobile app (more on that below). What was amazing was just how rare it was to even see a keyboard and certainly gone are rows of rectangular buttons. Yay!
  • Almost no mains—a lot of focus is going to long-life batteries, solar, and certainly wireless charging. Many of the winners, such as Bluetooth location devices, cameras, and home automation/security, operate on batteries lasting a year to two years. That's long enough to probably never change the battery and just replace the device with the next generation! Put devices where you want and access them from anywhere. There's a massive amount of cool engineering and clever approaches that go into being ultra low power. Yay!

This set of attributes represents the starting point for most any product. It is also a huge opportunity for consumers because it means the ability to adjust devices over time, even for residential equipment, is much easier than the past. Imagine when you move, you can just relocate your security camera, for example.

To be fair, there are some wires, but we are down to three: Apple Lightning, USB C, and HDMI. USB was ubiquitous throughout the show, and devices that should use USB C (like new PCs) but didn't have it looked like they missed out. Given wireless video casting, even HDMI cables will fade into the background for most people. I'm beginning to think Apple Lightning looks more and more like FireWire: superior at the time, but the industry caught up faster than expected. It might even be the case that HDMI will move to the USB C connector form factor (not protocol). Going to/from these three cables is also easy, which is great.

IMG_0985

USB C was everywhere.

One fun note is that quite often you see a product that seems clever and/or odd and then you see it again, and again. This year, I saw the identical USB charge station a dozen times. This is the China manufacturing and distribution system at work.
IMG_0921

USB charge station seen all over the floor.

 IMG_0918
USB charge station for when you’re really serious about charging (C version on the way!)

Capable infrastructure is clearly functional (almost)

It is interesting to see the mobile supply chain’s relentless focus on continued integration drive very capable infrastructure into nearly every single device.

Going back, there would have been a CES where “wireless music” was a thing all by itself. Or maybe you recall a CES where just being able to have a camera was a big deal. Most probably remember when GPS was a “thing”.

CES 2016 shows that all of these scenarios have come together: in basically anything you want to make, one can have all of these capabilities (and more), or pick and choose easily which to expose. From a base capability perspective this includes:

  • Attaching a camera and sharing captured images/video
  • Streaming audio
  • Controlling the power state and moving it around
  • Locating the device
  • Alerting those nearby with sound, vibration or those far away with mobile alerts
  • Lighting the device with tiny LEDs of any color that never burn out and consume little power
  • Uploading sensor data from the device
  • Sensing the environment

All of these are available to product makers, and acquiring them discretely would likely be harder and more expensive than essentially taking a mobile phone BOM and making a device. If you talk to the makers at the booths, most every device has more capabilities in hardware than is being exposed in the current release of software. Cameras are capable of 4K, SIM slots go unused, sensors collect but don't share, and so on.

The big challenge is no surprise. Software development is unable to keep up with the hardware. What is going to separate one device from another or one company from another will be the software execution, not just the choice of chipset or specs for a peripheral/sensor. It would be hard to overstate the clear opportunity to build winning products using stronger software relative to competitors. Said another way, spending too many cycles on hardware pits you against the supply chain for most products.
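
To make the software point concrete, here is a minimal sketch of the kind of device-side code in question: uploading a sensor reading to a cloud service. This is purely illustrative; the endpoint, device id, and sensor are hypothetical placeholders, not anything a specific exhibitor ships.

```python
# A minimal sketch of device-side telemetry upload; the endpoint URL,
# device id, and sensor reading are hypothetical placeholders.
import json
import random
import time
import urllib.request

ENDPOINT = "https://telemetry.example.com/v1/ingest"  # hypothetical

def read_soil_moisture() -> float:
    """Stand-in for a real sensor driver; returns a percentage."""
    return round(random.uniform(20.0, 60.0), 1)

def upload(reading: float) -> None:
    payload = json.dumps({
        "device_id": "flowerpot-42",  # hypothetical device
        "ts": time.time(),
        "soil_moisture_pct": reading,
    }).encode()
    req = urllib.request.Request(
        ENDPOINT, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=5)

# Real devices batch readings and sleep aggressively between uploads
# to hit the multi-month battery life on display at the show.
upload(read_soil_moisture())
```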

Some of the devices that pack in most of these capabilities: a rubber duck (speaker, remote control), knit cap (music player), light bulb (speaker, camera, climate), walkie-talkies (location, camera), power strip (remote control, telemetry, power usage report), flower pot (soil water level, camera). The list goes on and on!

Screen Shot 2016-01-10 at 11.02.24 AM
Looks like a rubber duck, but it is also a remote controlled streaming media player with kids apps!
IMG_0986
Looks like just a knit hat, but a music hat that streams music.

Residential working now, but expectations high and software not there

The most visible example of the ecosystem of components, manufacturers, and distribution coming together is in residential—products to control, protect, and monitor the home. There were dozens of companies showing what looks to be essentially the same product:

  • Wi-Fi or WWAN base station that connects to and controls the sensors in the home while also communicating with a monitoring service
  • Door/window open/close sensors to detect entry
  • Water sensor to detect floods
  • Motion sensor to detect intruders
  • Outlets and switches to control lighting and appliances
  • Smoke/Fire/CO sensors for safety
  • Thermostat for environmental
  • And so on.

In addition, there are more specialized (and harder to make) controllers for legacy home systems like garage door remotes, water heater, sprinkler, and so on.

Plus there are cameras for security monitoring and doorbells and locks to control entry, though many systems struggle with offering and integrating those.

The reality is that all of these basically just work and provide evidence of the supply chain at work. These are offered by startups, white labeled to many local distributors who will handle installation, and all the major home improvement stores carry them. You might have even seen the pitch from Comcast or AT&T for these as well. There were at least a dozen full service companies on the floor.

They are all essentially the same offering. Well, except for the software, and that is where they are all quite different and where the "ready for prime time" evaluation needs to be done. While they all have apps, some can prove quite awkward for basic control—to the point where the app is more annoying than helpful to use. For most customers, the app becomes secondary to more traditional key fobs/dongles and PIN codes.

Once again, this shows where there’s opportunity to focus and potentially win.

Traditionally this has been an area where the reviews clamor for integration and synergy across devices. A couple of things became clear this year:

  • Since everyone can offer everything (due to the supply chain) the viability of the company becomes more important than worrying if the company will offer a particular sensor/controller.
  • Integration is happening through a very traditional “consortium” and as nice as this sounds it isn’t clear it is working particularly well. First, much of what makes these easy to use is the way each maker handles out of box setup (which is mostly outside the standard) and adding additional sensors over time. Second, the UX for managing sensors and controllers integrated by third parties is usually least-common-denominator compared to first party.

In fact, this year saw a significant change in integration. Last year almost all home automation was integrated with Nest. While that is still the case, as most would note, the integration provided little useful capability and the "native" apps proved better. This year everyone integrated with Alexa from Amazon Echo. This made for compelling demos to turn lights on/off or adjust temperature. Time will tell if Alexa will be replaced next year or if Nest will up the level of integration.

IFTTT (an a16z portfolio company) was frequently used in demonstrations for conditional and multi-step scenarios. IFTTT replaces “custom installer” macros and other tools that have often plagued “home automation”.
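
To see why a tool like IFTTT displaces hand-built installer macros, consider what even a simple conditional, multi-step rule looks like when rolled by hand. A toy sketch in Python; every device object and method here is a hypothetical stand-in, not any vendor's API:

```python
# A sketch of a hand-rolled "if this then that" home rule; the device
# objects and methods are hypothetical stand-ins, not any vendor's API.
from datetime import datetime

def is_after_sunset(now: datetime) -> bool:
    # Crude placeholder: treat 6pm-6am as "dark".
    return now.hour >= 18 or now.hour < 6

def on_motion_detected(porch_light, phone, now=None):
    """Multi-step rule: motion -> light on (if dark) + mobile alert."""
    now = now or datetime.now()
    if is_after_sunset(now):
        porch_light.turn_on(brightness=100)  # hypothetical device call
    phone.send_alert("Motion detected at the front door")
```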

Two great examples of the integration challenge are programmable door locks and video doorbells. Both are logical extensions of a security system, and while basic integration over Z-Wave is possible, for most scenarios (answering the door, programming new combinations) the vendor-specific app is required. These are difficult-to-make products that need to fit into legacy infrastructure, so this is to be expected.

That said, because of the rising tide of infrastructure, the locks and doorbells have come a long way in the past year. Ring doorbell even released a battery operated (rechargeable) camera to accompany the doorbell (it is basically the motion-activated doorbell camera without the bell). Vivint has done good work to integrate Kwikset locks, a first party doorbell, as well as Amazon Echo to provide a more complete solution.

But for now, the base level capabilities are there and work across many providers. It is likely that these will further coalesce into a market where it is easier and better to get all the components from one company rather than trying to stitch them together. The good news is that this category is a pretty simple DIY project. The better news is that because of the SaaS revenue for monitoring, it is not hard to find an offering that comes with free installation (such as from Comcast).

Home integration happens in theory, though in practice the supply chain makes it easier to avoid cross-manufacturer integration if at all possible.

 
IMG_0876
 IMG_0842
Example of one of many suppliers offering the full range of sensors and controllers.
IMG_0829
Even at the low end, all the same sensors, detectors, cameras are available. 
IMG_0869
Most cameras now combine motion detection and some machine learning to reduce false alarms. This camera is integrated into a traditional porch/doorway light so no extra wiring is needed.
IMG_0993
In an example of the ecosystem at work, this same switch (based on the no-battery, no wires approach of enOcean) shows up in dozens of different systems.

Wearable computing focusing on fitness

The big news last year was all about "smart watches". This year the focus of many of the same makers turned more to fitness and less to overall lifestyle.

There were certainly many connected measurement devices (body composition, weight, sugar levels, blood pressure, etc.) and every device is able to measure sleep (on your wrist, in your pillow, or in your mattress) or steps taken.

Unlike the home security sensors, there's still a great deal of science to be done to correctly (accurately, precisely, reliably) measure humans, and much more science needed to make this information actionable. I continue to think we're measuring more than we can consume and act on, especially on a constant basis.

It looks like the major band makers agree and this year became much more focused on the specifics of exercise. The biggest announcement came from Fitbit with the new Blaze wrist wearable, “smart fitness watch”.

IMG_0816
Fitbit Blaze wrist wearable for fitness

In addition, Polar, Garmin, Under Armour and more all had new/improved bands dedicated to fitness. Much of the technology is about adapting algorithms to understand what the telemetry means depending on the sport (e.g. how do you measure fitness goals from your wrist when doing weights?).
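
As a rough illustration of the algorithm problem these makers face (a toy sketch with invented thresholds, not any vendor's approach), a band sees only raw accelerometer samples and has to decide what the motion means:

```python
# Toy sketch of turning raw wrist accelerometer samples into an
# activity guess; thresholds are invented for illustration only.
import math

def magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def classify_window(samples, gravity=9.8):
    """Classify a short window of (x, y, z) samples in m/s^2."""
    energies = [abs(magnitude(s) - gravity) for s in samples]
    avg = sum(energies) / len(energies)
    if avg < 0.5:
        return "rest"            # barely moving
    if avg < 3.0:
        return "walking"         # rhythmic, low energy
    return "running-or-weights"  # high energy is ambiguous from the wrist
```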

My view is that these bands are doing amazing work for people who are hardcore training in sports, but that the vast majority of people won't benefit from the charts and graphs that come only after a lot of setup work. Speaking purely from the point of view of improving the average person's fitness, a scale and blood pressure monitor seem more important. For most people, just walking for a fixed amount of time would be an improvement, and a watch focused on timing laps for multiple sports is unnecessarily complicated and potentially demotivating. The support that comes from the community aspect for basic measurements and activities is documented and well-known to be a benefit, but that wasn't the focus of the products on the floor.

The other aspect where these bands both differentiate and are still searching for broad fit is software. In some sports, sharing times (rides, trails, etc.) is part of the hardcore enthusiast experience, and so the community aspect is important. Again though, that isn't necessarily a mass consumer scenario.

I’m certain that the medical physiology (what measurements mean), sensor technology (how to measure), and medical research (how to act) will continue to evolve in this space. The longer term goal of a device that tracks meaningful body telemetry that regular people can act on themselves is not far off.

Fitness monitoring is not unique to humans. There were a number of products to help monitor your pet with a pet wearable.

IMG_0848
Connected Pet—monitor what your pet does during the day for better fitness.

Flyable is taking off

Drones were more numerous and more capable than last year. As much as the category is maturing, it is worth noting how early this really is.

There are two large players in Parrot and DJI who commanded a significant presence on the floor. Beyond that, once again we can see the supply chain at work as there were countless companies with largely similar products.

The most common experience in the drone booths would be to watch someone come up to a company rep and ask about the range, then follow up asking about the payload. I must have seen this 20 times, and each time the person walked away disappointed, as if they were hoping this was the magic booth with the drone that really could deliver groceries or fly cross-country. The other question was how autonomous the drone was, and the answer was always disappointing.

The vast majority of what is going on is still in the realm of traditional radio controlled (RC) flight in new form factors with amazing cameras (made possible by the influence of the smartphone supply chain). Even the major vendors are still in the early stages of the basics of geofencing, route planning, and other scenarios focused on safety.
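
The geofence check itself is simple in principle; it is everything around it (GPS error, altitude, layered no-fly zones, fail-safes) that is hard. A minimal sketch of the basic test:

```python
# Minimal great-circle geofence check; coordinates in decimal degrees.
import math

EARTH_RADIUS_M = 6_371_000

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in meters between two lat/lon points."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def inside_geofence(drone_lat, drone_lon, home_lat, home_lon, radius_m=500):
    """True if the drone is within radius_m of the home point."""
    return haversine_m(drone_lat, drone_lon, home_lat, home_lon) <= radius_m
```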

There’s clearly a product development cycle analogous to PCs v mainframes/minis happening. Drones are never going to be jets or general aviation, just as PCs were never going to be mainframes. When something sees a 10x increase in usage/adoption the new product is always much different at that scale.

On the other hand, things will not evolve as fast and loose as PCs did, because drones share the same airspace as jets (in a way that PCs never shared with mainframes). That's why I think this evolution will see more "real" aviation get pulled into drones much sooner than we saw mainframe attributes (i.e. servers and server hardware) pulled down into PCs. Reliability, safety, and more will need to happen sooner rather than later. Piloting a drone will be a profession, not a hobby, until they can really pilot themselves (but even then…). With that there will be opportunities.

Like other categories, the difference between companies is not so much in the supply chain components or even the manufacturing/integration but in the software platforms. In the case of drones, it is more that there is a minimal amount of software so far. There's still a lot of "pro-sumer" work that needs to happen to get the full cycle of sensors, flight, and data gathering all working. One example of this at work was Parrot demonstrating their work with senseFly (a Parrot company) for agriculture.

Another example was this complete “police surveillance operation” kit from Flymotion. It offered the full command center for monitoring in the case of disaster or other need.

IMG_0884
Flymotion's complete police surveillance drone system.

One of the craziest and most unexpected drones was from EHANG, a China-based company (co-founded by an ex-Microsoftie!). Their product is a single-passenger drone — basically an autonomous Uber-drone. You get in it and it flies you somewhere. Totally mind blowing. Given the differences in regulatory climates, this product is making fast progress in China and is already airworthy. I don't often post pictures of me, but here I am to give you a sense of the scale of this one.

IMG_0888
Here I am exiting after checking out the single passenger EHANG drone.
1-Oyk_Z19POTxQ-qfbIiFBLg
Another image of EHANG’s drone from their web site.

Next year is going to be an incredibly interesting year for drones. That is certain!


Drivable is the battle between incremental and leapfrog

Back down on earth and on the roads, the biggest battle in the global economy is over the next generation of "car" transport. Given the size of the market and the role car companies played in the 20th century, it is obvious why so much focus is on self-driving cars or on alternatively powered cars (or both at the same time).

All this coverage needs to be put in the context of what was on display at CES. First, it is remarkable that car companies are using CES as a platform for announcing their autonomous work and general innovation in driving—while autos (and the Detroit supply chain) have been at CES for years, it was always in the context of after-market accessories or building better premium "electronics".

Second, while the whole North Hall of the convention center is devoted to cars, the vast majority of what is on display is traditional after-market customizations and even standard cars. FCA’s center stage was an interesting revamp of a Jeep interior, independent of autonomy or alternative power, for example.

The most interesting topic to ponder is really the nature of the disruption taking place. Existing auto companies are seeing every aspect of their business upended. On the one hand, all of their expertise in engines, interior design, and drive trains is called into question by electric cars. On the other hand, autonomous driving challenges the fundamental business model of these car makers. Together these disrupt the entire process by which cars are built—a supply chain of parts makers, product managers, brand managers, dealer franchises, and more that has been built up over 100 years.

It is one thing for GM to show a Bolt, which by all accounts looks amazing. Or similarly for VW to show the BUDD-e van (an electric range of 373 miles, recharging to 80% capacity in about 15 minutes, and a top speed of 93 m.p.h.). But it will be quite another to deliver these at scale, sell them, and change the pricing and business models along the way. That's just super hard for any company to do. As a reminder, FCA, Ford, and GM combined sell light trucks for about 72% of their North America vehicles, and those account for even more of their profits. Here's a fascinating article on GM and the change underway there.

IMG_0990
VW BUDD-e electric van to be available in a couple of years.

The role software and hardware (again, the smartphone supply chain) will play and how companies execute in those areas will almost certainly be determining factors. For example, it will be much more difficult to build a reliable car if the software and hardware systems are a combination of legacy and new, or if every car needs to be built to handle "optional" autonomous or driver-assist features. Will the car makers look to the existing supply chains in place or be able to make huge and difficult choices to trust new suppliers with new components?

An example of this is NVIDIA which is building out a significant and integrated suite of car electronics. Basically making a car SoC. NVIDIA is not Bosch or Delphi.

IMG_0971
NVIDIA's car "SoC".

While we were at CES, Tesla updated their customers' vehicles with the ability to summon the car. In a world where car makers still mail out DVDs or USB sticks to update maps, it is interesting to think about how things need to change inside those companies to enable that sort of customer experience.

If you think all of this is just being pro-Valley or cynical, then I would offer this counterexample. Mercedes' announcement at CES was that they intend to announce, by the end of the year, their strategy for electric cars. So in a year they will announce what they intend to do (of course many people are working on that now). The clear focus is on driver assist leading to autonomy (where they might be very advanced).

For me, the most exciting transportation product was the Gogoro SmartScooter, which was also at the show last year. Think of the product as an electric Vespa with a max speed of 60mph and a range of about 60 miles at 40mph. But you don't recharge it while parked; you pop out the battery and pop in a new one (or two) at one of many battery stations around town. You can own the scooter or potentially share them the way Divvy bikes are shared in major cities in the US. The company also has a home station to charge batteries in two hours.

This feels like a potential future of urban transport in most moderate climates.

 

1-3O7HM_4ejmE_BZqqRX-ztg

Gogoro SmartScooter and public charge station.

1-UXDBrT9cis24EKRykC1_CQ

Gogoro Energy Network showing charge stations in Taiwan.

Screens keep getting better

It used to be that the big (or flat and big) news at CES was about TV. Booths used to be filled with TVs. TVs are important but this year saw a greatly reduced push around smart TVs and a much bigger emphasis on overall image quality.

The reason for this is HDR and 4K. While most people gravitate to 4K (which debuted two years ago and is widely available now, including streaming content), the real news is HDR. HDR is "high dynamic range", or the ability to show a greater range of brightness. If you imagine scenes from Jessica Jones or Daredevil, HDR makes those scenes so much better, much more like what you would see in person. Unlike more pixels, which we all know most people and most rooms can no longer discern, HDR is immediately visible to most viewers. Here's a great thread on Stack Exchange about dynamic range.

 IMG_0935
Standard 4K image on left, HDR image on right.

All the major companies were showing off HDR displays. There’s a new industry acronym Ultra HD Premium which signifies an appropriate level of dynamic range. Netflix and other content providers will also be supporting HDR.

Take a moment to consider why this is not like the transition to HD and why it will happen much faster. HD required new content: going back to existing libraries of film and rescanning to make Blu-ray discs, which you then needed to buy to play in your Blu-ray player. Network TV had to make the transition. Broadcast spectrum had to be allocated, and so on. Now this is all about software—recording is captured in RAW, which has the information to make HDR (though more can be done in sensors for sure, which is a huge opportunity!), and re-encoding with enhanced metadata can be done as desired. Even distribution is no longer focused on just studios, with new content coming from new players who have a software perspective to bring.
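
For intuition about what the re-encoding software is doing, here is a sketch of a global tone-mapping operator in the style of Reinhard's classic curve, which compresses scene-referred HDR luminance into a displayable range. Real pipelines are far more sophisticated and work per-scene with metadata; this is illustration only.

```python
# Sketch of Reinhard-style global tone mapping: compress scene-referred
# HDR luminance (arbitrary range) into display range [0, 1].
import numpy as np

def tone_map_reinhard(hdr_luminance: np.ndarray) -> np.ndarray:
    """L_out = L / (1 + L); brights compress, shadows stay nearly linear."""
    return hdr_luminance / (1.0 + hdr_luminance)

# Example: a sun-lit highlight (100x mid-gray) still maps under 1.0.
scene = np.array([0.01, 0.18, 1.0, 100.0])
print(tone_map_reinhard(scene))  # -> [~0.0099, ~0.153, 0.5, ~0.990]
```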

Dolby is doing very exciting work to bring HDR to theaters and to home screens. They also showed some incredible work on sound called Atmos, a sound encoding that allows a single speaker bar to use a large number of drivers to deliver 7.1 sound. It was incredibly cool to sit there and hear sound coming from everywhere (Mad Max!) from one Yamaha sound bar under the super cool LG HDR OLED display.

Still, TVs continue to just get better. OLED continues to amaze, and seeing a TV that is a sheet of glass is the star of the show. The Samsung SUHD 8K was the one to watch this year.

IMG_0956
Samsung 8K HDR Quantum Dot display. Yowza!

In the magic of software and physics department, Sony was showing short throw laser projectors that were mind blowing. One was a 40″ image projected from a 4″ cube speaker essentially against the wall. The other was a 100″ image from about 12″ away. Amazing! (Super interesting how the digital sensor captured the image by the way — some insights into how the lasers work!)

 IMG_0960
Sony short throw projector. Image is about 40″ projecting from the cube speaker essentially next to the wall.
IMG_0966
Sony short throw projector. Image is about 100″ projecting from the floor console about 12″ from the wall.

Image capture is ubiquitous

Cameras are everywhere in products. Once again this is enabled by the supply chain created by the pull of smartphones. Incredibly high quality cameras can be integrated into very small places and draw very little power. If everything is connected, then you don't even need to store images or have an interface to interact with them.

Cameras are gaining more resolution, working better in low light or infrared, and offering new capabilities driven by software. In particular, motion sensing, face detection, and object recognition are becoming key parts of cameras. Cameras themselves as a stand-alone product are much less interesting than cameras integrated into environmental or people monitoring, smartphones, cars, doorbells, or industry/job-specific functions (police cameras, for example). As with home security, the supply chain makes it easy to have the camera, but software is what makes it useful.

A great example of this at work is the Blink camera. By using motion detection software and Bluetooth LE, this camera becomes completely wireless—it uses CR123A batteries that can last 6–12 months. It is like a completely wireless Dropcam (but not one you would look at all the time unless you wanted to change batteries). Netgear Arlo is a camera taking a similar approach. These cameras communicate over Bluetooth to a small powered base station that connects to a wired network.

 IMG_0860
Blink camera operates completely wirelessly using batteries and Bluetooth LE.
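
The trick that makes cameras like this battery-friendly is old and cheap: compare successive frames and only wake the radio when enough pixels change. A toy sketch with invented thresholds (not Blink's actual algorithm):

```python
# Toy frame-differencing motion detector; frames are 2D grayscale
# numpy arrays. Thresholds are invented for illustration.
import numpy as np

def motion_detected(prev_frame: np.ndarray, frame: np.ndarray,
                    pixel_threshold=25, area_fraction=0.01) -> bool:
    """True if enough pixels changed by enough between two frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = (diff > pixel_threshold).mean()
    return changed > area_fraction
```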

In a dynamic similar to smart watches, the action cameras seemed to struggle this year. There was no pulling back from extreme sports or just extreme in general as the main purpose for GoPro, for example. It isn’t clear to me how much bigger this market can be. There were a vast number of GoPro-like clones out there.

There were quite a few “VR” cameras which were any number of cameras (depending on lens) designed to capture 360°. The playback would use Google Cardboard. A good example is the Nikon KeyMission 360 which captured 4K images.

Kodak's Super 8 is a high tech Super 8 movie film camera. It is quite the hipster product. It shoots film, which comes in classic Super 8 cartridges that you ship back to Kodak, where they are developed and then delivered as scanned footage. As a silver-halide enthusiast, I find it very neat but I am a bit skeptical. What might have been more interesting is to build a camera that has the UX affordances of shooting with film but the convenience of digital (along the lines of a Nikon Df still camera).

IMG_0881
Kodak’s Super 8 film camera and trusty Tri-X film!

One of the additions to CES this year was dedicated space for crowdsourced ideas/companies such as those from Kickstarter. One project in this part of the show was the Enlaps camera. In one package the full range of the supply chain comes together — cameras, solar, mobile data, and cloud services. The product+service is incredibly cool and solves what is traditionally a very difficult problem: capturing long-term, time-lapsed video from a remote location. While even phones have interval image capture, the ability to manage power, control the camera, and monitor what is going on is enormously complex (see http://ohioline.osu.edu/w-fact/pdf/0021.pdf for wildlife capture, which is like time lapse triggered by motion). I love the "set it and forget it" Enlaps product. It has two 4K cameras, solar power, and a web service that handles the complexity of time-lapse intervals so you can easily stream the results to your phone. You could use this for sunrise/sunset, changing seasons, tidal pools, wildlife migration, construction projects, crowd flow, events, and more.

IMG_0846
Enlaps.io is a fully self-contained interval camera. It shoots 4K images at intervals and sends them back to your mobile device. The camera operates off solar power and can be placed remotely for as long as you need.

Once again our pets get some love from cameras too. In this case your pet can "FaceTime" with you by pushing a very Pavlovian button. Of course it isn't enough to just see your pet. With PetChatz you can release pet-pleasing smells and treats using your mobile device.

1-dlo3zlTfCYyQtLrt-HOJAg
PetChatz is video conferencing and treat dispensing for your pet. No, really it is.

Small computers better and cheaper for everyone

There is not a lot of news at CES for small computers, as most companies save that news for Mobile World Congress. What is there shows the continued ability of the mobile supply chain to deliver all the components for a small computer, now packaged with ever-improving quality at ever-decreasing prices.

All of the vendor phones displayed on the floor were of course Android. It is worth noting that one almost never saw Android being shown as part of products in booths unless the product was doing something that it probably shouldn't be doing (i.e. root kit, peripheral, access to some low-level OS thing). The common thing I heard in the booths with Android was "we'd like to do this but Apple won't let us". Personally, this is less of a call for Apple to open things up and more of a call for developers to think up different solutions; at least that's my view of prioritizing "consumer electronics quality" over "get stuff done". Most of the scenarios that were Android-only seemed somewhat dubious to me.

That said, there were dozens of phone makers with very high quality builds of phones. This one from nuu mobile is part of a line that goes from $99 to $299 direct to consumer. There are lots of these companies, differentiating mostly by channel approach (country, carrier, unlocked, rate plans) and less and less by software, I think.

1-TDqiQ7PScZ2IK8qSQz4hkw
Top of the line nuu mobile Z8 retailing for US$299.

At the extreme low end, some of the China manufacturers still show some pretty old school stuff. I just liked how this ODM model had the generic “Brand” on it waiting to be picked up by a wholesaler.

IMG_0896
Old school phone labeled “Brand” waiting to be picked up by a wholesaler.

And in a throwback to the smaller-is-better era, here is a full voice/text phone done as an earbud. Those are actual buttons. No word on talk time. I actually saw it work!

IMG_0899
Full ear bud phone. No really.

Big computers better but not game changing

There were quite a few new Windows 10 laptops, all-in-ones, and big tablets announced this year (most with Spring availability).

The general trend is thinner, higher resolution screens, and Intel's new Skylake processors. The Samsung 9 and the LG Gram garnered a lot of attention. The Samsung comes very close to the MacBook in form factor, with a larger screen. As a Windows PC it has more ports, of course. The LG is crazy light, as you can see in the photo below. They were not quoting any battery life and there is no touch screen. Both skipped using USB C for power, though, which is disappointing in terms of specs.

IMG_0947
LG Gram Windows 10 PC is super light.
IMG_0952
Samsung 9 is a little thicker and heavier but features a touch screen.

HP, Dell, and Asus also had new PCs.

The area where PCs still lead phones is in graphics capabilities. But you can only experience this if you use the massive discrete cards from NVIDIA (primarily) and not the integrated graphics on every laptop. If you want these insanely powerful graphics capabilities (say for deep learning experiments, bitcoin mining, CAD/3D, or just gaming), you have been stuck with a tower that is pretty hard to deal with. There's help on the way in two neat PCs.

One is the MSI 27 XT all in one which takes a classic 27″ AIO and bolts on sort of a backpack for a PCIe graphics card. It isn’t pretty but it is a much more viable way to get the power you need assuming the display is good (which I was not able to see).

 Screen Shot 2016-01-10 at 11.10.16 AM
This MSI 27″ All-In-One has a discrete graphics card cage on the back for a PCIe card.

Razer, which builds a great community around its PCs and accessories, offered up a pretty unique combination. The Razer Blade is a high end ultrabook stylized for gamers (colored LED keyboard lights). It runs high end Intel Skylake parts (Core™ i7–6500U) and has a great screen. It would be an accomplished Ultrabook on its own.

Via Thunderbolt 3 in a USB C form factor you can attach a mid-tower sized box with an external PCIe graphics card (as well as some additional ports). This turns the Ultrabook into a pretty high end workstation. I admit to taking a wait-and-see attitude regarding the quality over time and the security of all those kernel mode drivers arriving via Thunderbolt plug-and-play, so I'm looking forward to seeing how this evolves in real world settings.

1-T9OFdHaoS0NW-3IsNjCyzA
Razer Blade ultrabook and Thunderbolt 3 connected PCIe desktop graphics Core accessory.

Finally, in the tablet form factor, Samsung announced the TabPro S for Windows 10. The most interesting thing is that it carries integrated LTE, which you don't see often. The tablet itself is a 12″ slab with a great sAMOLED display. It runs the updated Core M processor, which handles everything anyone would need unless you're running Visual Studio or full-time CAD/CS. The specs are great. In practice the soft, z-fold cover is awkward (just watching the booth folks deal with it), doesn't stay attached, and doesn't support the 12″ screen while using touch.

IMG_0954
Samsung TabPro S

Finally…

It is tough to beat this story of entrepreneurial spirit. Meet 13-year-old Taylor Rosenthal, an entrepreneur from Opelika, Alabama. As an avid team sports participant he has more than once run across the challenge of needing the right first aid gear for minor cuts and scrapes on the playing field. He developed a set of kits and a vending machine called RecMed. He made his way to CES to show off his company, which he told me is remaining independent even though he has already received a significant buyout offer!

Way to go Taylor!

1-uh7jteCOrxrUoJaPmfktpw
Taylor Rosenthal, CEO and Founder of RecMed.

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

January 11, 2016 at 10:59 am

Posted in posts


“Hallway Debates”: A 2016 Product Manager Discussion Guide

MeetConstitution3

When it comes to innovation and roadmaps there's nothing special about the start of a new calendar year other than it being a convenient time to checkpoint the past year and regroup for the next. Everyone's feeds are going to be filled with "best of", "worst of", and "predictions" for 2016 and those are always fun. What I've always found valuable is taking a step back and thinking about the themes that will impact decisions at the product and business level over the next year.

In that spirit, the following is a selection of “hallway debates” that (I think) are sure to occupy the minds of product managers and product leaders over the next year. The hallways can be literal or figurative (i.e. twitter included).

Innovations in 2015

No doubt this year was filled with more than a fair share of “nothing new” or “incremental” views on innovation, but as usual I don’t think that is the case.

The big companies were busy with Watch, iPhone 6S, iPad Pro, Windows 10, Surface Book, HoloLens, Nexus 6P, Pixel C, and more. Amazon delivered an amazing number of AWS features and capabilities (conveniently listed here). All the while, startups continue to iterate, create new categories, and introduce new technologies (my biased, favorite list is on producthunt.com every day).

All in all, 2015 was tremendously busy with so many of the products introduced clearly pushing the state of the art.

Musts for 2016

Before getting to the choices that have nuance or subtlety, the following are two top-of-mind factors that get to the heart of building products in 2016. These aren’t debates at all, but creating actionable and measurable plans should be a priority for every team and company.

Diversity and Inclusion

This year, rightfully, brought an overdue and intense focus on the role of diversity and inclusion within leadership, engineering, and businesses in general. The time is right for an exponential change in our approach. We all know this is more than an opportunity; it is a necessity. Products are used by an infinite variety of people from all walks of life, backgrounds, and abilities, and it follows that products should be built that way as well.

Looking at this through the lens of rapid change makes clear how critical it is to work on this early: startups need to address it early in the company (and team!) lifecycle, or face the need to address it dramatically later within an existing organization. The simple math (and yes, this is a simplification) done on this model highlights just how difficult it can be to catch up if there is just a small and systematic bias along just one dimension (men/women) in an organization.
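
A sketch of that simple math: simulate hiring with a small systematic bias in offer rates and watch the gap compound. The numbers are illustrative only, not a claim about any real pipeline.

```python
# Illustrative-only simulation: a 50/50 candidate pool with a small
# systematic bias in offer rates compounds over years of hiring.
def simulate(years=5, hires_per_year=100, bias=0.10):
    women = men = 0
    for _ in range(years):
        # A 10% bias shifts each year's split from 50/50 to 45/55.
        women += hires_per_year * (0.5 - bias / 2)
        men += hires_per_year * (0.5 + bias / 2)
    return women, men

women, men = simulate()
print(f"After 5 years: {women:.0f} women vs {men:.0f} men")
# Catching up now requires hiring well above 50% women for years.
```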

The discussion needs to happen, and from it, plans and action on those plans need to follow.

Security and Privacy

The run of security breaches in 2015 continued unabated. The damage continues to go up and there seems to be no end in sight to the difficulty of securing legacy infrastructure. There are, however, some good-news scenarios.

First, the move to mobile, particularly iOS, and to third-party SaaS services affords an opportunity to reset the security landscape. Of course these new operating systems are not absolutely secure, as we have seen, but the level of security that comes from a new architecture and the investment in up-front security places you on much firmer ground. There is no denying this, so if you want to be secure, getting more of your use cases to mobile is going to help.

For makers, many things that were previously "good to have" features are now part of the first wave of product features. Start with building on top of existing identity and authentication methods, make all communication channels encrypted, and encrypt stored data within your own services. Previously these were viewed as enterprise features to be added later or to charge more for, but now they are essential to bootstrapping a service. They are just a start: necessary but not sufficient.
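
As one concrete example, encrypting stored data no longer requires rolling your own crypto. A minimal sketch, assuming the Python cryptography package; key management is the real work and is elided here.

```python
# Minimal sketch of encrypting data at rest, assuming the
# `cryptography` package (pip install cryptography). In production
# the key comes from a key manager, never from generate_key() at boot.
from cryptography.fernet import Fernet

key = Fernet.generate_key()
f = Fernet(key)

token = f.encrypt(b"customer record goes here")  # store this ciphertext
plaintext = f.decrypt(token)                     # read path
assert plaintext == b"customer record goes here"
```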

Nearly every discussion about what to purchase or what to build next needs to happen in the context of security and privacy.

Choices for 2016

Product choices are never as binary as we would like — there’s rarely an absolute. In general, there are nuances and more importantly context that drive a particular approach. Therefore, hallway discussions are rarely won (or lost) but are a necessary part of deciding on product strategy.

The best product manager or product leader discussions to have in 2016:

  • Invest in Deep Learning
  • Bet on Mobile OS
  • Ride the Mobile/ARM Ecosystem Wave
  • Compute, Not Just View, on Mobile
  • Value the Open Source Community as Much as the Code
  • Go with Public Cloud
  • Choose Platforms Carefully
  • Track Computer Science
  • Avoid the Bridge
  • Create “The Plan” with Quality In Mind

Invest in Deep Learning

This past year has been an incredible year of progress in artificial intelligence or machine learning. Progress has been so significant that the pinnacle of tech leadership has articulated a growing concern of the risks of AI!

Even so, this generation of AI has gone from recognizing cat videos to being able to quickly and easily tag your friends in photo galleries and online services. Nearly every good recommendation engine is now powered by deep neural networks and machine learning.

A couple of great examples of machine learning include visual search at Pinterest and images within Yelp listings. This year even saw Google generate smart email replies for you! These are incredible advances in how problems are solved. The common thread is classifying existing large and labeled data sets within deep neural networks. This contrasts significantly with previous approaches to these same problem areas that would use click streams or other algorithmic approaches.

If you’re still coding recommendations, classifications/labeling, or automated generation of content using algorithms or simple networks, then this is the year to investigate how you would use these maturing approaches. They are all better by leaps and bounds over existing solutions that rely on smaller data sets and algorithmic approaches.

As with every previous AI advance, it is likely that some aspects of these new approaches will be combined with the current state of the art. In particular, the role of existing linguistic solutions will prove incredibly valuable for smaller data sets or difficult to classify solutions for natural language queries or processing in general. Pay close attention to how the research advances though because the role of deep learning for these scenarios is changing quickly.

Bet On Mobile OS

Are tablets turning into laptops? Are laptops turning into tablets? Are tablets losing out to larger screen phones? The permutations of form factors have been dizzying this year. Regrettably, most of this industry dialog is confusing and confused. If you are making client apps, then you could easily get drawn into this confusion and miss the key decision.

The critical decision point for any new code is to focus on the platform and OS and less on the size and shape of the device. The forward-looking choice is to focus on mobile operating systems: always connected, app stores, touch, security, battery life, and more. These attributes are step function improvements over the x86 platform and given the ongoing investment up and down the stack the gap will only widen. For more on this see a post I wrote “Mobile OS Paradigm”.

Enterprise applications are sure to see a significantly increased level of focus and support on mobile platforms for a number of reasons. First and foremost, the level of security on these platforms is so much improved there isn’t even a debate. Second, enterprise capabilities for managing data landing on these devices continues to improve at a rapid pace and already greatly exceeds all existing approaches. Third, renewed efforts from Apple and Google on enterprise capability enable new scenarios and new approaches.

This topic will continue to generate the debate — "on a phone or tablet, I can't do everything the way I am used to." This is a debate that can't be won; the complaint is both a fact and not useful. The reality is that the style and products of work are rapidly changing and so are the tools to support work. Ask yourself a simple question: how often do you reach for your phone even when you're sitting at a desktop or when a laptop is nearby? The more you do that, the more everyone will be making an effort to make sure important stuff happens on that mobile OS.

A large number of workloads will continue, “forever”, to be laptop/desktop centric, starting with software development itself. We’re 30 years into the PC and server revolution and many workloads still happen on mainframes (or with printers or desktop phones). The presence of some scenarios does not invalidate forward-looking decisions. It takes a very long time for an installed base of something to drop to zero.

Ride the Mobile/ARM Ecosystem Wave

The iPhone 6S launched with a new ARM chipset designed by Apple and manufactured by both Samsung and TSMC, along with a wide range of components almost none of which are single-sourced. While this is the result of excellent work by Apple, it also speaks to the incredible strength of the ARM/mobile ecosystem. By any measure, the ability to deliver tens of millions of devices sourcing so many incredibly advanced components from so many vendors is unprecedented.

One of the milestone advances this year was the hardware capability of the iPad Pro. The compute performance of this iOS-based tablet exceeds that of mainstream laptops. The most striking of the gains came in mobile graphics, which now represent a firmly established leadership position. Those clinging to the last generation were quick to point out that there are more powerful devices available or that the software doesn't allow the same capabilities to shine through. Clearly that is a short-sighted view given the level of investment, number of players, and OS innovation driving the mobile ecosystem.

Such innovation can be mostly transparent to software developers. If you are, however, building hardware or looking to innovate in software using new types of sensors or peripherals, then betting on the ARM ecosystem is a no-brainer. The internet-of-things revolution will take place on the ARM ecosystem.

The ecosystem will be on display in full force at the Consumer Electronics Show as it always is each year. We will see dozens of companies making similar products across many industries (the first stage of the Asian supply chain at work). Take note of what is released and you will see the ingredients of the next wave of devices. Integrating multiple devices from a hardware perspective or building innovative cloud services will prove to be great ingredients for new approaches.

Compute, Not Just View, on Mobile

Given the innovation taking place in the ARM/mobile ecosystem, it is fair to ask “what should we do with all this capability?”

The first 10 years of smartphone apps could be characterized by trying to squeeze the vast capabilities of a cool web service into a tiny screen. With larger screens and innovations in user interface we're seeing more done in apps. Now, with so much more compute and storage on mobile devices, we should see innovation in this dimension.

There are many scenarios where the architecture of roundtripping to a server is both slow and costly. The customer experience could be improved with the use of device-side compute or caching on the device.

To illustrate this point, consider spell or grammar checking (something most people don't implement themselves, which makes it a good example). Before connectivity, dictionaries (or rules) were authored by humans, installed locally on devices, and rarely updated (how often does language change?), but they were speedy. The internet showed us that language usage and terms change frequently, and spell checking in a browser turned into a service with client rendering of results: high latency, along with much better results.

Today the best spelling dictionaries (and suggestions) are derived from deep learning and training of models on large corpora. Even with great connectivity, the latency of a service-only experience is too noticeable. Given the compute on today's devices, it is not unreasonable to start to see models built on massive data sets packaged up for use on the device itself for specific queries or classifications.
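
A sketch of the resulting architecture: consult a locally packaged model first and only round-trip to the service on a miss. The model file and service URL are hypothetical placeholders.

```python
# Sketch of local-first suggestions with a service fallback; the
# local model file and service URL are hypothetical placeholders.
import json
import urllib.request

with open("suggest_model.json") as fh:  # hypothetical shipped model
    LOCAL_SUGGESTIONS = json.load(fh)   # e.g. {"teh": ["the"], ...}

def suggest(word: str) -> list[str]:
    hit = LOCAL_SUGGESTIONS.get(word.lower())
    if hit is not None:
        return hit  # fast path: no network latency
    # Slow path: round-trip to the (hypothetical) service.
    url = f"https://suggest.example.com/v1?q={word}"
    with urllib.request.urlopen(url, timeout=2) as resp:
        return json.loads(resp.read())["suggestions"]
```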

This might extend to many examples where learning and models are providing classification or recognition.

Value the Open Source Community as Much as the Code

There is no doubt that the biggest change in software since the internet has been the way open source software now completely dominates the entire industry. By any measure of innovation, open source drives the software that is eating the world.

The not-so-secret ingredient of open source is the community. The community is created from the very start of a project. Early on, the most successful open source projects begin to focus on enrolling the community and creating a shared ownership of both the direction and implementation of the project. You can see this in GitHub stats around contributors: when people joined and how much they contributed.
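
That contributor data is easy to pull yourself with GitHub's public REST API. A minimal sketch (unauthenticated requests are heavily rate-limited; the repo is just an example):

```python
# Minimal sketch of pulling contributor stats from GitHub's public
# REST API; unauthenticated requests are heavily rate-limited.
import json
import urllib.request

repo = "torvalds/linux"  # any public repo
url = f"https://api.github.com/repos/{repo}/contributors?per_page=10"
req = urllib.request.Request(url, headers={"User-Agent": "community-check"})

with urllib.request.urlopen(req) as resp:
    for c in json.loads(resp.read()):
        print(f"{c['login']}: {c['contributions']} commits")
```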

Deciding which projects to use in your work should be influenced by the strength of the open source community contributing to a project.

Established companies have benefitted enormously from open source as a foundation, much of that housed within their mass-scale data centers. Recently, many projects which were developed inside companies are being “open sourced” after the fact. This is certainly a positive in many ways, but also offers a different model from what has previously been the most successful.

In contrast to traditional open source projects, these open sourced projects are looking for a community to join in and validate them after the foundation has been built. They are not really looking for a community to shape or influence the project, at least not in ways that will influence what that company sells or offers. Too often it isn’t quite clear how the future of a project will be managed relative to the open sourced code.

It is still early in this new "movement" and worth paying attention to. For the time being, joining projects early and/or betting on projects with a genesis as part of an open source community seems to be the best bet. The leaders of any movement tend to be people who build, not inherit, projects.

There are companies commercializing existing open source projects. In this case, seeing how those companies continue to contribute to and participate in the community, especially one their founders created, is an easy way to validate a bet on a project. My belief is that the leading companies are created by the leaders who created the project and who continue to participate in its evolution. To spark the hallway discussion, check out this post by a16z's @peter_levine about the uniqueness of RedHat's success.

Go with Public Cloud

Public cloud or private cloud? To many, the answer that sounds best is hybrid cloud — the best of both worlds. The lure of the hybrid cloud is incredibly seductive to enterprise customers: the idea that you can get "credit" for or "accelerate" the move to the cloud if you just keep some of your existing infrastructure on-prem. The arguments are well-known (data co-location, compliance, etc.) but unfortunately they are never made relative to the two factors that matter most.

First, there is no real “architecture” for a hybrid cloud — the very nature of the cloud is a new way to build and scale applications.

Second, the time and effort to create such an architecture and the complexity introduced all but guarantees building a one-off system that will only grow increasingly harder to maintain, essentially impossible to secure, and then ultimately migrate to a modern cloud.

This is not a semantic debate or a debate about "pure cloud", but about an architecture that is fairly concrete. The cloud is a public cloud, and it is far more than the ability to easily create virtual machines in a data center (aka a private cloud). Even if you're using a public cloud, it is important to consider what your architecture and runtime look like relative to scale — simply moving your VMs to another data center isn't a cloud either.

From your (potential) customer’s perspective, the arguments for either a private cloud (which isn’t really a cloud) or to simply avoid a public cloud are well-worn and simply not compelling. Security, scale, cost and more all tip in favor of the public cloud without any debate. In almost every regulated environment that touches the internet, the practical view of saying no to the cloud is making less and less sense. It is still going to be important to build that hallway feedback loop from the customer to the product team to make sure you’re well-versed in this dialog and focused on winning the right customers.

If you’re building enterprise software then you’ve already been wrestling with the cost, complexity, and relative difficulty of a customer offering with a consistent, scalable, manageable, and secure on-prem solution. Some will deliver this, but that will not be the norm and enterprise customers should not expect (or want) this from every vendor.

The cloud is not a call to migrate the largest legacy systems, but a way to think about new systems and about innovating on top of existing systems. That's the best way to navigate this debate and to build forward-leaning products and services.

If an enterprise is constrained in such a way as to believe it can't move to a public cloud, then by far the best approach is to avoid investing in a hybrid or "private" approach and stay on the current course and speed until you can invest what is needed.

Choose Platforms Carefully

Everyone building an app or a site wrestles with the platform choice, which has two challenges.

First, everyone with an app wants to make sure it can be used by anyone from most any device.

Second, an app is great but quite a few people (along with enterprises) want to use it from their desktops. In the meantime, doing a great job scaling the product, adding features to win deals, and staying ahead of competition are taking all your time.

The siren song of cross-platform will continue unabated this year. The ability to deliver a winning experience from a single code base targeting increasingly divergent mobile platforms will continue to prove elusive (and the presence of an exception or two does not make it any more possible). As with past transitions and even some current efforts, getting the first bit done can lead you to believe that you've found the magic and it can work. This too is part of the pattern of cross-platform. Over time, an increasingly short time due to rapid platform evolution, the real world catches up with even the best efforts. There's more on this topic here.

The main mobile platforms are innovating in ways that do not have symmetric or analogous capabilities: the rise of the Swift language, the diverging user interface models (voice, multi-app models), the changing hardware landscape (force touch, tablet sizes and resolutions), and platform services (payments, identity, service integration).

This only leaves one viable, long-term option for any mobile app that is key to the overall value equation, which is to manage platform efforts as dedicated and separate teams. Managing this is always expensive and often complex. The key is how product managers lead the choice and execution of shared features and platform specific features.

As if this isn’t enough, some set of enterprise tools are seeing demand for apps on the desktop that go beyond browser apps. A number of mature enterprise efforts have started to deliver App Store or downloadable traditional desktop apps in addition to the browser. The implementation of these has consistently been to wrap the browser version in a native frame window and use an embedded webview for the app. This affords some base integration such as convenient app switching, window management, and persistent logon. Few offer any in-depth desktop OS integration and most still remain behind the browser in key capabilities (drag and drop for example). This is especially true if the primary use case is running across multiple browsers given the effort to maintain consistency in those implementations.

The real versus perceived value of these webview-based solutions is debatable. The resources required and the implied long-term commitment to customers are not. In addition, the implementation choice leaves little room for true desktop integration. Unless you're willing to commit to building a native experience, it isn't clear that the investment in these apps represents the best way to win or stay ahead in the marketplace.

Track Computer Science

Some of the most interesting startups are those coming out of university or industry research laboratories with open source projects, or those bringing entirely new approaches to what are traditionally "well-understood" computer science fields. It has been a long time since so much of what is new and interesting in industry also happens to be the newest and most interesting topics being researched by graduate students in university research labs.

Innovation tends to come in waves where whole new ideas take shape and then for a “generation” those ideas are executed on to the n-th degree. A classic example of this is the relational database model and SQL, born out of IBM’s industrial labs (in what was no doubt a whole other era of corporate labs) and then iterated upon at the database layer by IBM, then Oracle, then many others. It created an industry of tools and products, along with perhaps the largest community of those skilled in the core SQL concepts. Today we have a whole new generation of database technologies, all of which have their roots in research labs around the world.

Deep learning has roots in research labs and is now moving at a very fast pace toward broad commercialization. Virtual reality and augmented reality approaches build on a wide array of computer science inputs.

All of this is a way of saying that your hallway discussions and debates this year should include computer science. Track the conferences. Read the summaries. Dig into the papers. Even if you’ve been out of school for a while or never really thought the research world was all that practical or interesting, my view is that we’re in the part of the cycle where there is much to learn by paying attention.

Avoid the Bridge

The hardest product choices remain those faced by enterprise products. Whereas in the consumer world it is almost always about moving forward, being new, and building on the current, the enterprise world always has to balance the installed base, legacy, and compatibility. All too often the inertia behind the choices already made actively prevents forward-looking choices that are inherently disruptive (disruptive in the dictionary sense as well as in the Silicon Valley sense).

The natural outcome of this is a bet on a transitionary period or a bridge technology — the enterprise world embraces such approaches to no end. Unfortunately, history has consistently shown that bets on the bridge not only fail to bridge to a new world, they ultimately prevent you from fully participating in those changes. Even more challenging is that during the next technology wave you find your product an additional generation behind, facing an even bigger challenge. This goes beyond shiny new technologies simply because right now, whether you build client code, create services or APIs, install infrastructure, or build and analyze data, we are in the midst of an exponential rate of change across all of those. When you miss a beat during exponential change you simply can’t catch up — such is the power and challenge of that pace of change.

As a result, these “bridge” or “best of both worlds” topics lead to the most challenging hallway conversations. The hope here is to keep pushing in the direction of forward progress, while bringing some comfort to those leaving the well-worn and understood past behind.

Create “The Plan” with Quality In Mind

Product management is constantly trying to get more done, in less time, with fewer resources. No one wants to move slowly, get mired down in some big-company planning effort, or, worse, fall behind competitors. That’s a given. It is also nothing new.

For quite some time now we as an industry have been on a bit of a roller coaster when it comes to how to plan what work will get done. At one extreme, the whole idea of having a plan was effectively shunned in favor of minimally viable products (a valuable concept often misapplied to mean “toss it out there”) or in favor of putting something out and then letting failure determine what comes next. The other extreme essentially created a process out of reacting to data — the plans for what to do were always informed by what was going on with the live site or testing of changes to existing products.

This past year was one marked by failures of quality execution by even the biggest companies. You would have to look hard to find companies that made significant product changes without also receiving some (sometimes significant) critical feedback on the quality (robustness) of execution. Apologies, commitments to “less features, more quality”, and fast revisions/reversions seemed to be the norm this year.

At the same time there appears to be a bit of a decommit from the discipline of testing. In my view this is more semantics than reality, given that the work of designing, building, and running tests still needs to get done. The jury is still out on this and I’m personally not convinced that software is ready to be free of a QA discipline.

One could view this as a pendulum, but in practice this is the price our industry pays to work at planetary scale. It might be the case that your product is used only by early adopters and tech enthusiasts, but those very people are having their quality bar set by the broader industry benchmarks.

Quality is the new cool. Releasing without the need to re-release is the new normal. Testing is a case of what’s old is new again.

When you’re having the debates about what to get done and when, this will be a year where deciding that getting something done right, done well, should trump getting something done today or halfway, or getting something done that you know isn’t right (in a big way).

You do not, and should not, need to revert to a classic “waterfall” (a term applied with disdain to any sort of planful process). However, some level of execution rigor beyond the whiteboard is the kind of innovation product management should bring to the table this year. There’s no reason that products can’t work very well when they are first released, even when you know there is much to be done. There’s no reason you can’t have a product plan and a roadmap at a useful level that is written down, without it being a burdensome or overly structured “task”.

At the risk of self-reference, two posts on these topics from this past year I’d leave you with: Beauty of Testing and Getting (the Right) Stuff Done.


Wishing everyone a Happy Holidays and Best Wishes for the New Year!

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

December 15, 2015 at 11:22 am

Getting (the Right) Stuff Done

A key role of product management (PM), whether as the product-focused founder (CEO, CTO) or the PM leader, is making sure product development efforts are focused. But what does it mean to be focused? This isn’t always as clear as it could be for a team. While everyone loves focus, there’s an equal love for agility, action, and moving “forward”. Keeping the trains running is incredibly important, but just as important and often overlooked is making sure the destination is clear.

It might sound crazy, but it is much easier than one might think for teams to move fast, get stuff done, and break things that might not be helping the overall efforts. In fact, in my experience, this challenge has become even greater in recent years with the availability of data and telemetry. With such, it becomes very easy to find work that needs to be done to improve the app or service — the data is telling you right then and there that something is tripping up customers, performing poorly, or going unused. Taking action makes it easy to feel like the right thing is happening. It feels like moving forward. Everyone loves to get stuff done. Everyone feels focused.

But is the team focused on the right work to achieve the right results?

Just a little process

Two important realities represent a constant balancing act when leading a product. As a PM you are applying finite resources to market needs in the march towards product-market fit, or you are working furiously to maintain a competitive lead. In addition to new features there is the work that sales or customer success needs, and together those greatly exceed what can be delivered by engineering.

This problem doesn’t ever go away and is at the core of the role of product leadership — getting the right stuff done with the right priority.

In every well-run company there is a strong tendency towards action and a strong dislike (tending to revulsion) of process. In practice, the absence of a process is just as much of a process, just one without clear lines between action and result. A little bit of process (aka product management) can go a long way to having real focus and getting the right things done.

With a little bit of process, everyone on the team can have:

  • Shared views on what success looks like
  • Clear understanding of success measures and metrics
  • Easy mechanism to decide what should be cut or pushed out when things aren’t going as planned
  • Visible alignment between what work will impact what elements of success and which measures
  • Honest accounting of resources going into what big picture initiatives
  • Ample opportunity to participate in deciding what gets done and when

It is very easy to overdo process and go from a helpful tool to a burden people run away from. A personal goal has always been to be as lightweight as possible and to have a way of thinking about these needs that scales from projects lasting weeks to those lasting months or even longer.

My guiding principle or golden rule of process is to never ask for something from someone that does not directly help them get their own work done. Process is not about reports or “management”, but about making sure the work each person does is the most important work to do at the time.

Just a little framework

When most people think of coming up with a product roadmap or plan, they think of the ends of a spectrum. At one end there’s commonly the one-slide version labeled with months or years and a couple of bullet points at varying levels of granularity and decreasing accuracy as time goes on. At the other there’s the detailed, long-term strategic plan that most people can’t read through, often the work of consultants or staff at big companies.

There’s something in between that I’ve found very helpful in terms of framing the product roadmap.

The roadmap can be represented as a hierarchy of increasing detail. It starts with a mission covering years of aspirations for the whole company. From the mission follow the goals, representing the next 12 months of work, supported by specific metrics or measures and spanning the various roles or disciplines in the company. Teams then come together to work on projects (or milestones in a longer-term project) that take weeks and are delineated by releasing products or programs to the market. Supporting the creation of projects are the day-to-day tasks at the feature level representing the work of individuals.

Throughout this whole system there is ongoing telemetry that is called upon to support the company with reliable data upon which to make decisions.
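
To make the shape of the hierarchy concrete, here is a minimal sketch in code; all names and fields are illustrative, not a prescribed schema, and the comments echo constraints described in the sections that follow (one owner per task, a single ship date per project).

```python
# A minimal sketch of the mission -> goals -> projects -> tasks hierarchy.
# All names are illustrative; the comments echo the constraints in the text.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    owner: str      # defined so it is the work of one person
    days: float     # roughly 0.5 to 2 days each
    goal: str       # each task supports exactly one goal
    project: str    # ...and belongs to exactly one project

@dataclass
class Project:
    name: str
    ship_date: str  # a single date, not a month or a quarter
    tasks: list[Task] = field(default_factory=list)

@dataclass
class Goal:
    name: str       # e.g. "retention"; only 3-5 per product
    metrics: list[str] = field(default_factory=list)

@dataclass
class Company:
    mission: str    # one enduring statement
    goals: list[Goal] = field(default_factory=list)
    projects: list[Project] = field(default_factory=list)
```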

Mission

Whole books have been written about mission statements and the process of developing one. Nothing makes me groan more than the idea of having a meeting to craft a mission statement. We’ve all seen the results of these efforts: an awkward combination of passive voice, comma splices, and breathless language. Companies exist for a reason and that’s the mission.

Missions are aspirational, guide you for years, and represent the reason for being. Everything you do should aim towards your mission, and how you do that is the work of the rest of the framework. Missions boldly stating that the goal is to “disrupt” tend to be a bit backward looking, focusing on the mechanism rather than the outcome. Rather, a mission that defines a future state of being or a new world view is often the most enduring and the most positive. The most important thing about a mission is that there is just one and it endures. A mission statement is best when it can be expressed on a t-shirt, or something close.

Goals

Most everyone thinks they already have goals. Too often, though, goals are expressed as metrics or scorecards, like being the most downloaded app, number of daily users, or bookings. These are easy to express and are the lifeblood of a startup. The challenge is they change frequently. As with any good code, when faced with something that changes frequently the best bet is to add a level of indirection. A goal is the abstract view of metrics or measures.

Goals are strategic concepts such as retention, ease of use, acquisition, manageability, scale, success, and more. Through evolving telemetry you develop metrics to support the goals.

By using these abstractions you might come to realize you have more goals than engineers (or marketers, success partners, etc.) or that you end up with every person working on too many goals. This is part of the process of being focused about goals. For any one product there can only be 3–5 goals, and those fit on one “slide” that includes the full spectrum of engineering, sales, and customer outcomes. This is a deliberate attempt to put in place some constraints up front.

Goals are then measured in specific ways over time; metrics are the lifeblood of goals. Your goal might be acquisition, but the metric might be a specific mechanism of retention for a period of time; or your goal might be to improve scalability of the service, but the measure might be compute usage for some time and then storage usage for another.
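
Here is a minimal sketch of that level of indirection, with hypothetical goal and metric names: the goal name stays stable while the measure behind it rotates over time, as in the scalability example above.

```python
# A minimal sketch of goals as an indirection over metrics.
# Goal names are stable; the metrics behind them change over time.
from datetime import date

# Each goal maps to dated (effective_from, metric) pairs, oldest first.
goal_metrics = {
    "scalability": [(date(2015, 1, 1), "compute_usage"),
                    (date(2015, 7, 1), "storage_usage")],
    "acquisition": [(date(2015, 1, 1), "weekly_signups")],
}

def current_metric(goal: str, today: date) -> str:
    """Resolve a stable goal name to whichever metric is in effect today."""
    in_effect = [m for start, m in goal_metrics[goal] if start <= today]
    return in_effect[-1]

print(current_metric("scalability", date(2015, 9, 1)))  # -> storage_usage
```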

Goals almost always fall to a specific function (or role) on the team such as marketing, sales, or engineering. Having a full accounting of the goals and the associated metrics allows you to understand what will change as the team’s work progresses — what is measured will be what changes.

Projects

Projects are easy to understand — they are the releases or programs customers and the marketplace use and hear about. A project might be, for example, a full update to the service, the app, a new entry to the market, a launch, a campaign, or a major infrastructure change. Early on it is trivial to name the projects for a company. Very quickly, however, the number of projects can balloon and become increasingly difficult to track (and potentially to justify). There are SDKs, enterprise tools, segment campaigns, apps for different platforms or support for different browsers, and more.

The key reason to maintain a clear list of active projects is that momentum in continuing some project (failing to re-allocate resources) can often be the biggest constraint on getting the important work done. It is common to find yourself maintaining a project that no longer fits the immediate needs, but there’s inertia that makes it hard to change. The most important task for product management is to make sure everyone is aware of the projects being undertaken. The more the company scales, the more critical it is to know what projects are active and what commitments the team is making to those. Even in the biggest companies, there are just dozens of meaningful projects.

A project has an ending date or deadline date — not a month or a quarter (those are 30 or 90 possible dates) but a single date — and everyone knows when the project releases or is complete.

Tasks

When you work from mission to goals to projects, the most concrete expression of work on the team is the task. A task is the actual code to be deployed, the whitepaper to be written, the SEO tools to be employed, the launch event to hold, the features to be designed, and so on. While a few people might care deeply about or contribute to the mission, executives generally focus on goals, and managers live whole projects, everyone is invested in tasks.

Tasks are defined by those who will do the work, and those same people (or person) decide how long each will take. Every person contributing to a project might have dozens of tasks. Tasks should be from 1/2 to 2 days — less and the accounting is too painful; more and the work is likely not understood well enough to schedule reliably.

There are two main benefits from spending the time to create a list of work items. First, the project overall becomes increasingly predictable, which is important because of dependencies (such as front end and back end, or marketing activities). Second, when things aren’t going as well as planned, there is a clear view of just how far off things are along with a pre-computed list of potential savings to be had by cutting different tasks. Whether it is Asana, Trello, Sheets, or Jira, the key is just having a system that goes beyond post-its around a monitor.

What is often overlooked is how much more effective everyone is when they know the why behind the what. Everyone will do better work if the worklist flows from specific projects which have goals that are measured in a particular way. Much of the work of this framework will prove to be making the connection from task all the way to mission.

Telemetry

One additional element that permeates all of your efforts is telemetry.

The most successful organizations are also fully instrumented organizations. Everything about code, customers, and overall engagement has telemetry.

Keeping an open mind and an open eye to a whole variety of measures is super important. It is just that, as a matter of scale and operation, you cannot hold everyone accountable for, or change what is being done in response to, every measure. If you’re learning something that concerns you, then dig in. Maybe you’ll change your plans. But when you do need to change your plans you can do so in the context of an overall framework, not just single data points.

The combination of a framework and telemetry makes it possible to more globally maximize your return on investment. Telemetry alone risks a more local optimization. A framework by itself is just guessing.

It might seem like doing all this is just too much busy-work. There is an investment to be made. Most of the effort, however, will involve “editing” or “culling” a list that was already too long or contained a lot of things not getting done. The most time-intensive work is creating the task list, which is often the most disliked or difficult to make concrete. Everyone seems to have a feeling for what work needs to be done but resists putting it out there. The essence of an accountable organization is taking the team through this framework and making it part of everyday work.

Just a small tool

There’s a payoff to all of this that is incredibly important. If you stop here all you’ve really done is document things going on. What is really important is that you assemble information so you can have a system in place to deal with change — unexpected failures in the market place, gain or loss of people on the team, new opportunity in the marketplace, or demands from a customer. Maintaining this information in a simple tool provides the product manager with that.

There’s always too much to do, so by definition we know we are not doing everything we can to be successful. Do we know if everything we are doing is essential to the success we hope to achieve?

Seems like a simple question. In practice it is an exceedingly difficult question to answer at any given time and an even more difficult one to answer over time as conditions change. In fact, I might argue it is close to impossible, and that the best measure of success is to view the efforts overall as a portfolio. Just like a portfolio, however, you need to spend time digging in to understand at a more granular level where you can do better. Failing to do so prevents the ongoing evaluation of work to make sure it is really helping — if there is more work than can be done, the easy path is to just assume everything going on is helping. That’s not true though!

What is suggested below might be simplistic or you might believe you’re already doing all this and more. I’ve found that most projects, especially before product-market fit, can benefit from a more systematic view connecting work to projects and goals supporting the company mission.

My own experience is that this can be accomplished in a surprisingly straightforward manner. Doing so illuminates the work of the team and provides a great tool for the shared and ongoing management of the team.

Let’s just assume we’re working with a spreadsheet, simply because we’re going to do some math with the data. Feel free of course to use any structured tool that supports features such as collaborative editing, group-by reports/pivot tables, filters, and some basic math. The specific tool isn’t important and should not be the source of the first debate on the team!

We tend to think of the task list as literally a listing of tasks such as Implement OAUTH, Add new chart type, Create sign-up response mail, etc. Simply getting all of these done can often be helpful enough. Most task lists will have a name associated with the work, and I’d always encourage a single name, or, said differently, define the task so it is the work of one person. In addition, include a column for the amount of time the task will take (0.5–2 days). For good measure I would suggest also including the dates to start/finish, as that allows you to use the spreadsheet to understand the relative schedule.

Then take the extra step of making sure the Project is clear. Is it for the iOS App? Does it support the SEO campaign? Is this the Beta program? Adding this label should be simple accounting, as projects are by definition distinct and represent a single release or customer/market-visible effort.

The next step is key: add one more column for the goal the work supports. Is this task about something like retention or scalability, for example? Avoid the temptation to think something applies to many goals; rather, force yourself to commit the work to supporting a single goal. Keep in mind you already know the goals, so all you are doing here is picking from the 3–5 established goals, which in turn are associated with metrics.

With that in hand you have something that looks like the hypothetical below. You can see a connection between tasks, projects, and goals.

[Hypothetical spreadsheet connecting tasks, projects, and goals.]

Keep this sheet up to date and you’re able to stay ahead of the project. As simple as this seems, it is just as often either overlooked or buried in too much detail to be broadly useful.

What can you do with this? There are several key sorts/filters that are enormously useful:

  • Any given person can look at their own work and know where they are and where it fits in
  • At any time, one can see how much of the time (resource) overall is going to support a given project or goal
  • Know which tasks are too far out, too big
  • Know which resources are over-constrained

And so on. This tool is the right level of complexity for projects from 10 to 5000 people, in my experience. While many would love more, such as dependency analysis or a task hierarchy, my experience is that this is where the tool begins to overwhelm. When the tool overwhelms, it isn’t used, and so it doesn’t matter. (Quick note: when I first came to manage Microsoft Project I learned that the bulk of usage of the tool was not to track projects but to input a bunch of data at the start of a project just to come up with a nice-looking poster-sized Gantt chart, printed out once at the project start.)
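
As an illustration of the second bullet above, here is a minimal sketch, assuming the sheet is exported as a CSV with columns named Task, Owner, Days, Project, and Goal (illustrative names, not a prescribed schema), that totals the person-days going to each project and each goal.

```python
# A minimal sketch: total person-days by project and by goal from a CSV
# export of the task sheet. Column names are illustrative assumptions.
import csv
from collections import defaultdict

days_by_project: dict[str, float] = defaultdict(float)
days_by_goal: dict[str, float] = defaultdict(float)

with open("tasks.csv", newline="") as f:
    for row in csv.DictReader(f):
        days = float(row["Days"])
        days_by_project[row["Project"]] += days
        days_by_goal[row["Goal"]] += days

# Largest investments first; the same idea works for the other sorts/filters.
for project, days in sorted(days_by_project.items(), key=lambda kv: -kv[1]):
    print(f"{project}: {days:.1f} person-days")
for goal, days in sorted(days_by_goal.items(), key=lambda kv: -kv[1]):
    print(f"{goal}: {days:.1f} person-days")
```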

Managing projects is very difficult. Managing them while the bank account is draining and there’s a strong desire to keep moving is galactically more difficult. The fear of slowing everyone down with tools and processes is real and often justified. With so much riding on being efficient, effective, and focused, it is worth investing a small amount in managing the work so as to make better decisions about what work to do and about what happens when you get new information from the market.

— Steven Sinofsky (@stevesi)


Special thanks to @ProductHunt’s Ryan Hoover who took my suggested framework of Vision, Mission, Strategy, Tactics and made the terminology much more approachable. 🙌

Written by Steven Sinofsky

October 12, 2015 at 10:30 am

Posted in posts


Privacy and Security: Less Rhetoric and More Product

Tim Cook’s recent remarks on privacy, described as “blistering” or an “epic subtweet”, amplified a discussion about the web and privacy. Given the polarizing framing of this topic, collectively as an industry (and beyond) we have been less than stellar at discussing it. We have also done a poor job of proposing broad initiatives to address concerns raised in the discussion.


We seem to be caught in that difficult situation of having defined the problem as requiring an all-or-nothing solution, which is never a good place to be because the reality is more nuanced. Dustin Curtis points out the nuance in his post Privacy vs. User Experience.

Rather than debate extremes that are neither desirable nor technically possible, I want to suggest there are technical problems that can and should be solved, and doing so would make the Internet a better place for people using Internet services and businesses providing services.

“Get Over It”

Way back before there were mobile phones, today’s search engines and social networks, or cloud computing, Sun Microsystems co-founder Scott McNealy said, “You have zero privacy anyway. Get over it.” Yikes!

The statement at the time had elements of truth, fear, and absurdity. It was pre-bubble; heck, it was still the 1990s, and 1984 was still fresh in our collective psyche. The statement did, however, foretell a significant change in what was going to happen.

Such a debate is not new. While the scale is different, I recall three early episodes involving reputable organizations that introduced me to the absolutes and polarization of the privacy “debate”.

Robert Bork was nominated for the Supreme Court of the US in 1987 (and later failed confirmation in the Senate). One of the moments of the very contentious confirmation was the appearance of the nominee’s personal records from a video rental store (delivered to the press as a hand-written list). This was clearly a dubious act, later codified as illegal. I think for my generation it was the first experience of how things would change in the digital age of record keeping via computer. At the time quite a few connections were made to how the FBI maintained files on people, but this was the first time the “incriminating” information came from a benign consumer business.

Lotus Marketplace was a product developed in the late 1980s. It had the gall to collect data sources like US Census data and public phone number listings and put them on CD-ROMs for marketing people to use with Lotus 1-2-3 to plan and analyze marketing campaigns. Even worse, it had household and zip-code level data about the US (all based on sources already in existence and available to businesses). Much debate centered on how one could take this data and potentially “triangulate” it to actually learn something about an individual. Likely due to the massive outcry, the product was never released to the market. From this early experience we can see how the combination of an existing data source and the distribution of digital data changes the dynamic of privacy.

Credit card companies became famous for the offers inside your monthly statement in the 1980s—little paper inserts with offers to buy custom return address labels, go on cruises, or secure other financial products. Like confetti they would fall out of the envelope. These were the very definition of “junk mail”. Then the companies began to use your previous purchase history to target these inserts. If you were paying attention you realized that junk mail started to look less and less junky. This “feature” turned into a fear that credit card companies were selling your charging history to random companies. Of course that was not true (in fact such information was closely guarded). The way it worked was that the credit card company would match inserts to specific target customers and include them for a fee. Because financial companies were already tightly regulated, the path to today’s Byzantine opt-in/opt-out direct mail policies can be traced to this history.

Fast-forward to today and we know that the services we use amass significant information about how we interact with them. The medical establishment has my medical history available to a constellation of caregivers (and to me), which makes delivery of quality care easier and faster. Credit card companies know my charging history and patterns and can alert me to fraud instantly (even if too often incorrectly). Netflix knows all the movies I watch (and even how much of them) and uses that to improve a highly valued recommendation engine. Pizza delivery services know what we order and can save time and effort by using that history (and also offer promotions based on it). Google Maps knows where I travel and when, and proactively offers suggestions on when I should depart depending on current traffic. The examples are endless. In fact the benefits of maintaining my history of interaction with a service are immense and a deciding factor in which services I choose to use.

The risk that we have assumed on an individual service is that providers cannot maintain the integrity of their own services. This is a network technology risk as we have seen with Target or Home Depot. It is a human risk as we have seen with breaches like Sony where people set out to arbitrarily harm others. It is a national security risk such as we have seen with the recent attack on the federal government in the US, allegedly orchestrated by a nation-state.

The risk that a company will do “bad” things with the data it gathers from your use of its service is, by all appearances, infinitesimal. Will some features feel creepy to some? Of course. Some people don’t like having their name and order remembered by the barista (a human form of big data).

The risk that a company will be breached and the data put to uses not intended by the company is not only there, it is significant. This is a technology problem our industry needs to solve. One thing is clear: the biggest companies are the biggest targets, and the largest technology companies are (I would assert) the most savvy and adept at these issues. But the problem is incredibly difficult in a world where nation-states lead some of the attacks.

I am not a “get over it” person, but rather an engineer and product person who sees the desire of companies to use data to deliver far better services; that desire is leading to immense innovation in how commerce is conducted and how the internet is used. At the same time, this data is very attractive to bad actors for a variety of reasons, and that is a technical challenge our industry will rise to as it has time and time again. This is first and foremost a security problem due to bad actors. The privacy challenges come from what happens with the data when used by good actors in the system, and this is a much more nuanced challenge.

I do believe that if you simply want to skip using services that compile data then you should be able to opt out or simply choose to use alternative services, but there is no obligation for any given service to provide a non-personalized, non-targeted, non-historic version. The market for such services is likely to shrink, and that might be unfortunate. The free market is like that. Sometimes something highly valued at a point in time becomes non-economic or scarce as companies compete for a larger market. I don’t have an easy answer for customers who want to use the internet without a trail—I strongly support the services and technologies that allow for that (encryption, Tor, etc.). I think the evidence is that this is not where most people will go. Historically, if there is money to be made then businesses will be created to seek that opportunity.

But What About Web Privacy

Why all the kerfuffle over privacy, again? My view is that this is rooted in the experience of the web, which is just getting worse, and many are frustrated. Security breaches of private information compound this concern and are symbolic of technology challenges on the Internet. Security breaches are unrelated to privacy in the sense that breached systems are not ad-funded user profiles but wholly orthogonal, essential line-of-business information. The challenge is that our collective experience on the web is the result of a mountain of technologies built out over the past 20 years in an effort to deliver services to consumers that are paid for by advertising.

The act of delivering services paid for by advertising is not only inherently good and beneficial, but also essential to the amazing spread and growth of internet services. It should be readily apparent that the rise of advertising-supported internet services is singularly responsible for the mass-scale growth of billion-customer services. That is only good.

It isn’t that all my data is in the cloud waiting to be mined—AT&T, Comcast, Blockbuster, American Express, Nordstrom, Safeway, Amazon, UPS, and more already had a crazy amount of information about me and I would love exactly none of that to be in the hands of a bad actor. Even Apple knows every song I ever bought (if this happens to leak, I am saying now that I bought Barry Manilow Live for a friend’s birthday party), every place I ever used Maps to visit, and all my mail and contacts. Google has much of this too. I know Google is not selling my name and that information to anyone, but like a credit card insert they will match an advertisement for services to “people who visit New York”.

The challenge is that in an effort to improve the revenue yield of services, all too often the available technology solutions were used, abused, or otherwise misused in ways that degraded the overall experience of the internet for too many. The problem is that web ads are awful experiences and getting worse. We need technical solutions to this 20-year pile of legacy features.

My view is that the horrible experience of browsing the web amid ads that “interfere” with using it, along with the fear that this experience will become what we all experience in the pristine world of mobile, is at least partially (and likely largely) responsible for privacy becoming the anchor of this debate. In Steve Jobs’ “Thoughts on Flash” he was completely accurate about the problems of the runtime. That runtime was used as the basis of ads. He may or may not have been against ads, but many people were quite frustrated by the technical execution of ads in Flash, and so his appeal resonated. It is just that few of us could do much about it.

The industry did not stand still. Over the years we have seen browsers add pop-up blocking. While advertisers were angry, people cheered. We’ve seen a dramatic rise in ad-blockers. Yet we still see a constant stream of complaints on the web about “wait 5 seconds to see your story” or user experiences that test even the most savvy gamer when it comes to finding and clicking the close box. But this is the technology choice on the web, not the nature of advertising itself.

Fixing the Web

I was a strong supporter of evolving the browser to support features that allowed consumers to choose how to secure their experience. From popup blockers to Do Not Track, I advocated for this type of control. The reason was not that I am against “free” services or want everyone to browse anonymously without footprints. The reason is that the web got so messy that the recourse seemed to be to help people as individuals.

On a personal note, championing features like Do Not Track (DNT) was one of the more educational chapters in my own career. I had never experienced the “slippery slope” defense quite like that—to say the feature was treated as an on-ramp to the apocalypse would not overstate the reaction. The argument against DNT was that overnight the free internet would vaporize, which was also the argument against popup blockers or the removal of Flash.

There are three parts of today’s web that are technology problems waiting for solutions. The solutions are either difficult or undesirable, but I believe solving them would go a long way towards moving the debate beyond a choice between “selling your personal information” and “there will be no services on the Internet”.

¶ Ads are awful. First and foremost, today’s ad formats are relatively hostile to consumers using services. We all know that ads want to be noticed. That’s a given. The openness and programmability of HTML5 and browsers created an open season on the technology used in ads, and while there is plenty of innovation there are more negatives. Even the biggest and most popular sites can grind a browser to a halt on a powerful desktop PC. With television, for years advertisers tried tricks like raising the volume of an ad in order to get you to notice. This was fixed in the US by government regulation with the CALM Act, which took effect in 2012. We need such a movement for the internet. I believe the advertising networks and internet standards bodies already in place (which create standard ad formats) could easily create standards around flyovers, popups, tiny close boxes, interstitial timings, audio and video playback, and so much more. We of course don’t want to stifle innovation, shut off A/B testing, or otherwise become the government, but certainly we can create better technology and designs for advertising. The popularity of browser-based ad blockers is not about “privacy” but just about a desire to read more stuff and use more services in some reasonable way, I would assert.

¶ Content responsibility is lacking. One of the biggest places where advertising meets real-world security concerns is when advertisements themselves are vectors for security exploits. The advertising networks of today are well-known repositories for the distribution of zero-day exploits and malware insertion. While one could fault the browsers for not being able to secure against this, one must also fault the ad networks for allowing this content in the front door. One can also fault the sites that host it. As a consumer visiting a site, neither the site nor the ad network acts responsibly relative to this type of content. All will do takedowns, but the effort to own up to this challenge is not what I think it could be (there’s plenty of hard work, but not enough). Ad formats themselves are part of the challenge. Should ads be allowed the full power of the browser and runtimes? Should we define a maximal set of capabilities that ads can use? There is lots to be done here.

¶ Accountability is non-existent. While we were working on “Do Not Track” one of the things that surprised me the most was the lack of accountability for some core information about people, which I believe is a privacy challenge. Some have talked about this as the problem with cookies (again a polarizing way to describe something, since I also like not having to sign into services I use all the time on my home PC again and again). But mostly this is probably the most unsavory part of the web when it comes to this issue of privacy and security. Quite simply, once I start using a site, especially if I am logged in, the ability for that site to see and store my internet traversal history is just too easy. The accountability for this is nowhere to be seen. Even the most trustworthy of sites are in need of improvement along these lines. When I visit nytimes.com (just an example of an incredibly reputable site) I am visiting dozens and dozens of other web sites just on the home page. Below you can see a portion of the Web Page Privacy view from Internet Explorer showing all the URLs that compose a page. This is quite a surprise to most people, who think that the links might go to a few photos or to some other servers for code or features. These are not subdomains of nytimes.com but whole other sites (honestly, do I really trust the domain ru4.com, which by the way resolves to “Perfect Privacy Incorporated” but has no home page or corporate information page?). What are the privacy policies of these sites? What information do they get? Do they combine this information across their customers? It isn’t that they do this; it is that as a consumer I have no knowledge, and as a site I visit, the Times offers no transparency. This to me feels like a big challenge. I don’t mind the Times having my browsing history for the Times, not at all, especially if I am logged on. I do mind all these companies with mysterious URLs following me around. When a company has my transaction history and uses it to deliver services, it does not send that history to others; it has others send information to it to use the history. Why can’t sites implement ads and analytics in this manner? You can see the lack of accountability in the terms of service, as shown for the nytimes.com site below.

nytimes.com web page privacy policy

nytimes.com license agreement showing lack of information about linked sites.
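
You can get a rough sense of this yourself. Here is a minimal sketch, using only the Python standard library, that lists the third-party hosts referenced in a page’s static HTML; it undercounts script-injected trackers, and the page URL is just the example from the text.

```python
# A minimal sketch: enumerate third-party hosts referenced by a page's HTML.
# Static markup only, so trackers injected by script are not counted.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class RefCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()

    def handle_starttag(self, tag, attrs):
        # Collect the host of every absolute src/href reference.
        for name, value in attrs:
            if name in ("src", "href") and value and value.startswith("http"):
                self.hosts.add(urlparse(value).netloc)

page_url = "https://www.nytimes.com/"  # the example used in the text
first_party = urlparse(page_url).netloc
collector = RefCollector()
collector.feed(urlopen(page_url).read().decode("utf-8", errors="replace"))
for host in sorted(collector.hosts - {first_party}):
    print(host)
```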

Solutions are on the way

I believe there are deep concerns that the mobile internet will devolve into the desktop internet and we will lose the clean slate we currently have. This would be a shame because we need ad-supported services on mobile as well. We know the current experience of using a browser on mobile is racing towards the desktop—ironically because the browsers are getting better with video, script and runtime support and more.

The recent announcements by Facebook and partners show how innovation can happen. By providing a mechanism for ad-supported content to appear natively within a Facebook app experience, in a format that does not (necessarily) support many of the bad practices of the desktop web, my view is that advertising can be more natural and at the same time relevant. My hope is that the runtime and/or the policies do not support the arbitrary nature of the desktop web, and that the experience does improve dramatically and stays that way for a while.

While all of this is taking place in the consumer world, the business computing landscape is being altered by the encroachment of consumer services. The natural reaction of the enterprise is to disable or turn off this access, which many believe is a losing strategy. In this context the biggest concern I would bring is the notion that the business internet will live alongside the consumer internet and all will be good. As a consumer, when I use my mobile device for work and personal life, I want that conflation to exist in the service data I use. If I buy books for my own personal interest but use them for work or even get reimbursed by work, then I want my Amazon profile to have that. If I use Maps to navigate to partners or customers I don’t want to sign on and sign off or use a different Maps instance; I want to train a single instance. If I use a productivity tool for home and work, I want the usage and quality data to flow to the service so that the product gets better for how I use it. In all cases, the unified view of “me” makes everything I use better, and that makes me a better employee using tools. I’d hate for IT to see privacy as another thing to enforce by degrading my experience.

¶ Ultimately, the web will continue to evolve and free services will continue to grow. That is super important to the future of the next 3 billion internet users. There will always be a distribution of views on how much information should be saved and which services will be valued. We should not judge each other on that any more than we can expect every service to cater to every perspective on this topic. For a moment, we can look at the challenges we’re discussing and see the engineering and product development work that can be done. I believe we can collectively improve the current situation if we take steps to design new products and services that meet the needs of all parties.

—Steven Sinofsky (@stevesi)

# # # # #

Written by Steven Sinofsky

June 7, 2015 at 4:30 pm

Posted in posts


The Lesson of “Don’t forget all the parts move”

Today’s WSJ has a book excerpt about the demise of RIM/Blackberry. It is a fascinating story that also holds a core lesson for product managers (myself included): the lesson of “don’t forget all the parts move”.

While hindsight is always 20/20, when you are faced with a potentially disruptive situation you have to take a step back and revisit nearly all of your assumptions, foundational or peripheral, because whether you see it or not, they are all going to face intense reinvention.

In disruption theory we always talk about the core concept that disruptive products are better at some things but worse at many of the things (tasks, use cases, features) currently in use by the incumbent product. This is the basis of the disruption itself. In reading the excerpt it is clear that, out of the gate, this is exactly how RIM’s executives chose to view the iPhone: as targeting a different market segment and different use cases:

If the iPhone gained traction, RIM’s senior executives believed, it would be with consumers who cared more about YouTube and other Internet escapes than efficiency and security. RIM’s core business customers valued BlackBerry’s secure and efficient communication systems. Offering mobile access to broader Internet content, says Mr. Conlee, “was not a space where we parked our business.”

There’s a natural business reaction to want to see a new entrant through the lens of a subset of your existing market. Once you can do that you get more comfortable doing battle in a small way rather than head-on. You feel your market size will trump a “niche” player.

The problem is that such perspective assumes a static view of the market. You’re assuming that all the other attributes of your implementation will remain advantaged and the new competitor will fail to translate that single advantage into a broader attack.

What happens, almost all the time, in technology is that disruptive entrants gain ecosystem momentum. There’s finite bandwidth among the best people (engineers, partners, channel) to improve, integrate, and promote products. Once the new product appears compelling in some way, there’s a race to gain a perceived first-mover advantage. Or said another way, the leaders of the old world were already established, so a new platform yields a new chance to be a leader. There’s a mad dash to execute whether you’re building leather cases, integrating line-of-business systems, or selling the product.

When I read that first quote, I thought how crazy it was to think that the rest of the internet, which includes email and messaging, would not race to try to establish new leadership in the space. The assumption that everyone is sitting still is flawed. Or, just as likely, many of those incumbents will choose to assume their small part of the Blackberry world will move ahead unscathed.

In a platform transition, everything is up for grabs. If you’re the platform you have to change everything, not just a few little things. First, no matter what you do the change is still going to happen, which means that you don’t have the option of doing nothing. Once a new platform gains momentum and you start losing your partners (of all kinds) or can no longer attract top talent to the platform, you have seen the warning sign, and so has everyone else.

As Blackberry learned, you can’t take the path of changing just a few things and hoping that adding what you perceive as the one missing piece to your platform will make the competitor go away. You can see how this worked in the example of the Storm device introduction, which aimed to add a bigger screen while maintaining the Blackberry keyboard feel. In other words, the perception was that the screen alone was what differentiated the device.

The browser was painfully slow, the clickable screen didn’t respond well in the corners and the device often froze and reset. Like most tech companies launching a glitchy product, RIM played for time. Verizon stoked sales with heavy subsidies, while RIM’s engineers raced to introduce software upgrades to eliminate Storm’s many bugs. “It was the best-selling initial product we ever had,” says Mr. Lazaridis, with 1 million devices sold in the first two months. “We couldn’t meet demand.”

Storm’s success was fleeting. By the time Mr. Balsillie was summoned to Verizon’s Basking Ridge, N.J., headquarters in the spring of 2009 to review the carrier’s sales data, RIM’s senior executives knew Storm was a wipeout. Virtually every one of the 1 million Storm phones shipped in 2008 needed replacing, Verizon’s chief marketing officer, John Stratton, told Mr. Balsillie. Many of the replacements were being returned as well. Storm was a complete failure, and Mr. Stratton wanted RIM to pay.

Of course we know now that there were many more elements of the iPhone that changed and it was no single feature or attribute. Every platform shift involves two steps:

  • Introduction of a new platform that does some new things but does many existing things in a suboptimal way.
  • Evolution of the new platform to achieve all those old scenarios but in new ways that often look like “hey we had that back then”.  For example, consider the rise of secure messaging, mobile device management, and new implementations of email. All of these could be viewed as “Blackberry features” just done in a totally different way.

That’s why all the parts are moving, because everything you ever did will get revisited in a new context with a new implementation even if it (a) means the use case goes unanswered for a while and (b) the execution ends up being slightly different.

On a personal note, I was a Blackberry user from the earliest days (because our team made Outlook and the initial Blackberry was a client-side integration). When I saw the iPhone I was one of those people fixated on the keyboard. I was certain it would fail because I couldn’t peck out emails as fast as I could on Blackberry. In fact, I even remember talking about how Windows phones at the time had touchscreens, so if that became popular we would have that as well. That summer, I waited on line to pick up my iPhone and was convinced of the future in just a few minutes.

You would have thought I would have been prepared. Previously, I had experienced a similar lesson. I had yet to be convinced of the utility of the internet on a phone, which the iPhone also solved. Of course my lens was clouded by the execution of the phones I used most (Blackberry and Windows) and by the fact that the internet didn’t want to work on small screens and without Flash. I would visit Japan several times a year, see the DoCoMo i-mode phones, and remained a big skeptic—my friends from Japan still make fun of me for not seeing the future (by the way, at that time SMS had yet to even gain traction in the US, and friends from Europe found that mysterious). What I failed to recognize was that in the i-mode implementation a full ecosystem solved the problem by moving all the parts around. Of course i-mode got disrupted when the whole of the internet moved to mobile. So perhaps it wasn’t just me. No matter what happens, someone always said it would. But saying it would happen and acting are very different things. Though I do recall many exchanges with Blackberry execs after I used the iPhone, trying to convince them to build a great browser.

The lesson always comes back to underestimating the power of ecosystem momentum and the desire and ability of new players to do new things on a new platform.

A while back I made a list of all the moving parts of the Blackberry collapse. You can read it here, Disruption and woulda, coulda, shoulda.

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

May 23, 2015 at 11:00 am

A Product Person’s Perspective on Enterprise Selling

An architectural diagram for enterprise selling.

Many of the technical founders I have the opportunity to work with are well-versed in the architecture and features of their products (and products in general), but when it comes to possessing a similar view of the sales process there’s a good chance they are staring at a blank whiteboard. That’s to be expected because the skills and experience to do enterprise sales, and to do it well, are earned in the trenches over many years. Selling, specifically enterprise selling, is not something that comes naturally to most product-minded people.

Note: This post originally appeared on a16z.com on May 20, 2015.

Just as with code, one can devise an architectural view of how enterprise selling works. And as with code, it is best to approach the process of selling with an architecture rather than just diving in and writing code. Unlike code, if you act in haste or otherwise squander an enterprise opportunity there’s not really a chance for a rewrite or an undo, so it is best to approach with caution. Of course I’ve made many of my own mistakes and have had ample time to learn what I did wrong and which of my assumptions were invalid. This post is a framework to help make sense of all the motions and actions that go on in the scope of enterprise sales.

Most technology leaders are consistently amazed at the depth and sophistication of enterprise selling. Since most engineers or technologists have little experience with big-ticket selling, other than perhaps buying a car, this isn’t a surprise. While you might not be a designer or engineer, as a product person you have empathy for, or a sense of, the skills, roles, and processes they use. The same usually can’t be said for sales and selling.

There’s really only one key factor that distinguishes enterprise selling from everything a product person knows, and that is that enterprise selling ends with the product and starts with the enterprise. Of course that is the complete opposite of what one might normally think, where everything starts with a product. Even with the most amazing and inventive product ever conceived, selling at the enterprise level and enterprise scale requires inverting your perspective. There’s an analogy many will understand. Most product people know you don’t build a product by starting with a specific technology just because it is new, cool, or novel. Rather, one starts by solving a problem of some sort, where applying a technology creates an amazing new experience that addresses a need or solves an articulated problem. Enterprise sales is similar in that you don’t start with a solution (your product) and then get to the problem (the customer need, articulated or not).

Enterprise selling ends with the product and starts with the enterprise.

There are tons of amazing resources on enterprise selling. They range from the specifics of sales motions for a single sales person all the way to models for setting quotas, organizing resources, and training the sales organization. Most every seasoned enterprise sales person has their favorite toolset, and part of hiring and managing a team is empowering them to make use of the tools they are comfortable with (just as you would for engineers). One resource I value is the book SPIN Selling, which is sort of a classic and spawned a whole ecosystem of supporting tools and guides.

A framework for product people

At an abstract level you can think of enterprise selling as following three steps:

First you set out to build a relationship with the enterprise customer that rests on a foundation of a deep understanding of their unique context. This relationship is formed by learning about the customer and organization, including how they do what they do, what they are struggling with, and where they are heading. The biggest risk in most enterprise sales cycles is assuming you know these things—that one bank is like any other bank, that all Oracle shops are similar, or that everyone is trying to rip out SAP or move to the cloud. Almost all failures in enterprise selling, or at least all deal-closing crisis moments, are caused by rushing this step or failing to learn all that needs to be learned about an enterprise.

Second, you articulate your vision or view of the “world”: given your understanding of the enterprise, you can begin to talk about how to add value to the organization. The notion of “adding value” is key to this dialog, as “solving problems” can set you up to fail too early in the process. The reason for this is that enterprise IT knows all too well that any new system begins with sunk costs, reduced productivity, and in general a period of investment. All this happens before the return or value is brought to the enterprise.

Finally, the last step is to establish a partnership based on a mutual understanding. You understand the enterprise. The enterprise understands how you aim to add value to their organization. The partnership process itself is how you go from pilot to implementation to expansion, sometimes called land and expand.

Let’s dive into each of these briefly with the goal of offering a flow or outline of the sorts of actions and motions that should take place. It is worth emphasizing that there’s a lot of unique value in how a given enterprise account manager approaches a specific customer with a specific type of product — so I’m not implying this is a one-size-fits-all model.

Relationship

The goal of the first milestone is a strong understanding of what things are like within the enterprise you are selling to. To a product person this is very similar to the first steps in developing product-market fit and needing to assess the potential of the market. Much of the early effort will be consumed by understanding whether you're even working with the right team or people within an organization, and often there are multiple parties. Too often a product person believes you start at the CEO and shortcut this step, but an experienced account manager knows great deals can die in the middle of an organization just as easily as at the top.

Culture. First you want to understand a bit of the culture of the enterprise. How risk averse might they be (for example, where do regulations fit in, or how leading edge is the organization as a whole)? You want to understand a bit about how decisions are made and how technologies and products are evaluated. You want to make sure you are sensitive to the basics of how the company likes to do business (how formal it is, what times meetings take place, where coffee, meals, or drinks fit or don't fit, and so on). This is especially true as you venture into industry segments that are unfamiliar to you. If you've ever done business in a different country, a good mindset is to treat the initial contacts with an enterprise with that same level of cultural sensitivity and learning—better to learn rather than assume when you're in an unfamiliar environment.

Organization. Every enterprise sales person I have worked with begins to build out the physical and logical org chart from the first engagements. You want to learn the management reporting structure as well as the power structure. You want to understand the budget and decision-making processes. Great enterprise sales people also know to invest in the full org chart and not just focus on the areas of most authority and power—you never know where an advocate or obstacle might appear as a deal progresses.

Infrastructure. IT is all about infrastructure, and understanding how the company really works will be key to speaking their language. Not only are all enterprises subtly different in infrastructure, they take great pride in the differences they maintain and the rationale behind those differences, no matter how odd or crazy they might seem. I remember once pitching Office to an enterprise customer that had customized the installation of Office uniquely for over 100 different job functions. Not only did I think this was nuts, it created a massive support burden for the company. But in their world this was key to their productivity, and my job was not to "save" them but to show them how a new product could make their jobs even easier (all of that comes later; in this step you're just learning). You need to understand all the basics of infrastructure: authentication, networking, BYO, approved apps, messaging, email, and more. Many of these environmental variables will almost certainly impact your solution's applicability and deployment.

Needs. Once you understand the culture, organization, and technical infrastructure, you have the foundation to understand the enterprise's needs. This is often the trickiest part of the first phase because it turns out enterprise IT, even though they are experts in technology, most often express their needs in terms of their own understanding or expectations of what products can do. As you catalog and understand needs, what you are really doing is learning how to bridge from the solution your advocates believe they want to a solution that might be a much bigger leap (and much better, but very different) than expected. The pressure to listen to customers and act directly on needs (often described as requirements) is intense, and over the course of product development and sales it will be a significant challenge for just about every company.

My earliest days working with enterprise customers taught me a lesson that I have to admit still makes the “here and now” product person in me a bit uncomfortable. It was told to me by a former IBM field sales engineer who said “I sold more product today based on selling the future product than I ever sold by just selling what was ‘in my bag’.” While that’s most decidedly a cynical view, the reality is that enterprise selling is never about what a product can do right this moment—that’s just practical given the purchase, deployment, and training cycles within a large organization. Therefore a huge part of enterprise selling is articulating your unique technology/world view in the context of the relationship you have developed.

Vision

To many this can sound like selling vaporware, an old term for software that never shipped. That isn't true at all. This is about selling a broad concept that will both endure for years and take years to fully realize, but can start delivering ROI in the near term within a known time period and cost. Putting this in the context of startups, it is much like when you hear venture capitalists talk about investing in the team relative to the idea or invention—everyone knows the maze from idea to product to business will reshape and re-scope the initial idea, but the bet on the team and people will endure.

Inspiration. What is your inspiration for the product? This is where you talk about the experience that led you to rethink the landscape. In the enterprise space this is most often about revisiting assumptions the industry has made about the costs of technology, where the line sits between hardware and software, or some massive shift such as the move to mobile or the cloud. Relative to a startup, you can think of this as the founder's story—what led the founder to start a company is very much in line with what inspired the creation of a new enterprise product or service.

Uniqueness. While your inspiration is important, it is likely that many people will have the same inspiration. In fact, enterprise products often arrive in waves, when it appears as though many companies are doing the "same" thing all at once. If you are in enterprise IT, then for sure every single vendor meeting you take these days is about cloud, mobile, BYO, security, and more. In a sales motion, you don't want to spend a lot of time being the umpteenth person touting the "changing world" but want to quickly articulate the insight, secret ingredient, or radical implementation that you have and believe is unique. Too often this can be dismissed as marketing, but really this needs to come from your product core—what is it that you see that no one else sees (to paraphrase Peter Thiel)? Building a better mousetrap is great, but ROI in the enterprise does not come from rip-and-replace for a 10% improvement, so your inspiration should be pretty significant.

Competitors. You are not alone. No one in enterprise IT believes you built the one and only product that does most of what you do. Coming to an enterprise sales engagement with a detailed understanding of competitors shows respect and acknowledgement of reality. There are two types of competitors you need to understand fully. First, you need to be versed in the current marketplace competitors and how you compare to them. Often the best tool to view this is a classic “magic quadrant”—just be forewarned you have to substantiate claims carefully and be prepared for the “fans” of competitors to confront you (and be prepared for your competitors to sell against your characterization). If you’re doing this right, you are not creating new comparison criteria but using incumbent/competitor criteria as a starting point. Second, you need to be versed in how the enterprise is already addressing (or trying to address) the problem space. This is just as much a competitor—in enterprise software the easiest product to buy is the one you’ve already got in place and no one gets fired for doing that. While you might be negative towards your market competitors, it is incredibly important to be respectful of implemented competitors or homegrown solutions even if some in IT might mock their own choices.

Roadmap. The key deliverable for a vision is your roadmap of where you are heading. To a product person a roadmap might look like a schedule of features, and some in IT would most certainly love that sort of information. In practice, the tool you want to employ is much less detailed and granular than a product roadmap. Instead, you want to use a roadmap to establish a credible view of how you intend to both refine your existing proposition and expand your solution space. Why is this so critical? Enterprise IT is all about planning and long-term commitments within the organization. Budgets, headcount, organization, and internal service relationships all depend on "knowledge of the future". At the same time, IT also wants to build new capabilities within the company, and your roadmap can become a part of the IT roadmap. Obviously everything here is a fine balance between "promise and deliver" and falling into the trap of "over-promise and under-deliver". One personal example was the introduction of Outlook and SharePoint and how adding them to the roadmap caused significant consternation in how IT thought of Office, which then crossed from personal productivity to messaging and then to server infrastructure. In hindsight, the introduction of a product literally brought together parts of IT that previously never worked together!

Partnership

Transport yourself a couple of months (!) past that first meeting with a potential customer, and you've got a chance to really start to sell a product. For most product people this is about deployment, but to an enterprise account manager this last phase is about building a partnership. There is a distinction. The goal is to become long-term partners, and the tactic is to get the software into deployment and usage, not the other way around.

This phase always takes a bit longer than expected and, for the first customers of a new product, involves a great deal of collaboration between engineering and sales. In later stages this repeatable process tends to become the role of Customer Success. For early stage products and companies, this phase is the equivalent of product-market fit as you work with the customer to refine the product (and pricing and more).

Proof. The first step is literally a proof of concept. The goal is to get the product up and running in the customer's environment, which could be as simple as single sign-on or a few dedicated clients, or as complex as deploying an isolated network with server hardware. It is likely during this stage that you will need to gain access to data, users, and systems that make the proof more relevant. It is important to be flexible and patient because for many pilots this is the most frustratingly slow part of the process. Do keep in mind that most every IT organization routinely does dozens of proof-of-concept (PoC) deals a year across many departments, so be careful not to count this as "done", but do count it as "success".

Implementation. The implementation phase is when you go from PoC to a deployed solution, aka production, within a department or company. Those building their first enterprise product are often shocked at how long it takes to roll out a new service or system within an enterprise even after it is running and working. We often compare this to signing up for a new SaaS service, when in reality most companies are filled with employees who are far more worried about failing to get their work done than they are excited to try new tools and change the way they work. While many think most of the learning happens during the PoC, the astute enterprise product person knows that the real learning that informs future product features takes place during the phased-in implementation, when the product is in use by a wider audience outside of IT (if applicable).

Expansion. From a business perspective, the implementation counts as "land" and the next step is to "expand". Once you've landed and seen early success, your advocates within IT will want to explore different ways to expand—remember, IT is like everyone else, and when something goes well they want to get credit and visibility for the solution. Expansion is really the accelerator for a business, and as most experienced people will tell you, there's almost always more revenue in customers already paying you than in starting all over again with new potential customers. Enterprise products should be equipped, in both the business model and the product itself, to expand in depth and breadth of usage to maximize this phase of growth. There's potential for a bit of friction here, as sales wants to keep the price down and not partition product value in order to land the deal. Management incentives across sales, marketing, product, and engineering play a critical role in finding this balance.

Replacement. The very last step in the partnership with an enterprise customer is replacing an existing system. I purposely put this last because most every product person thinks that when you have a new product, the first line of sales is to explain what the customer can replace or decommission if they buy the new product. Every IT person knows that this is exactly the very last thing you do, and that there is a long tail of usage for any implemented system before actual replacement, no matter how inevitable that replacement may be. This is important to internalize in terms of building a partnership because every running system has a champion or advocate who bought and deployed it, so a poor selling technique is to challenge that person too early. If you play everything correctly, someday you will be the system that keeps running long after it should—that's something to keep in mind!

* * *

If all of this seems like a lot of work and a great deal of calendar time, well then you read it correctly. Average enterprise sales cycles for seven-figure deals are easily 3 months and often up to 9 months, depending on pre-existing systems. While every once in a while there are shortcuts or magical products, by and large this is how enterprise selling goes. It also makes a lot of sense, because you're going to collect a lot of money every year and your product will become an important part of a business.

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

May 22, 2015 at 10:00 am

Posted in a16z


Hiring Your First Product Manager

As a technical or product-focused CEO/founder of a growing company, a challenge one eventually faces is making that first product manager hire. Like most founders, there's a good chance you're seeing ongoing challenges balancing the ever-increasing workload as CEO and starting to feel a sense of distance from the product as the needs of sales, marketing, hiring, and more pull you away. You might be wondering how you can spend more time on what you love, the product, while recognizing that as CEO you must grow the strength of the organization while also focusing on the contributions you can make uniquely from the CEO role.

This is where making the first product manager hire is necessary and also a unique challenge. Most of the time, this hire is postponed as long as possible. You can cover for the missing resource with additional late night mails, some extra last minute meetings, and so on. The time this really hits home is when feature work or decisions turn into re-work or re-thinking. That's the sign it is time to hire some help. Engineering resources are precious and timelines are always tight—being the founder-pm-bottleneck is no way to iterate your way to product-market fit.

The short answer to this next step is two-fold. First, at an emotional level this is extremely difficult for most every founder (btw, in a big company if you are starting a new product team it turns out this same dynamic applies). It is likely you will talk to quite a few candidates and no one will ever seem quite right. Second, you are going to have to trust your team a great deal on the fit and in doing so adjust your own approach to product management as part of the on-boarding process.

Every situation will be unique and there is no single rule for what type of skillset, experience, or seniority will be right for you and your company. The most important thing to do before you start the process is to agree with your co-founder(s) and team on the profile of the role you wish the new PM to play in the organization.

Traditionally this would be framed along the lines of a job description:

Seniority. Are you looking for a VP, Director, Lead? Most typically you start here but these descriptions might fall short of really defining the role and so you end up seeing a lot of candidates.
Domain. Perhaps you are looking to fill in a specific technology background as part of this hire? As the product evolves you might be expanding into product areas that could use additional strength from product management. The question is really how much this will change the bottleneck you might be seeing.
Skillset. Are you looking for a candidate with more of a design, project management, or engineering background?

Each of these is an example of necessary but not sufficient criteria for kicking off the search for your first product manager. Because the first product manager hire is so unique for a product-focused founder, it is worth offering a framework or characterization of the role that you might start with.

Once you arrive at a characterization of the role, you are in a better position to narrow the search by more traditional experience and skills descriptors. Each of the profiles below can work; the key is being clear, at hiring time and in ongoing management, about the expectations for the new product manager.

Extra hands. Most every founder I speak with starts the description with needing an extra set of hands, eyes, and ears—someone to offload some work to, follow up, track down, etc. This is often a positive stop-gap and can certainly work short term. Medium to long term it can also starve you of the opportunity to grow the organization, or might mean you set yourself up for yet another product management challenge down the road. If you go with this approach, the important thing to watch is that you do not solve your bottleneck problem by just moving it to another person or adding a level of bottleneck-indirection.

Process chief. Are you looking to offload the unpleasant or less intellectual aspects of product management, such as the details of tracking, documenting, and other "process" issues? It is quite natural, in hiring the first product manager, to focus narrowly on wanting this set of skills added to the team. It isn't uncommon for engineering to also seek this addition. The good news is this is almost always helpful. The challenge is that it adds another person to the team in the short term but might not really reduce the load or bottleneck, and could unintentionally add friction to the process. A careful balance is required.

Apprentice. Many times the goal of bringing on the first product manager is to reduce the risk of the hire by passing on a senior or experienced person, going with a relatively junior hire, and working to grow him/her into the role that you actually need. This can often be the most comfortable approach for the team because there's a clear view of who the boss is and relatively clear expectations with the new PM. Generally the challenge comes down the road, when you bring in more product managers and have to decide if the apprentice is "senior enough". To avoid this challenge the best thing to do is put in place a true training and growing situation: provide the right level of responsibility and feedback/mentoring. Sometimes the difficulty is in hiring at this level but expecting too much, too soon.

Mini-me. Another model I have seen is searching for a first PM who is a reflection of your own skillset and experience—a mini-me. For many this younger-sibling approach can be comfortable and easy to model and understand. It is a matter of finding the candidate who shares your product point of view and vision, and then your way of getting things done. The interesting challenge with this approach is not the way it works, but the way it might not work. Will the new PM amplify not just your strengths but your weaknesses? Will this miss out on the chance to bring more perspective, diversity of thought and approach, or new ideas into the team? Are you being imitated or copied?

Successor. The most difficult, or bravest, first PM hire is to hire your successor. In making this hire you're bringing on someone who will simply be the final voice in product management. This is a scary approach, and depending on the direct responsibilities you have and where you are in the product-market fit journey, it might be the best approach for you and the team. The challenge I've seen most often with this first hire is that the seniority level doesn't quite match the organization yet. The new hire's first step is to build out a team and bring on more people. This might be exactly why you're bringing on the person (just as when you bring on less familiar functions), but generally it isn't the case for product management, as most often the first hire needs to be hands on and will buy you some time or runway. On the other hand, sometimes the right candidate comes along—one who has the right level of personal contribution, domain knowledge, and scale experience—and that might make for the right fit.

Of course hiring the person that fits this description as well as all the right product skills could turn into a unicorn hire, so definitely be careful about over-constraining the challenge. Of course, hiring is just the first step. As you onboard, assign and delegate work, and manage a new product manager you will also need to be incredibly deliberate in your own evolving role in product management. All too often the most-fitting hires can run into challenges when there is a mismatch between intention and execution of product management responsibility.

One word of caution. If you are concerned or even "afraid" of hiring a strong product person with a point of view, perspective, or just a streak of stubbornness, then think for a moment. Are you labeling the person a poor fit for "culture", or are you actually more concerned that they might make your own personal transition more challenging? If you're working to always bring on strong people, don't compromise at this juncture.

Hiring the first or early product managers is always a big and difficult step for technical/product-focused founder(s). When done correctly it can be a positive and rapid accelerator for engineering and a positive for the company overall, as it makes room for you as the leader to focus on the work you can do uniquely.

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

April 7, 2015 at 2:00 pm

Posted in posts


Frictionless Design Choices

No one wants friction in their products. Everyone works to reduce it. Yet it sneaks in everywhere. We collectively praise a service, app, or design that masterfully reduces friction. We also appreciate minimalism. We love when products are artfully distilled down to their essence. How do we achieve these broadly appreciated design goals?

Frictionless and minimalism are related but not necessarily the same. Often they are conflated, which can lead to design debates that are difficult to resolve.

A design can be minimal but still have a great deal of friction. The Linux command line interface is a great example of minimal design with high friction. You can do everything through a single prompt, as long as you know what to type and when. The minimalism is wonderful, but the ability to get going comes with high friction. The Unix philosophy of small cooperating tools is wonderfully minimal (every tool does a small number of things and does them well), but the learning and skills required are high friction.

  • Minimalist design is about reducing the surface area of an experience.
  • Frictionless design is about reducing the energy required by an experience.

When debating a design choice, feature addition, or product direction it can help to clarify whether a point of view originates from a perspective of keeping things minimal or reducing friction. If people discussing a decision start from this common understanding, I bet a decision will be reached sooner. Essentially, is the debate about adding a step or experience fork, or is it about adding something at all?

Product managers need to choose features to add. That is what makes all of this so difficult. As great as it is to stay pure and within the original intent, if you and the team don't enhance the capabilities of your product, then someone will do what you do, but with a couple more things or a different factoring, and you'll be left in the dust.

Therefore the real design challenge is not simply maintaining minimalism, but enhancing a product without adding more friction. Let's assume you built a product that does something very exciting, has very low friction to usage, and does so with a minimal feature set. The next efforts are not about just watching your product, but about deciding how to address shortcomings, enhance, or otherwise improve the product to grow users, revenue, and popularity. The risk with every change is not simply failing to maintain minimalism, but introducing friction that becomes counterproductive to your goals.

When you look back you will be amazed at how the surface area of the product has expanded and how your view of minimalism has changed. Finding the right expression of new features such that you can maintain a minimalist approach is a big part of the design challenge as well.

There’s an additional design challenge. The first people who use your product will likely be the most enthusiastic, often the most technical, and in general the most desirous of features that introduce friction. In other words you will get the most positive feedback by adding features that ultimately will result in a product with a lot more friction.

Product managers and designers need to find the right balance as the extremes of doing nothing (staying minimal) and listening to customers (adding features) will only accelerate your path to replacement either by a product with more features or a product with less friction.

Low-Friction Design Patterns

Assuming you're adding features to a product, the following are six design patterns to follow, each essentially reducing friction in your product. Ignoring them creates the need to learn, consider, or futz—anything other than racing through the product to get something done.

  • Decide on a default rather than options
  • Create one path to a feature or task
  • Offer personalization rather than customization
  • Stick with changes you make
  • Build features, not futzers
  • Guess correctly all the time

Decide on a default rather than options. Everything is a choice. Any choice can be A/B tested or debated as to whether it works or not. The more testing you do, the more likely you are to find cohorts of people who prefer different approaches. The natural tendency will be to add an option or setting to let people choose their preference, or worse, to interrupt their flow to ask for a preference. Make a choice. Take a stand. Every option is friction in the system (and code to maintain). When we added the wheel to the mouse in Office 97 there was a split in the team over whether the wheel should scroll or zoom. From the very first release there was an option to appease the part of the team that felt zoom was more natural. Even worse, the Word team went and did a ton of work to make zoom performant, since it was fairly unnatural at the time.
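To make the cost concrete, here is a minimal sketch (in TypeScript, with invented names, not any real API) of how an option multiplies code paths while a default keeps exactly one:

    // Option-driven: every preference becomes a setting to build, test,
    // document, and maintain, plus a branch in every relevant code path.
    interface WheelSettings {
      wheelAction: "scroll" | "zoom";
    }

    function onWheelWithOptions(delta: number, settings: WheelSettings) {
      if (settings.wheelAction === "scroll") scrollViewport(delta);
      else zoomViewport(delta); // a second path to build and keep performant
    }

    // Default-driven: make the call once; the option and its code never exist.
    function onWheel(delta: number) {
      scrollViewport(delta); // one behavior, nothing for the user to configure
    }

    function scrollViewport(delta: number) { /* move the viewport */ }
    function zoomViewport(delta: number) { /* change the magnification */ }

The second branch is not just UI; it is test matrices, documentation, and performance work that exists only to preserve a preference.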

Create one path to a feature or task. You add a new feature and all is good—you're in X in your product and then you can do Z. Then someone points out that there are times when you are doing Y in your product and you also want to do Z. Where there was once one path to get to a feature, you now think about adding a second path. Maybe that sounds easy enough. Then a few iterations down the road you have 5 different ways to get to Z. This whole design process leads to shortcuts, floating buttons, context menus, and more. Again, all of these are favored by your early adopters, add friction for everyone else, and add code. Pick the flow and sequence and stick with it. The most famous debate of all between Windows and Mac was over right click, and it still rages. But the design energy to populate context menus, and the cognitive load of knowing what you can or cannot do from there, are real. How many people have right clicked on a file on the Windows desktop and clicked "Send", only to be launched into some Outlook configuration dialog, when it would have been frictionless to always know that inserting an attachment from within mail works and nothing will fail?
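A hypothetical sketch of the idea in code (the names here are invented): every entry point delegates to one canonical flow, so there is exactly one sequence to design, test, and teach, no matter how many surfaces eventually point at it:

    // Stub for the real UI; declared so the sketch is self-contained.
    declare function openShareDialog(docId: string): void;

    // The single path to the feature.
    function shareDocument(docId: string) {
      openShareDialog(docId);
    }

    // New surfaces delegate to the canonical flow rather than
    // growing their own variant of it.
    const entryPoints = {
      toolbarButton: (docId: string) => shareDocument(docId),
      contextMenu: (docId: string) => shareDocument(docId),
      keyboardShortcut: (docId: string) => shareDocument(docId),
    };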

Offer personalization rather than customization. Early adopters of a product love to customize and tweak. That's the nature of being a tech enthusiast. The theory is that customization makes a product easier to use because every use case is different enough that the time and effort saved by customization is worth it and important. In managing a product over time, however, customization becomes an engineering impossibility to maintain: when you want to change behavior or add a feature, nothing is guaranteed to be where your design assumes it is. The ability in Office to reorganize all the toolbars and menus seemed super cool at the time. Then we wanted to introduce a new scalable structure that would work across resolutions and input devices (the ribbon). The problem was not just the upgrade, but the reality that the friction introduced by never knowing where the menus might be (at the extreme, one could open a document that would rearrange the UX) was so high the product was unusable. Enterprise customers were rearranging the product such that people couldn't take courses or buy books on how to use Office. The constraint led to the addition of a single place for personalization (the Quick Access Toolbar), which ultimately allowed for a much lower friction design overall by enabling personalized efficiency without tweaking the whole experience.
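A sketch of the shape this can take (invented names, not the actual Office implementation): the command structure is fixed for everyone, and personalization lives in one bounded slot, so documentation, training, and upgrades stay valid:

    type CommandId = string;

    // The fixed structure never moves, for any user.
    const FIXED_MENU: readonly CommandId[] = ["new", "open", "save", "print"];

    class QuickAccessStrip {
      private pinned: CommandId[] = [];

      // Personalization: users populate one well-known slot.
      pin(cmd: CommandId) {
        if (!this.pinned.includes(cmd)) this.pinned.push(cmd);
      }

      // Everything outside the strip is identical for every user,
      // so the design can evolve without breaking anyone's layout.
      render(): CommandId[] {
        return [...FIXED_MENU, ...this.pinned];
      }
    }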

Stick with changes you make. The ultimate design choice is when you change how a feature used by lots of customers works. You are choosing to deliberately upend their flow and add friction. At the same time, the job of designing a product is moving it forward to new scenarios and capabilities, and sometimes that means revisiting a design choice, perhaps even one that has become the standard. It takes guts to do this, especially because you're not always right. Often the path is to introduce a "compatibility mode" or a way to turn your new product into the old and comfortable product. This introduces three problems. First, you have to decide what the default will be (see the first rule above). Second, you have to decide if/how to enhance the old way of doing things while you're also adding new things. Third, you have to decide when down the road you remove the old way, but in reality that will be never because you already told customers you value it enough to keep it around. But adding compatibility mode seems so easy and customer friendly! Ultimately you're creating technical debt that you can never dig out of. At the same time, failing to make big changes like this almost certainly means your product will be surpassed in the marketplace. See this HBS case on the Office 2007 Ribbon design http://www.hbs.edu/faculty/Pages/item.aspx?num=34113 ($).

Build features, not futzers. Tools for creativity are well-known for elaborate palettes of formatting, effects, and other composition controls. Often these are built on amazing "engines" that manage shapes, text, or image data. Historically, tools of creativity have prided themselves on exposing the full range of capabilities enabled by these engines, and these vast palettes of features came to define how products compete in the marketplace. In today's world of mobility, touch interfaces, and timely/continuous productivity, people do not necessarily want to spend time futzing with all the knobs and dials; they seek to minimize the time from idea to presentation—call this the Instagram effect. Yet even today we see too many tools that are about debugging your work, which is vastly different from getting work done. When a person needs a chart, a table, a diagram, or an image, how can you enable them to build it out of high-level concepts rather than the primitives your engine supports? I was recently talking to the founder of an analytics company struggling with customer input on tweaking visualizations, which was adding complexity and taking engineering time away from adding whole new classes of visualization (like maps or donut charts). You'll receive a lot of input from early customers to enable slightly different options or adjustments, which will both challenge minimalism and add friction to your product without growing the breadth of scenarios your product enables. Staying focused on delivering features will enable your product to do more.
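One way to picture the difference (a hypothetical sketch; the string output stands in for real rendering): the feature is one high-level call that produces a finished result, while the futzer is a pile of knobs between the idea and that result:

    type Datum = { label: string; value: number };

    // Feature: one high-level call with an opinionated, finished result.
    function donutChart(data: Datum[]): string {
      const total = data.reduce((sum, d) => sum + d.value, 0);
      return data
        .map((d) => `${d.label}: ${((d.value / total) * 100).toFixed(1)}%`)
        .join("\n");
    }

    // Futzer (avoided): arc(), fill(), strokeWidth(), labelOffset(), ...
    // Each knob is friction between the idea and the finished chart.

    const chart = donutChart([
      { label: "Mobile", value: 60 },
      { label: "Desktop", value: 40 },
    ]);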

Guess correctly all the time. Many of the latest features, especially those based on machine learning or statistical models, involve taking action based on guessing what comes next. These types of features are magical, when they work. The challenge is they don't always work, and that drives a friction-filled user experience. As you expand your product into these areas you're going to want to find the right balance of how much to add and when; being patient and not guessing too much too soon is a good practice. For better or worse, customers tend to love features that guess right 100% of the time, and even if you're wrong only 1% of the time, that 1% feels like a much higher error rate. Since we know we're going to be learning and iterating in this regard, a best practice is to consider how frictionless you can make incorrect guesses. In other words, how much energy is required to skip a suggestion, undo an action, or otherwise keep the flow going and not stop to correct what the software thought was right but wasn't? Let's just call this lessons from "bullets and numbering" in Word :-)
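A minimal sketch of a frictionless wrong guess (invented names, assuming a plain-text editing model): apply the guess optimistically, and make restoring the prior state a single cheap action rather than a dialog or a settings hunt:

    interface Suggestion {
      apply(text: string): string;
    }

    function applyWithUndo(text: string, suggestion: Suggestion) {
      const before = text; // capture the exact state before guessing
      return {
        text: suggestion.apply(text), // apply the guess optimistically
        undo: () => before,           // one cheap action restores everything
      };
    }

    // Usage: auto-continue a numbered list, but let one keypress back it out.
    const result = applyWithUndo("1. first", { apply: (t) => t + "\n2. " });
    const restored = result.undo(); // the user hit Escape; nothing is lost

The measure of the design is not the guess's accuracy alone, but how little energy the 1% of wrong guesses costs.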

Finally, a word of caution on what happens as you expand your customer base when it comes to adding features. Anything you want to do in a product can be “obvious” either from usage data or from customer input. The challenge in product management is to create a core set of principles or beliefs about how you want to move the product forward that allow you to maintain the essential nature of your product while adding new features. The tension between maintaining existing customers via stability or incremental improvements versus keeping pace with where the marketplace is heading is the classic design challenge in technology products.

It shouldn't be much of a surprise, but a great deal of product bloat comes from adding the obvious feature, from directly listening to customers, or from failing to stick with design patterns. Ironically, efforts to enhance products for today's customers are often the very features that add friction, reduce minimalism, and lead to overall bloat.

Bauhaus to Bloatware

This march from Bauhaus to Bloatware is well-known in our industry. It is part of a cycle that is very difficult to avoid. It is not without irony that your best and most engaged customers are often those pushing you to move faster down this path. Most every product in every segment starts minimal and adds features over time. At each juncture in the evolution of the product there is a tension over whether additions are the right marketplace response or simply bloat.

This march (and tension) continues until some complete rethinking introduces a new minimal product addressing most of the same needs but from a different perspective. The cycle then starts again. Operating systems, databases, instruction sets, peripheral connections, laptops, interfaces, word processors, and anything else you can name have gone through this cycle.

This re-evolution or reimagination of a product is key to the long term viability of any technology.

By adhering to a set of design principles you are able to expand the breadth of use cases your product serves while working to avoid simply adding more friction to the core use cases.

—Steven Sinofsky (@stevesi)

After publication three typos were fixed and the example of personalization clarified. 

Written by Steven Sinofsky

March 16, 2015 at 10:30 am
