Learning by Shipping

products, development, management…

New Posts Have Relocated


Thank you for stopping by and reading posts here on Learning By Shipping.

The learning continues, but new posts can now be found on Medium.  Please visit medium.learningbyshipping.com. This site will remain archived and supported and occasionally will receive cross-posts.

Thank you for your support,

—Steven Sinofsky

Written by Steven Sinofsky

June 28, 2016 at 11:08 am

Posted in admin, posts

Disruption’s Long, Slow, Complex Journey


The original 1995 HBR article on disruption from Joseph Bower and Clay Christensen was titled “Disruptive Technologies: Catching the Wave”.

If you work in traditional retail you had a very bad week of headlines on reported earnings, reset guidance, and public market carnage. If you read those on your smartphone while also chatting with Alexa to place an order, these looked more like headlines for “Duh” magazine.

From Philz to Mo’z to Coupa Café, one does not need to go far in Silicon Valley before bumping into a conversation about disruption in some form or another. Despite being a term that originated on the east coast, disruption is a key part of the language of Silicon Valley. Defining a company or technology as disruptive, or declaring a company or industry to be disrupted is a basic conversation starter. While most startups aim to be disruptors, those that become successful will one day become targets of disruption. That’s why it is always helpful to dig into the complexities of this important business dynamic.

Disruption is a complex dynamic that is much easier to accurately declare after the fact rather than while it is taking place. In fact, if you are part of a successful company or product there’s a good chance a competitor is already using the language of disruption against you. Two of the key elements of disruption that are often overlooked in this dialog are worth some discussion:

  • Duration of entire disruption timeline
  • Impact on every business attribute

Duration of entire disruption timeline

When you read about disruptions that have taken place in the past, such as Blackberry versus iPhone (my old post), you can easily get the impression that disruption can almost be marked by a specific date or event. But Blackberry usage did not drop to zero, nor did the company shut its doors, with the launch of the iPhone in 2007. Having a point in time is great for a narrative, but it doesn’t help at all if you are on either side of a disruption.

The past couple of weeks of market carnage in the retail sector included Macy’s, Walmart, Gap, Nordstrom, Kohl’s, L Brands, Ralph Lauren, and more. The only common thread in reading about all of these was Amazon which continues to dominate. One story trying to explain the challenges faced by Gap analyzed the situation, “[t]he Gap, which once suffused the zeitgeist, now barely registers.”

That’s pretty harsh. It is also a story from March 2006, more than a decade ago: The Shrinking Gap. While Gap might very well be in its twilight, it has been one very long and slow decline.

Retail’s mass disruption, from a Silicon Valley perspective, probably started when we ordered our first holiday books from Amazon in 1994, over 20 years ago — before the phrase disruption entered into our vocabulary. It is interesting to consider the book category and the impact Amazon has had (directly or indirectly) on Barnes & Noble simply because Barnes & Noble still exists.

One way to measure disruption is to consider the public market view of a company over the course of this increasingly long timeline. Here is $BKS from the time Amazon started selling books:

Barnes & Noble stock price starting from the launch of Amazon.


The journey, at least through the lens of the stock price, is anything but straight down over the past two decades. In fact there are several significant peaks along the way. It is easy to dismiss these as Wall St dynamics like M&A activity, management changes, or even macroeconomic changes. In practice, the reason disruption takes so long is due in part to all the actions incumbents can take to try to avoid becoming the victims of the disruption that is “obvious”. In fact here’s Macy’s, which by all accounts has had quite a run relative to the market over the past 5 years, all while being disrupted:

Macy’s stock price from the arrival of Amazon.com along with the S&P 500.

Why does disruption drag out? I mean we all used our first iPhones with apps in 2008 and Blackberry is still around. It almost seems like cruel and unusual punishment. Shouldn’t companies just be put out of their (or our?) misery? It is not so simple.

First, and most importantly, we should not confuse an ex-post view of the world with what was happening in real time. For every disruption that actually happens, there were a lot of people, beyond a single company or technology, saying it would not happen. I remember having dozens of conversations, say in 2005, with customers (“users”), PC makers, and disk drive manufacturers about the rise of flash storage. As technologists working on operating system support, we had to make a bet on the future, but there was a loud and varied chorus of reasons why flash was either a long way off or would never replace spinning disks: cost, capacity, ever-increasing needs, customer choice, and more. In some sense everyone was right, but we each had different views of the timeline. This week Western Digital finally closed its purchase of SanDisk (Western Digital Starts New Era As SanDisk Acquisition Completed). That sure took a while when I think back to those conversations with disk drive makers a decade ago.

Second, and this is the tricky part, incumbents really don’t just stand still like the proverbial deer in headlights. In fact, because incumbents have market presence, capital, business relationships, and a lot of people, they have the capability (and shareholder responsibility) to take many different actions. These actions tend to look rational and responsible, and because of that market presence they often receive considerable attention.

Historically, we can look back on the Barnes & Noble Nook as almost desperate. But if you recall the in-store presence and broad outreach, and that the competition was also just an “eReader”, then the Nook seems less like desperation and more like formidable competition for Amazon, except it wasn’t. At the time, though, many put it on equal footing with Amazon. In fact it seemed rather bold and strategic.

When the hard drive makers saw the rise of flash, they responded on two fronts. First, they focused on very high capacity drives, which would be cost prohibitive if done with flash. Second, they added flash to their spinning drives to bring along some of the benefits of flash. That sounded amazing on paper. In practice, these hybrids ended up bringing the existing unreliability, form factor, and power consumption to customers who already had more storage than they needed for their shrinking, battery-operated devices.


In the retail space, all of the earnings calls this week included discussions of the “online segment” of the major retailers. In the retail world, the equivalent of adding flash to a hard drive is “omni-channel”. Retailers talk about how much their online store is growing and how important it is to have both a physical presence and an online presence. The key strategy is that their existing assets are critical parts of the growth that everyone is seeing in the new online business. A Macy’s employee said, “[T]here’s a lot of investment being made in digital growth, which, by the way, is not all digital sales…Part of that is omnichannel investments, so the customers can easily go back and forth between stores and the Internet.” This isn’t new and even Walmart has been talking about that for quite some time.

There’s one word that sums up the incumbent response to disruption: hybrid. The pattern is almost always the same: the new technology or approach that appears to threaten an incumbent is declared best expressed by combining the new with the old. Time and time again, hybrid is the one thing that never works, because you can never distill disruption down to a single attribute to be added to your existing business. More often than not, adding something also makes things worse in every dimension. Yikes.

Impact on every business attribute

It would be really great if as an incumbent facing disruption, all you needed to do to respond and thrive was just add something to all you were already doing. All the spinning hard drive makers needed to do was add flash. All retailers needed to do was stand up a web site. All Kodak needed to do was embrace digital (oh wait, they invented digital).

In our coffee shop discussions about disruption we tend to simplify how disruption is playing out and often zero in on some specific technology or business approach that appears to have incumbents hamstrung. We like to say “cloud” or “SaaS”, for example. As with most things, it turns out there is a lot more there.

Every successful business is made up, at the highest level, of a vast number of decisions and processes, often described as the Four P’s or the marketing mix. Within each of Product, Price, Place, and Promotion we see many attributes:

One of the many examples of the Four P’s of the marketing mix as a graphic.

The attributes of any of the P’s can be an arbitrarily long list. In fact, the more successful a business and product become, the more knowledge the company gains about its processes and approach. In turn, these become the very constraints that can’t be solved.

Instead of looking at a well-worn example, let’s look at a hypothetical example of a typical on-premises software company facing a new cloud competitor. From a high-level technology perspective, the difference is clear between cloud and on-prem. Digging into those details, however, one realizes that the architectural approaches are totally different (scale out v scale up). Continuing through the technology stack, you start to think about the tools and languages used which contribute to how the product is built — for example, integrations with third-party APIs via services compared to injecting customization code. One can look at the product from the systems management perspective and consider that most on-prem software was designed to be tweaked, customized, and actively managed by a nearby IT professional compared to cloud software that aims to bring simplicity, reliability, security by minimizing those touch-points. At the extreme, one (me) might assert that if you have on-prem software then something approaching zero lines of your existing code is “appropriate” to developing a modern cloud solution to compete.

That’s a real problem, though, because all of your features, your value proposition, your positioning, and your go-to-market depend on that code. And you’re behind your new competitor in developing a cloud solution (by the way, your competitor probably has a fraction of the features, customers, revenue, profits, partners, and more that you have). The idea of making a fresh start as a product or technology seems, literally, absurd. It is the rough equivalent of shutting down all your retail stores to focus on a web site.

As if that weren’t enough, beyond Product the other 3 P’s also contribute to your assets — and now your liabilities. An example we see in cloud companies competing with many on-prem companies is a classic channel conflict. The on-prem world of software often served small/medium business through channel partnerships, often called VARs. These partners would sell the software, but also sell the services of setting up a local server, deploying, and managing your software. This is a very healthy business, and because of the need for a local presence (i.e. someone to come fix the server or back it up) the channel partnerships work well. In the cloud era, the utility view of SaaS all but eliminates this level of complexity. Channel programs can be replaced by broader outreach, lead generation, and a product that can be used without such a deployment step. Once again, an entire “P” has been uprooted and replaced with a new one. If you think this is easy, consider the elaborate relationships Barnes & Noble maintained with the book publishers. Not only could the Nook not disrupt the need for big box stores, it also had to maintain the relationships with publishers, who were not exactly wild about digital books to begin with. Even with the Nook, the company found itself tied to all the other aspects of its business.

This hypothetical example is playing out across industry segments in enterprise software right now. While we talk broadly about the cloud as a technology, there is depth and breadth to the disruption that poses an incredible challenge to incumbents. When something is disruptive it is almost every single aspect of a business that is impacted.

The classic view of a response would be to drop everything and just do what it takes — hire new people, get them a different building, relax all constraints, and so on. Boy is that easier said than done. This is where the ability to change quickly and even the capital (or public market) constraints prove challenging. All the relationships a company develops as it achieves success, from customers to partners to investors and even to employees become challenges to overcome in the face of disruption. That is why attempting to “respond” to disruption is very much like trying to rebuild an airplane while in flight.

The allure of the hybrid shows here. A hybrid gives you the comfort of focusing on a single attribute of disruption and “addressing” it. Considering that disruption is almost always pervasive throughout the 4 P’s, one can see the weakness of such an approach. In the SaaS world, one only need look at the crazy channel programs incumbents employ in order to provide incentives and comfort to existing partners who no longer have servers to deploy and manage all while working to create a “hostable” version of the existing product.

Retail shows this challenge in a very visible way today. Imagine you oversee thousands of retail outlets (Gap, Walmart, anything). These stores are capital intensive and in need of constant nurturing. For example, the Gap business model requires inventory turnover and new displays every couple of weeks. Your whole model is based on the cycle of new merchandise, advertising it, attracting customers, then repeating. You know that if you slow that cycle down or do less of any element, your sales drop. If your sales drop then employees become demotivated, the public markets react, and then of course customers notice and stop thinking your store is a great place to go. You are stuck in an over-constrained situation.

You have to find the capital to build out an entire “stack” to compete with the likes of Amazon. This includes merchandise that changes every minute, not every two weeks. It includes items you don’t normally carry. You might price things to match entire baskets of multiple brands and items rather than a commitment to a specific line. You need to promote what is being bought by consumers, not what you committed to promote based on shelf-space deals or what you acquired. You pay employees for the code they write and the data they analyze, not their in-store presence. You have to accomplish all of this while growing your retail business and stores, because if you don’t, the capital to fund this hypothetical expansion won’t be there. In fact, for every dollar you spend on something new, someone will tell you that the old thing is failing simply because you spent that dollar somewhere else! (That dollar can be across any of the 4 P’s!)

During this time there are many actions incumbents will take that make it appear they are going to power through the disruption. Certainly it starts with a hybrid of some form when it comes to the product or service offering. Capital will be deployed to channel partners to keep them engaged. Positioning will be used to de-position the disruptor or to re-position the offering (for example, this week we learned a lot about down-market segments of retailers doing well, as if Amazon won’t also be selling those lines).

The fact that disruption is so sweeping is why it takes so long and why it is so difficult for incumbents to respond. As a disruptor one needs to be prepared for a long, drawn-out battle on many fronts, because competitors don’t just pack up and go home or retreat to their existing businesses (well, most eventually will do that).

While we talk about disruption in simple terms, it is enormously complex. The next time you’re having that conversation try to think about all the aspects of the company being disrupted and ask yourself what you would do. The lack of an answer might surprise you.

The goal of every startup is to build a business that creates an incredible and enduring foundation. In doing so it means the forces of disruption you once created will be pointed at you. The sooner you recognize the challenge and shift investments and efforts to what is new the better. When considering the likes of Facebook, Google, Netflix, or Amazon one sees a new generation of companies that were created with an intrinsic understanding of disruptive forces. One can see how these companies respond to change differently. That is very exciting because I think there is a new “theory” being created now and a true change in how companies operate.

— Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

May 14, 2016 at 3:25 pm

Posted in posts

My Tablet Has Stickers

When I received my new 9.7” iPad Pro I decided to break tablet tradition and personalize it with stickers, just as I’ve done on laptops (and my Surfaces) for years. I did so because I began with the mindset that this iPad would replace my laptop(s) for full time use (here laptop means my Surface(s), Yoga, MacBook, and desktops). It has been almost a month and that is exactly what happened. My sticker investment paid off. I don’t feel like I’m forcing myself into this mode of working; rather, I am more productive, futz way less with my “computer”, and find many things easier. Work is different, but better.

You can listen to @BenedictEvans and me discuss the strategic implications of this shift on the latest @a16z podcast, Finally a Tablet that Replaces Your Laptop. This post is about adapting to change and some of the things I learned along the way.

Unlike many “use a product for month” tests this is not an experiment. For me this is a deeply held belief that the rise of smartphones (specifically starting when the iPhone launched) would have a profound impact on the way we all use “computers”.

The transformation spans hardware (thinner, lighter, smaller, cheaper, longer battery life, instant on/off, touch, sensors, connectivity, etc.), operating systems (more secure, reliable, maintainable, robust, etc.), and app software (refactored, renewed, reimagined, etc.). It is the combination of these attributes, however, that causes a change as fundamental as the leap from mainframe to workstation, from character-based to graphical OS, from desktop to laptop, from client/server to web — perhaps equal to all of them rolled into one shift, if for no other reason than that the whole planet is involved.

Note: This is not a Mac v. Windows or iOS v. Android discussion, so no snickering please. This is about a shift to a “modern mobile” computing platform from hardware to software and the cultural changes that surround that. These two posts provide context that for me has been long in the making: Continuous Productivity and Mobile OS Paradigm.

The Debate

Every (single) time the discussion comes up about moving from a laptop/desktop (by this I mean an x86 Windows or Mac) to a tablet (by this I mean one running a mobile OS such as Android or iOS), there are several visceral reactions or assertions:

  • Tablets are for media consumption and lightweight social.
  • Efficiency requires keyboard, mouse, multiple monitors, and customizations and utilities that don’t exist on tablets.
  • Work requires software tools that don’t/can’t exist on tablets.

Having debated this for 6+ years, I know now isn’t the time to win anyone over, but allow me to share a perspective on each of these (some of which are also discussed in the podcast and detailed in the posts referenced above):

  • Far and away the most used productivity tool is email (like it or not). The reality is that these days most email is first seen, and often acted on, on a smartphone. So without even venturing to a tablet we can (must!) agree that one can be productive without a laptop, even on a tiny screen. Attachments are viewed, opinions are formed, projects are approved, and more, all on our smartphones. Tablets make this even easier by adding a keyboard and a bigger screen. Beyond email, the next most used productivity tool is a browser. The same holds here, as tablets have “full” browsers with fully capable rendering.
  • Invariably, when debating tablet v. laptop the issue starts with the keyboard and mouse. Many forget that there was a time before the mouse. Even at the mouse’s introduction, critics pointed to its lack of precision compared to the absolute row/column positioning of the keyboard. Then both software (drawing programs, for example) and work (the value of work products relying on those tools) changed. The keyboard, surprise, has found a home on tablets now. In terms of utilities or extensions, many of these lack analogs on tablets. Many are just irrelevant and arguably for the most part represent points of (historic) personal preference. In particular, so much of the x86 world is about managing files, local storage, and devices, and those just aren’t tablet things.
  • If you are a developer or a developer-like professional then you’re not going to use a tablet (yet). That’s ok, but no need to try to spoil it for the rest of us. Geez, Windows Office was compiled on Xenix (GIK) machines for the first 10 years. I recognize the special value computer scientists place on bootstrapping a system (it was a huge day when we could compile MS C++ with the compiler under development). If all you do is look at 30” spreadsheets writing VBA and creating PivotTables for your execs then a tablet isn’t for you (yet). And so on… This post is about people who use a laptop the way that we know many, many people do. It is a good idea for the debate not to center on “developer” scenarios, since the vast majority of people don’t do these things, especially when one considers the degree to which many on earth will experience a smartphone as their first and only “computer”.

The crux of all of these is that in times of platform shifts there are two types of people. There are people that embrace the shift, perhaps out of enthusiasm, fandom, or maybe just because they don’t know any better. Then there are people that do know better, but just see the challenges in changing and use those challenges to anchor criticism.

While I am optimistic about change, I am realistic about the pace that change can really permeate through the broad range of people, organizations, cultures, use cases, and more. The fact that change takes time should not cause those of us that know the limitations of something new to dig our heels in. Importantly, if you are a maker then by definition you have to get ahead of the change or you will soon find yourself behind.

Change is difficult, disruptive, and scary so we’ll talk about that. Having gone through quite a few of these shifts, I’ve learned two things.

First, I tend to embrace the shifts sooner and suspend reality sooner. Sometimes this means I ultimately go backwards or undo a change since some shifts don’t really pan out, but it speeds up my own evolution. For what it is worth, most changes eventually happen even if there are a few false starts (Newton, TabletPC).

Second, time and time again our industry finds that those who fixate on obstacles see only one aspect of the change, rather than how one change can cause many other things to change in reaction to a new normal. Those who felt the web would never work because dialup was slow did not predict the rise of low cost, broadly deployed home broadband. People (like me) who said no one could ever communicate through SMS did not consider that we would collectively develop a new way of communicating that was different from email and Word. The way to think about this is that no technology is really the center of a system, but rather one of a constellation of bodies under the influence of each other.

In one of the amazing Steve Jobs interviews with Walt Mossberg and Kara Swisher (June 2010), when asked about tablets replacing laptops, Jobs said of this functionality gap that it was “just software”.

Respectfully, he was only partially right. While more and better software was needed, the other part of this shift is the accompanying broad range of other changes that will take place. If you doubt those changes are happening now, then consider how much of your work life/process/culture has changed with the introduction of smartphones. Tablets just took longer because they are not merely additive but substitutes. The change is more like email, which took two decades after its invention to become something resembling a universal tool.


The Shift

Platform shifts are difficult.

As difficult as they are, we more often than not over-estimate platform shifts in the short term but under-estimate them in the long term.

By far, the biggest obstacle to change is that most people have jobs to do, and with those jobs come bosses, co-workers, customers, and others who have little empathy when the work you committed to doesn’t happen, or doesn’t happen the way they wanted, because you were busy doing it some new way.

Benedict talks about the “weekly status report” that is a universal tool in most large corporations. Someone crunches numbers, gathers updates and follow-ups, and compresses these into a “deck” or an elaborate status mail to be sent out COB Friday or late Sunday night. Most people who do this work don’t get a vote in how the work is done, so it remains a fixture in a company.

Then one day an intern or new hire shows up and doesn’t know better and boom there’s a status web page or everyone uses a SaaS product with live data and a dashboard. Not only did the tool change, but the whole process and deliverable is different. That’s how change happens in even the most conservative companies. (Personal note, one weekend in 1995 I came in and moved all the specifications for Office from an IT-maintained SMB file share to a web server running under my desk).

That’s the easy part. The hard part is that change, especially if you personally need to change, requires you to rewire your brain and change the way you do things. That’s very real and very hard and why some get uncomfortable or defensive.

At the heart of the matter is that change is easiest when it is simply generational. It is easier because you can ride out the change and keep doing what you’ve always done. Whether it is a new programming language or a new paradigm like cloud or mobile, one can almost always find rich and rewarding work while avoiding the change. But if you embrace change you have to adopt a change-oriented mindset. I watched many consultants and developers stick to client-server and wait out the web, simply because there was plenty of work maintaining and enhancing the systems they had put in place. The irony to keep in mind is that those systems came about because at one point those client-server consultants were the renegades displacing a mainframe system. See how that goes?

A change-oriented mindset, especially for technology, is one where you force yourself to let go of the models you developed for how things work and learn new approaches. Re-wiring yourself and letting go of that muscle memory and those patterns that often took years to develop and perfect is incredibly difficult in a technical sense. It is also difficult emotionally. So much of our own sense of empowerment comes from mastery of the tools we use and so changing or replacing tools means we are no longer masters but back to being on equal footing with lots of people. No one likes resetting their station on the tech hierarchy.

That feeling of disempowerment is behind so much of the emotional reaction to major changes. A great example of this was the transition from silver-halide film to digital. The very best photographers jumped to technical arguments about the quality of digital images. We can call this the technical buzz-saw: a new technology or approach is dismantled by experts because of very specific and often provable limitations. Over a short time digital got better because of Moore’s Law, and digital cameras for professionals were invented. Even more importantly, the whole workflow of modern photography changed. Sports, fashion, and news were all revolutionized by digital, and film could not compete even if it had more lines of resolution, better color fidelity, or just felt better. A whole new generation of pros and experts sprang up overnight, creating images the experts of silver could not even imagine. If you joined photography after digital you probably look at film as absurdly prehistoric. That’s how change in technology happens.

My four ways to cope with change:

  • Free your brain. You just have to learn the new way of doing something without judging it. People who learned WordPerfect mastered “reveal codes” and once you used a graphical word processor you saw how much easier things could be with WYSIWYG, if you let yourself. My favorite example was always the person with a WordPerfect document “bug” like an incorrect indent or font change that was debugged by looking at the codes and finding the mismatch. Showing them how to just select the text and directly format what you see was a real breakthrough even though many continued to say they preferred the use of reveal codes.
  • Everything was once considered weird, hard, awkward. If you know how to do something one way, being shown a totally different way often scrambles your brain. In fact, just about every new way to do something looks more difficult than what you know. What most people forget is how arbitrary most ways of doing things are in the first place. As much as there is design effort, always remember that someone started with an idea and just honed it within the constraints at the time. In the first computer tools for editing, the general model was to select what you wanted to do (bold, delete) and then choose the text to apply that to. The mouse people decided that selecting text and highlighting first and then choosing the verb would be “better”. And so it was. It is not hard to see how this could have evolved in reverse with no difficulty at all.
  • Things that seem easy are really crazy when you think about it. Some of the more difficult debates center around flows that we get used to yet only in hindsight seem literally crazy. My favorite one of these is solving the problem of drag and drop to an obscured target window. You know what this is — you drag something to the tray/task bar and hold it there over an icon until you get a visual indication of “ok now you are going to highlight a window” and then magically the obscured window pops up and you drag. Putting aside the physical dexterity required, the craziness of this solution only arose because the model for drag and drop could not be reconciled with the model for overlapping windows. Almost no one knows this trick but boy the people that do feel it is essential.
  • Most problems are solved by not doing things the old way. The most important thing to keep in mind is that when you switch to a new way of doing things, there will be a lot of flows that can be accomplished but are remarkably difficult or seem like you’re fighting the system the whole time. If that is the case, the best thing to do is step back and realize that maybe you don’t need to do that anymore, or even better, you don’t need a special way of doing it. When the web came along, a lot of programmers worked very hard to turn “screens” (client-server front-ends) into web pages. People wanted PF-function keys and client-side field validation added to forms. It was crazy and those web sites were horrible because the whole of the metaphor was different (and better). The best way to adapt to change is to avoid trying to turn the old thing into the new thing.

OK, Some Things Were Annoying

My brain was free and I was willing to unlearn 30 years of computing to switch to a big phone with a keyboard attached. Was it all fun and easy? No. Of course I had some experience in switching to an OS that wasn’t a “full PC” in a prior life and as I mentioned I am motivated.

What kind of productivity did I do?

I did everything I do on a laptop and more. Because of what I do now, I’m often at the receiving end of work products from a lot of people and I don’t get to pick the tools I use — entrepreneurs send documents in any number of formats (Keynote, PDF, Docs, Sheets, Word, Excel, PowerPoint, and more), tools for document signing/viewing/securing, information products that have sites or apps, all of the cloud storage products, line of business tools, and so on. I also do a lot of work in initiating creation across all sorts of data types (structured, words, images, video). I communicate a lot across a dozen or more different tools. I write a lot of long posts. I do a lot of email. I make and consume spreadsheets. I create and deliver presentations. I use line of business services. I do a lot of things with data. I use the web a lot. In short, I use a lot of different software to do a lot of different stuff.

I have not yet experienced something where I had to go back to my laptop.

I found nothing “missing”, but I want to offer some balance and provide some fodder for debate. This is not a definitive list or even the important stuff, since part of these shifts is that everyone finds their own 5 things that drive them nuts. There were some things I could not figure out how to work around easily, and I found them about as annoying as trying to drive on the wrong side of the road (i.e. an arbitrary choice I had to adjust to but found difficult).

  • Command-Tab list too short. I switch between apps via command-tab a lot, but iOS limits that to 8 items rather than scrolling the list. So if I happen to want to switch to something I used much earlier I have to get to it via the home screen. Since I don’t always keep the stack in my head I sometimes end up switching twice. I could do slide over and scroll but that is a lot of work. While trivial, this is the one that continues to interrupt my flow the most. It is both a nit and a hit.
  • Apps don’t always have all of the iOS APIs implemented. Once you get into using a lot of apps and moving productivity information between them, the connections between apps matter a lot (especially when you don’t think of everything as a file). iCloud Drive has support for third party locations (Box, etc.) but not every app supports going through iCloud Drive to get to data sources (this is the opposite of sharing to an app via the share sheet; it is the “share from” direction, and most apps are good share destinations). In fact many apps have their own “curated” set of connections. I suspect this will change and is not unlike how most early Windows apps did not use standard OS dialogs (mostly because many didn’t exist until Windows XP). For now it can be rather convoluted to “attach a file” from within some apps. Many talk about this as a PDF problem but it applies to any sort of email attachment where you want to edit and then get it back into the reply message. I try to be cloud-only but with external connections attachments are often preferred. Android with its native file system can work around these limits, but it also opens up a complex, legacy namespace that regular people should never see.
  • Copy/Paste. Copy and paste was a huge invention in the move to a graphical OS. If you weren’t there at the time then you don’t recall how bumpy a road this was. One constantly battled with “clipboard formats” and whether you had a pair of apps that could talk to each other (DIB, BMP, UNICODE; see the Windows standard clipboard formats documentation at https://msdn.microsoft.com/en-us/library/windows/desktop/ms649013(v=vs.85).aspx). Tablets and the web have all but eliminated this, but still a lot of apps don’t make it easy to copy or paste. The social apps and web sites are notorious in this regard. Most apps are not very good at consuming formatted text or images on the clipboard. This will likely change.
  • Screenshots. Screenshots have seen a dramatic rise in use with smartphones. They have become the universal clipboard format. On Windows I use add-ins such as OneNote. The Mac has had great screen capture since 1984. Smartphones and tablets are still brute-force full screen like Windows, but there are no add-ins to select regions or to prevent adding the screen to your photos. Capturing and using a region of the screen takes a lot of steps and you end up with an image in your photo roll. One can only imagine everyone at Apple wrestles with this and maybe it will get fixed soon. I also hope that someone uses EXIF/metadata to make it easy to automatically capture ALT-TEXT for accessibility/readers.
  • Cloud storage fragmentation. When you go tablet you really want and need to go cloud. This is a huge positive. I can lose or drop my tablet, or upgrade, without any worries or lost time. I can switch between tablet and phone and never even think about “where’s that file”. On a laptop, many people (a) use their desktop as a work file area and (b) feel the need to keep everything with them all the time (for performance, latency, etc.). I was both of those people. The cloud should fix all this. Except as a practical matter I end up floating in many clouds all the time. Some say this is a new problem, but of course anyone who has worked in a company with a plethora of file servers, SharePoints, and now cloud storage already has this fragmentation problem. This isn’t new on a tablet. What is new for some is that you can’t paper over the fragmentation by making a local copy of a set of related work products for convenience; in other words, you can’t fool yourself into solving this problem while creating a massive security issue. To solve this I create a temporary cloud location as my “desktop” if it doesn’t break security rules.
  • Keyboard handling. Even though iOS is based on OS X code and keyboards have been part of the iPad since launch, there is very uneven handling of basic stuff like keyboard focus, tab, and shortcuts. A lot of apps, even iPad-only ones, have a lot of bugs in keyboard handling. These manifest themselves as things like being stuck in a field, unable to leave without a touch, or copy failing because command-C isn’t trapped. I suspect very few apps are testing broadly with a keyboard. Android has similar issues as we know. This reminds me of Windows apps that didn’t test with color screens or without a mouse.
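
To make the copy/paste item above concrete, here is a tiny hypothetical sketch (not any platform’s real API; the format names and ordering are invented) of why clipboard interop between a pair of apps is hit or miss: each side supports some set of formats, and a paste only works, and only as richly, as the best format the two happen to share.

```python
# Hypothetical model of "clipboard formats" negotiation between two apps.
# Richest format first; a paste degrades to the best shared format, or fails.
RICHNESS = ["text/html", "image/bmp", "text/plain"]  # assumed ordering

def best_paste_format(source_offers, target_accepts):
    """Return the richest format both apps support, or None if no overlap."""
    for fmt in RICHNESS:
        if fmt in source_offers and fmt in target_accepts:
            return fmt
    return None

# A rich-text editor pasting into a plain-text-only social app: formatting is lost.
print(best_paste_format({"text/html", "text/plain"}, {"text/plain"}))  # text/plain

# An image-only source pasting into a text-only target: the paste simply fails.
print(best_paste_format({"image/bmp"}, {"text/plain"}))  # None
```

This is why apps that only consume plain text feel “not very good at consuming formatted text or images”: the negotiation falls through to the lowest common denominator, or to nothing at all.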

These are a few, but not all, of the things I wrestled with this month. None are deal breakers because I’ve committed and because, frankly, I have empathy for both app makers and platform makers. Many are not even new as I used an iPad Mini with a keyboard for many months when it came out. Most of these are consistent with my Android tablet experience as well.

There are also limitations in apps and sites where the mobile browser experience is delivered to a tablet. Having lived through the first transition to the web, this is totally normal. You can wager that everyone with a product is trying to get their full experience to the mobile OS as fast as possible, whether that is the mobile browser or an app.

I have not yet experienced something where I had to go back to my laptop. Some things surprised me because I thought they would not work (for example, signing legal documents). I’m sure I will find some things soon.

OK, Some Things Were Much Easier

So while there were clearly some problems, I also had a longer list of things that improved in my daily work by switching to a tablet. Some of these might surprise; others simply confirm my bias, given the underlying trends I believe to be playing out.

  • Flow between social apps. On a laptop there are clearly two types of “tabs” in my browser. There are all the news/social tabs and then the work tabs (or apps). Because they mostly are all contained in one app (browser) there is no OS way of switching. So app switching is kind of awkward (switching between ALT+TAB and CTRL+TAB, or Command-Tab and Command-~). On a tablet I have one model for switching between these meta-contexts because everything is an app. I find myself moving with much more flow and things feel much more elegant because of that. The fact that most social and news sites have apps and are focusing their energy there only makes this more of a benefit.
  • Weight, connectivity, instant on/off. I have a small bag and my paper notepad weighs more than my tablet+keyboard. I don’t carry a hotspot or use my phone battery to drive wifi for my keyboard device. I don’t walk down the hall with my laptop open to avoid going into standby (or worse hibernate). Everything with carry weight, 4G connectivity, and on/off is vastly improved.
  • Second screen. Even when using a laptop or desktop people have been using a second compute screen forever (amazing how many people still use a calculator while using Excel). Smartphones are second screens for many people. The same holds when using a tablet even though they are the same computer “type” and close cousins in form factors. I use my phone to do phone things (get a Lyft while typing out an email) or to avoid context switches while working. This is no different than a PC, but somewhat better because the overall contexts are much closer. I don’t have a clear model but just find myself doing this.
  • Different apps make it like having two devices/a second screen. The way I feel like I have two “different” devices is that I did not put all the same apps on each. On my phone I have “phone things” or phone-only things. I don’t put any of the on-demand or service apps on my tablet (reservations, transportation, etc.). I do have most productivity apps on my phone though because I often scan things on the go or quickly share things while mobile. I definitely rely on all aspects of iOS Continuity, which is a great balance of having “the same but different” contexts across devices. I love being able to use messaging apps with a keyboard, since for many people I work with messaging has replaced email.
  • No window management futzing. I no longer dread a restart (or failed resume from standby) messing up my windows. I spend no energy arranging things anywhere. For everyone that says full screen is difficult I just think of all the time I saved by not futzing. It has long been known that most people run most apps “full screen” on laptops anyway but now you just don’t worry. And for every person that complains about full screen apps there are full screen modes in laptop software and writing tools that minimize distractions anyway! One could expand this to futzing in general and the difference between a PC OS and a mobile OS, but that just starts a flame war. This is very much like giving up on manual transmission or self-inflicted oil changes.
  • Presentations. Having a tablet is great for presentations. I can carry it with me or do a fireside chat holding it and running things exactly as I did when creating it. Using AirPlay is easy because it is in most places I go and if not I do carry the dongle for HDMI (which I need on most of the laptops I use anyway). The ability to drop the keyboard and the overall device size/weight make this one a great scenario. I love having my notes handy as well in a “socially acceptable” form.
  • Photos. I am an aspiring amateur photographer and have used every organizer and editor over the years. Part of this transition is just trying to do things in the native way. I’ve found it incredibly easy to take photos and import them from the storage card and then edit (using a variety of tools). I do really love the ability to easily combine photos from my phone camera and several other cameras as well as photos sent to me into one album that shows up across devices easily. This has been a surprise given how much I am wedded to my old way of working.
  • Don’t worry about local files ever. On my laptop I have to force myself not to worry about files. Apps all default to local storage and so it is easy to create backup problems without thinking about it and solving this with sync engines that drain the battery seems suboptimal. On a tablet you can’t mess this up and it is a big relief (and one security hole closed). I basically don’t think about local files. I don’t create copies or security problems. I just stick to the cloud apps.
  • Universal charging. I don’t carry a charger around because even on all day coast-to-coast flights or full days of writing I don’t have any problems with battery life (please no debates over battery life). What is really cool is that I can find a charger and/or a charge cable anywhere. Hotel alarm clocks can charge my tablet. Lyft drivers offer cables. I can buy a charger/cable at Walgreens if I need to. In the remotest parts of Africa I can find a cable and solar. I travel with a third party two port charger and two cables and never worry. Plus I can put low-cost chargers in different places I work without a significant capital expense!

What’s Next?

For many the jury is still out on this shift. I know that. Many people have jobs that require specific tools or work products that can’t be done on a tablet. Many are part of corporate cultures that take time, effort, and evidence before they change. My view is that this shift is now in full swing and we will very quickly see a world where many many people can and will be tablet first, or tablet only.

As makers, being early is essential, otherwise you are late.

The biggest change that will happen is not with the tablet platform or apps. That change has happened. What needs to happen is the cultural change that will permit the technology change to happen.

In old business culture you communicated with your boss or team through in person meetings or printed memos stuffed in an interoffice mailbox. Email was invented 45 years ago but it didn’t become part of mainstream corporate culture until the late 1990’s or early 2000’s. The tool change was one thing but what really changed was how people worked and the expectations of communication. While it took decades, once the change started happening it became normal for someone 5 levels down to send mail to their boss or to fire off an incomplete thought/idea, or to send rough drafts of documents to many people. What were seen as limitations or defects became the positives of the technology shift. It is hard to overstate what a huge change that was at the time.

The shift to this new form factor and new platform will bring with it cultural changes that take advantage of what are perceived as disadvantages. As makers, being early is essential, otherwise you are late.

— Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

April 29, 2016 at 10:16 pm

Posted in posts

Why The Heck Can’t We Change Our Product?

Screen Shot 2016-02-08 at 10.48.08 AM

I drove by the fork in the road and went straight.
— Jay-Z (see author’s endnote)

One of the most vexing product challenges is evolving the UX (user experience/UI) over long periods of time, particularly when advancing a successful product with a supportive and passionate community.

If you are early and still traveling the idea maze in search of product-market fit, then most change is good change. Even in the early days of traction, most all changes are positive because they address obvious shortcomings.

Once your product is woven into the fabric of the lives of people (aka customers) then change becomes extraordinarily difficult. Actually that is probably an understatement as change might even become impossible, at least in the eyes of your very best customers.

The arguments are well-worn and well-known. “people don’t like change”…”muscle memory”…”takes more time”…”doesn’t take into account how I use the product”…”these changes are bad”…”makes it harder to do X”…”breaks the fundamental law of Y”…”what about advanced users”…”what about new users”…and so on. If you’re lucky, then the debate stays civil. But the bigger the product and the more ardent the “best” (or most vocal?) customers, well then the more things tilt to the personal and/or emotional.

Just this past week, our feeds were filled with Twitter rumored to make a big change (or even changing from Favorite to Like), Uber changing a logo, and even Apple failing to change enough. It turns out that every UI/UX change is fiercely monitored and debated. All too often this is a stressful and unpleasant experience for product designers and an extremely frustrating experience for the customers closest to the product. Even when changes are incredibly well received, often the initial response is extremely challenging.

For all of the debates, a product that fails to dramatically change is one that will certainly be bypassed by the relentless change in how technology is used.

Yet change, even of a core user experience, is an essential part of the evolution of a product. For all of the debates, a product that fails to dramatically change is one that will certainly be bypassed by the relentless change in how technology is used. We do not often consider the reality that most new products (and services) we enjoy are often quite similar to previously successful products, but with a new user experience.

Consider that the graphical interface for spreadsheets and word processors replaced whole companies built around predominantly similar capabilities in character-mode interfaces. The competitive landscape for browsers was framed by having minimal interface. Today’s SaaS tools often lead with capabilities similar to legacy products expressed through consumerized experiences and cloud architecture.

Technology platform disruptions are just that, disruptive, and there’s no reason to think that the user experience should be able to smooth out a transition. There’s every reason to think that trying to make a UX transition go smoothly might be a counter-productive or even a losing strategy.

The biggest risk in product design is assuming a static world view where your winning product will continue to win with the same experience improving along the same path that got you success in the first place.

The reality is that if you are not doing more in your product you are doing less, and doing more will eventually require a redesign and rethinking over time. The corollary is that if you are only doing what you’ve always done, but a little better every time, then as a practical matter you are also doing less relative to always-emerging competitive alternatives.

The biggest risk in product design is assuming a static world view where your winning product will continue to win with the same experience improving incrementally along the same path that got you success in the first place.

There are dozens of amazing books that tell you how to design a great user experience. There are seemingly endless books and posts that are critical of existing user experiences. There are countless mock-ups that say how to do a better job at a redesign or how to fix something that is in beta or already changed (see The Homer). Few and far between, however, are the resources (or people) that guide you through a major change to user experience.

No one tells you that you’ll likely face your most difficult product design choices when your product is incredibly successful but facing existential competitive challenges — competitive challenges that your most engaged customers won’t even care about.

This essay is presented in the following sections:

  • User Experience Is Empowerment
  • Everyone’s A Critic
  • Pressure To Change
  • 5 Ways To Prepare
  • 5 Approaches To Avoid
  • Reality

User Experience Is Empowerment

At the most basic human level, the mastery of a tool (a user interface or experience) is about empowerment. Being able to command and control a tool feeds a need many of us share to be in control of our environment and work.

Historically we (all) seek to be the master of our “life tools” whether a shovel, a horse, a car, a PC, or the arcane commands of a modern social network. Something magical happens for a product (and company) when it is so compelling that people spend the time and effort required to master it. At that moment, your product becomes an essential part of the lives of customers, customers who come to believe their world view is shared by “everyone”.

Those customers become your very best customers who see your tool as a path to mastering some of the ever more complex aspects of life or work. Those same people also become your harshest critics when you try to change anything.

A Story

Permit me to share an example of how this empowerment can work, deliberately using an historic example few will remember.

Some time in the mid 1990’s I was a product (program in Microsoft lingo) manager on Office and we were trying to figure out how to transition from a consumer app (yes we used the phrase consumer app back then) to an enterprise platform (that word was new to us). There were many elements to this, but one in particular was the setup and deployment of the Office apps.

As is often the case, customers were ahead of us on the product team when it came to figuring out the most efficient way to copy the Office bits (all 50MB of them) from floppies to a file server to desktops and laptops. Customers had all sorts of things they wanted to do when “installing” Office onto a PC — changing default settings, removing unneeded files like clipart, and even choosing which drive to use for the bits. Many of those could be controlled by the setup program that was itself an app requiring human interaction, save for a few select capabilities. PC admins wanted to automate this process.

As you might imagine, admins cleverly reverse-engineered the setup script file that was used to drive the process. This was a file that did for the comma what LISP did for parentheses. It was a giant text file filled with record after record of setup information and actions, with an absurd number of columns delineated by commas. In fact, and this is a little known embarrassment, the file was so unwieldy that to edit it required a non-Microsoft editor that could handle both the text file length and the line width. Crazy as that was, a very large number of PC admins became experts in how to “deploy” Office by tweaking this SETUP.INF. I should mention, one missing comma or unmatched quote and the whole system went haywire since it was never designed to be used by anyone but the half dozen people at Microsoft who understood the system and were backed by a large number of test engineers.
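
A hypothetical sketch (the real SETUP.INF schema was far larger and these field names are invented) of why one missing comma sent the whole system haywire: every field is positional, so a dropped delimiter silently shifts every later value into the wrong column, and the only defense is to reject the record outright.

```python
# Invented stand-in for a comma-delimited setup record: fields are positional.
FIELDS = ["object_id", "title", "source_file", "dest_dir", "size", "action"]

def parse_record(line):
    """Split one record into named fields; any comma miscount is fatal."""
    values = line.split(",")
    if len(values) != len(FIELDS):
        raise ValueError(f"expected {len(FIELDS)} fields, got {len(values)}")
    return dict(zip(FIELDS, (v.strip() for v in values)))

good = '101, "Clip Art", ARTWORK.EX_, %ProgramDir%, 40321, CopyFile'
print(parse_record(good)["action"])  # CopyFile

bad = '101 "Clip Art", ARTWORK.EX_, %ProgramDir%, 40321, CopyFile'  # one comma dropped
try:
    parse_record(bad)
except ValueError as e:
    print(e)  # expected 6 fields, got 5
```

In the real file there was no such validation at edit time; admins only discovered the miscount when setup went haywire, which is exactly why editing it demanded so much care.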

We believed we had a clever idea to solve a broad range of customization, security, and even engineering challenges which was to replace the fragile text file and third party text editor with a robust database and graphical interface. Admins could then push a customized installation to a PC without being in front of it and the PC could even repair an installation if it somehow got corrupted or munged. We really put a lot of effort into solving this with a modern, thoughtful, and enterprise friendly approach. Consumers would see almost no changes.

Then came the first beta test and what could best be described as a complete disaster. Of course the customers in the beta were precisely the same small set of customers that had figured out how to hack the previous system. We had little understanding that these customers had become heroes within the world of PC admins because they could get a new PC up and running with Office in no time flat. Many had become the “go-to person” for Office deployment in an era when that was a big deal. Deploying Office had become a profession.

That should have been success for us on the Office team except we changed the product so much that everything special the admins had mastered had become irrelevant. They had lost their sense of mastery over their environment.

Looked at through another lens, implicit in the change is a devaluation of the acquired knowledge and expertise of these customers.

The pain was felt by our team as well. Our goal was to make things better for admins by replacing the tedious and error prone work with a platform and slick new tools. As an added bonus the new platform greatly improved robustness and reliability and added whole new features such as install on demand. All the admins saw, however, was a big change and subsequent loss of empowerment. Looked at through another lens, implicit in the change is a devaluation of the acquired knowledge and expertise of these customers.

It would have been easy to see how to take their very specific and actionable feedback and roll the changes back to what we had before, or to introduce some bridge technology (or other “solution” discussed below). On the other hand, the competitive forces that drove these choices including web-based and browser-based tools were increasingly real. The technology shift was underway. It was clear if we did not change the product and disrupt the established processes we would have only hastened a potential disruption of our business. The market was clear about the failures of the architecture we had in place.

Everyone’s A Critic

If you ever want to go for a record on the world’s longest comment thread then author a post on user interface design suggesting how a product should be improved, fixed, re-done, or just rescued. On the internet, no one knows you’re a dog but everyone is a user interface critic and/or designer. Certainly well-intentioned, each person is genuinely responding to challenges they have with a product. Some of the debates over the past few months in Windows circles over the hamburger menu, or the discussions around iOS’s 3D Touch, show the many sides to how this plays out from friends and fans alike. As I type this, Twitter even has a trending hashtag on rumored changes to the product.

Some of my favorite posts have always been the ones where someone takes the time and effort to do a rendering to improve upon an experience I worked on/managed — “this is what it should look like”. The internet and tools make it easy to turn a complex and dynamic system design into a debate about static pixels and an image or two (or twenty). The ensuing debate might be narrowly focused on specific affordances such as “the hamburger menu” or whole themes such as “skeuomorphic versus flat” design.

The presence of renderings or “design alternatives” only adds stress and uncertainty to what is already an emotionally charged and highly uncertain process, while at the same time creating a sense of authority or even viability. The larger the project the more such uncertainty brings trouble to the design, especially as people pile on saying “why not do that?”. It is worth noting that the internet can also be right in this regard, such as this post on iOS keyboards.

The irony is that you’re far more likely to participate in a UX discussion/debate with people with very different starting and end points than those for whom your design is intended.

There’s quite a challenge in the tech dialogs around UX. First, those that participate in the dialog are on the whole representative of power users and the technology elite. More often than not, UX design is seeking to include a broader audience with a wide range of skills or even interest in using more in-depth functionality.

Second, the techies (to use @waltmossberg’s favorite phrase) often prefer innovations that enable shortcuts or options to existing features they see flaws with rather than doing whole new things. Techies tend to want to fix the shortcomings in what they see, which is also not always aligned with solving either broader usage challenges or even business problems. The irony is that you’re far more likely to participate in a UX discussion/debate with people with very different starting and end points than those for whom your design is intended.

While every person can be a critic (often even for products that they do not routinely use), it is also the case that every UI can become old or at least static and open to criticism. Interfaces age both internally (the team) and externally (the market). Internally, you might hit a wall on where to evolve. Customers fall into routines and usage patterns. Nothing new you do is recognized or used.

Externally, your shiny new experience that replaces some old experience (for doing slightly different or totally unrelated things) will eventually become the type of experience that gets replaced. While some think of this in terms of stylistic trends like transparency or gradients, the truth is there are functional aspects to aging as well such as interacting with touch or gestures, bundling of different feature sets, or macro trends in visual design.

When you put together the large number of critics and the certainty of your experience aging, you’re in for a challenging time to evolve your interface. Whether you have a consumer app or site with a billion users, a commerce site, a productivity tool, or a line of business app the challenges are all the same. While the scale or direct economic impact might differ, to those designers and product managers working the problem the challenges and decisions are the same, and to their customers the frustration is just as real.

Pressure To Change

Changing a user experience should in theory be no more or less difficult than a major re-plumbing or even creating the first experience. Some things make changes more difficult, or perhaps at least more open to direct criticism. With a backend change, the most visible effects might be performance or uptime (to be fair, the debates about change within product engineering are just as contested). With UX changes (additions, subtractions, reworking) everyone sees them through a more complex lens.

Changing something that people have an emotional connection to is difficult. An emotional connection creates expectations or even norms, and the natural human reaction is to defend the status quo and maintain control. The discussions of change rapidly deteriorate to preference, taste, argument by analogy, or assertion, all of which are very difficult to counter when compared to facts, stopwatches, or physics.

In my experience there are several key pressure points that drive change, beyond the most obvious of fixing what is broken. You can accept or reject these and advocate for change yourself or leave room for your competitors to capture the leadership and change. Depending on context wisdom could be found from many perspectives.

Pressure to change confronts a successful and engaging product because of:

  • Evolving use cases
  • Locating new capabilities
  • Discovering features
  • Increasing tolerance of complexity
  • Isolating change leads to complex analysis of benefits
  • Competing products and/or changing expectations

Evolving use cases. You might design your product to solve a specific key scenario, but over time you find a different set of use cases coming to dominate. This in turn might require rethinking the flow through the product or the features that are surfaced. In a sense this can be seen as evolving the product to meet real world usage versus theoretical usage, except it will still be a change. We see this in the amount of UX real estate legacy tools devote to print-based formatting or layout design compared to the next generation of tools that surface collaboration and communication features as primary use cases. The changes in Facebook and Facebook Messenger demonstrate this driver. A relatively minor scenario saw increased usage and strategic value driving a significant, and hotly debated, experience change.

Locating new capabilities. As new capabilities are added, almost all of the time those features require some UX affordance. Almost never is there room in the existing product for the new feature. In evolving Office, we reached a point where we literally ran out of room on menus and then on toolbars (originally the goal of toolbars was to be a shortcut for things on the menus or dialog boxes, but soon features on toolbars had no counterparts in menus and dialog boxes). This challenge is even more acute in today's mobile apps that are often "filled" from the very first release. This design challenge can be likened to trying to add a new major appliance to an existing kitchen — there is almost never room to add one until the next major redesign when flows are reconsidered. As a result, when it comes time to add UX for entirely new scenarios your product will change significantly. Recently, LinkedIn chose to invest heavily in content authoring and sharing, but the core experience was aimed at jobs and resumes. The new mobile app, which was reviewed positively in many cases, changed the focus substantially to these new scenarios.

Discovering features. It is easy to want common features to be easy to use and new features to be discoverable, but those are increasingly at odds as a product evolves. The first challenge is just in finding the screen real estate for new capabilities as discussed above. More likely, however, is that new capabilities will be subordinate to existing ones in terms of surface area. This leads to affordances such as first-run overlays to explain what all the product might do or what gestures are available, which is itself added complexity (and engineering!). One also sees new capabilities with a disproportionate “front and center” placement in an effort to increase discoverability. Often this results from a “marketing” need to drive awareness of the very features being used in outbound marketing efforts.

Over time, A/B testing or usage data will then drive additional change as features are rotated out to make room for new. This all seems quite natural, but also clearly drives complexity or even confusion. This in turn raises the challenge of even changing a product in the first place. Most Google productivity tools we use experience this challenge. Gmail and Apps are increasingly complex and it is getting more difficult to discover capabilities. Historically Google had Labs features to explore new areas and now even Inbox is a whole new experience for mail.
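The A/B testing mentioned above typically rests on deterministic bucketing, so a given user always sees the same variant of the experience. This is a generic illustration, not from the post; the function, experiment, and arm names are invented:

```python
# Deterministic A/B bucketing sketch: hash the user and experiment
# together so assignment is stable across sessions without storing state.
import hashlib

def bucket(user_id: str, experiment: str, arms=("control", "variant")) -> str:
    # Hashing "experiment:user" means the same user can land in different
    # arms across different experiments, but never flip within one.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return arms[int(digest, 16) % len(arms)]
```

Because assignment is a pure function of the inputs, the feature being rotated in or out stays consistent for each user while the aggregate usage data accumulates.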


Increasing tolerance of complexity. Everyone loves simplicity, and certainly every designer's goal in creating a system is to maintain the highest level of simplicity while providing the right functionality. Over time there is no way to remain simple: as more features are added, the ability of someone new to the system to command it necessarily decreases, and the share of the system's breadth that any one person uses shrinks. Nevertheless, people become accustomed to this growing complexity. It creates a moat relative to new entrants and a barrier to change. People loved the ironically named Chrome browser when it arrived because it was so clean and simple. Few would argue that level of simplicity remains today, yet the complexity is embraced and there's little opening for a browser that provides less functionality.

For all the criticisms directed at the complexity of Microsoft Office, few switched away to products that do less simply because they were simpler. If you ever doubt the ability for people to tolerate more complexity, then just look at the old version of any famous site, app, or program. You’d be amazed at how sparse it is. The pressure to reduce and simplify comes from everywhere with technology products, but sometimes a failure to embrace a level of complexity can prevent important and strategic change. The most adaptable part of the entire technology stack is the human being at the very top.

Isolating change leads to complex analysis of benefits. A user experience, new or changed, is almost always viewed in isolation. New UX is viewed relative to the small number of initial capabilities and the ease with which those are accomplished compared to existing solutions (i.e. making a voice call on the original iPhone). Changes to products are viewed through the lens of "deltas" as we see in reviews time and time again — reviews look at the merits of the delta, not the merits of the product overall relative to new scenarios that might be more important and old scenarios that might be less important now (as user needs evolve). When viewed in isolation, change is amplified, which then makes change more difficult to execute, absorb, or even accept.

Isolation results in intense levels of discussion among technologists as alternatives are proposed (after the fact) even for very small changes. More importantly, this dialog amplifies the value of small changes which in the scheme of things will do nothing to improve the business and everything to prevent larger and more strategic changes from happening. Platforms providing horizontal capabilities to broad audiences are notorious for these debates in isolation. Consider the transition iOS made to a new visual design, which is now a distant memory.

Competing products and/or changing expectations. The biggest and most important driver for change is the external market force of competition. The previous drivers are all within your own world view — these are changes you are driving for products you control with inputs and feedback you can monitor. The competitors you view as strategic are incredibly important inputs relative to the longer term viability of the business.

The fascinating thing is that your best customers are the least likely to be worried about your longer term strategy, especially if they have bet their jobs and are empowered by your product. In fact, they will be just as “dismissive” of competing products or new approaches or solutions as your highly paid sales people that are continuing to close deals or the self-taught expert who can’t wait to join the product team. As a technologist you know that your product will be replaced or superseded by a new product and/or technology. It is just a matter of time.

The most important thing to consider is that it is almost never the case that your direct competitor will serve as motivation for changing expectations. The pressure to change will come from unexpected substitutes or newly crafted combinations of a subset of existing capabilities. Ironically, most all of your inputs will come from people and members of the team/company focused on your direct competitor (and the bigger your presence the more likely this will be).

5 Ways To Prepare

The first rules of product design relative to change are to expect it and to prepare for it. It is commonplace to remind ourselves in product design that the enemy of the good is the perfect, but relative to evolving experience the enemy of the good is the past.

Assuming that today’s user experience encompasses the value of your product tomorrow is certain to get you in trouble (just as assuming some specific code or API is the core of your value). A comforting way to approach this is to remind yourself that before your current successful user experience there was a successful experience that was widely used. People gave up that product to learn and use your new and different product.

The following are five ways to prepare your design for a future that will require you to change:

  • Solve the n+1 problem up front
  • Design for the choices you know about
  • Optimize only to a point
  • Decide your app strategy early on
  • Flat is your friend

Solve the n+1 problem up front. One of the most common times a new feature causes UX churn is when you're adding something you knew about but didn't have time to engineer. I call this n+1 because across the product there are places where your experience (and code) assumed a finite number of choices and then down the road you find you need one more choice. Commonly this shows up in choices like photo filters, email accounts, teams/channels, formatting options, and so on. These changes are recognizable when you go from no choice to a choice, or from a binary/ternary choice to a full list.

The warning signs for this potential change come very early on because you either cut the feature or the feedback is everywhere. It is almost always the case that these are core flows in the experience so designing up front can be a big help. Incidentally, this also holds for engineering and the product architecture where the highest cost additions are often when you need to go back and engineer in a level of indirection to solve for choice where there was no architecture. Some might see this as counter to MVP approaches but nothing comes without a cost.
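The level of indirection described above can be made concrete with a small sketch. This is a hypothetical illustration (the filter names and functions are invented, not from the post): version 1 hardcodes a binary choice, while version 2 routes through a registry so that choice n+1 becomes a data change rather than a redesign of every call site.

```python
# v1: the code (and the UX) assume exactly two options — a boolean.
def apply_filter_v1(photo: str, sepia: bool) -> str:
    return photo + "+sepia" if sepia else photo

# v2: the same feature with the indirection engineered in up front.
# Adding filter n+1 now means adding one registry entry, not reworking
# every place that assumed a binary choice.
FILTERS = {
    "none": lambda p: p,
    "sepia": lambda p: p + "+sepia",
    "mono": lambda p: p + "+mono",
}

def apply_filter_v2(photo: str, name: str = "none") -> str:
    if name not in FILTERS:
        raise ValueError(f"unknown filter: {name}")
    return FILTERS[name](photo)
```

The same shape applies to email accounts, channels, or formatting options: the registry (or list) is the architectural seam that keeps the eventual "one more choice" from forcing churn in both code and UX.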

Design for the choices you know about. As a corollary to above, there is a design approach that says to leave room for the unknown, “just in case”. Such a design often leaves open space in the interface that stares at you like an unused parking spot at the mall. At first this seems practical, but over time this space turns into an obstacle you must work around because nothing ever seems to meet the bar as belonging in the space.

On the other hand, this also serves as a place where everyone on the team is battling to elevate their feature to this valuable “real estate”. Even more challenging, your best fans will have a million ideas for how to fill up this space (and renderings to demonstrate those ideas), and too often that amounts to using it to provide shortcuts to existing functionality. Preparing for what you don’t know by compromising the current does little to postpone the inevitable redesign and does a lot to make the current design suboptimal.

Optimize only to a point. Optimize to a point and recognize that you will change, and assume that the vast majority of input will be focused on areas you were not really expecting (someone on the team probably was expecting it, but not everyone). In preparing for a future of change, one of the most difficult things to do in design is to recognize that where you are at a given point in a development cycle is good enough to ship. Stop too soon and the risk of missing is high. Stop too late and the reluctance on the team to change down the road is only increased because of sunk costs and too much historical baggage.

The most critical rule of thumb in product design is that a product releases “as is” and does not come with all the designs you considered or could consider. When it comes time to disrupt yourself with significant changes, do not underestimate the amount of institutional inertia that will come from a few years of researching and testing every possible alternative to a design. The expression often used, “peacetime generals are always fighting the last war” applies to design and product choices as well.

Decide your app strategy early on. A strategic question facing any broad-based product will be how many mobile apps you need. In the enterprise, if you're building a full ERP system there's no way to have a single app, but it could also be very easy to create a sort of app shrapnel and replicate the 1,500 legacy web sites that the average large corporation maintains. If your product has either a desktop or web solution and apps are being added, you have to decide early on if your app is a scenario-based companion or the primary/only way you expect people to use a service — you might be considering a mobile capture app to go with a web-based analysis app, or keeping the admin tools on the web to accompany mobile workers in apps.

It is very difficult to switch mindsets down the road, so this choice is key. A valuable lesson (in disruption) was learned during the transition from desktop to web. The prevailing broad view that web apps would be supersets of desktop apps proved to be true, but it took about three times as long as people thought. If you believe your mobile app is a companion to your site, just be prepared for a large number of customers that only want to access over mobile even if they are not doing so today.

Flat is your friend. Programmers and designers often love hierarchy — hierarchy helps our computer brains to organize and deal with complexity and most techies have no problem navigating hierarchy. Unfortunately most people long ago failed to grasp the Dewey Decimal system and search seems to win out over hierarchical organization in most every instance. Aside from that, the most frustrating changes to experience come when you reorganize a hierarchy (trust me on this one).

Hierarchy is the source of muscle memory and also where much of a sense of mastery comes from. The power users are the people that know where features are hidden or how to drill through panels to find things. Hide and Seek or Concentration are great games for the right people, but a poor way to do user experience. A solid approach to avoid a future reorganization is to see how flat you can keep your experience. SEO or A/B testing (or marketing) will always push to keep things above the fold, oddly motivating hierarchy, rather than favoring scrolling, which most everyone understands. The alternative of a click/tap to a new place is far more disruptive, both today and down the road.

We all wish we could be fully informed about the way we will evolve our product and the way competitors will provide a unique view into the space we are going after. That is never as easy as it could be. The above are just a few ideas to consider if you start from the mindset that once you achieve success you will end up going through a user experience change.

5 Approaches To Avoid

With a successful and deeply used product, when you do make a significant change to your experience the feedback is often swift and clear, and universal praise is exceedingly rare. As a result, the product team discussion will move from expressing frustration to proposing solutions very quickly. Even if you were expecting some pushback, it is never pleasant. At that moment, there is a limited design vocabulary available to make unplanned adjustments, perhaps even more limited than the engineering time you have to execute (assuming that as with most projects you are under pressure to complete all the new stuff).

Similarly, you might anticipate some pushback and consider a proactive approach with objection handlers or scheduled time for feedback. Regrettably, the solution set is the same since the problem is the change itself, not the way you are changing. The only difference is that the more you engage in defensive engineering efforts the less time you have to get the new work done. More importantly, time spent on salves or bridges only takes away from the existential competitive dynamic that is motivating the need for change.

These potential solutions all arise from the same place, which is that your early adopters, best customers, and front-line sales are all successful with your product and resisting a big change. The resistance is natural — the feeling of empowerment and familiarity. Remember, the reason you are making changes is because your successful product, in your best judgment, is facing an existential threat. This threat is not coming from these early voices, but from the customers you have failed to acquire or are not likely to ever acquire. You are making changes to support future growth, not to incrementally improve things for your existing customers.

While the 5 approaches outlined below are typical, they often backfire in predictable ways which is why they should be avoided:

  • Add a new mode
  • Offer customization
  • Solve with a UI level of indirection
  • Downplay the changes
  • Redesign quickly

Add a new mode. Enthusiasts, marketing, and enterprise customers have no problem with change so long as you add an option they can use to get back to the old way of doing things. This feedback can be pretty sneaky. They will say that the option can be hidden and hard to get to because it is really only for power users or admins, or just an objection handler for the sales process. They might even tell you that you can take away the option after some time to adjust. By the way, another variant of this request is to just provide an option to “hide the new stuff”. You see how this is a sneaky ask — eventually there will be something in the new stuff that even these customers will want but without interfering with the existing “old” way. It certainly seems lightweight enough.

The challenge is twofold. First, once you can get back to the old way of doing things, everyone will want to know why that option exists. "Is the new way not good enough?" will be a common refrain. Second, once you have such an option you are designing for two experiences all the time. Everything you add needs to consider both the old way and the new way of getting to new product areas. Not only is this super difficult, it is expensive and it takes away from forward-looking strategic needs. In general, modes, whether user-directed or contextual, are a way to postpone making a strategic choice about the future of your product and advertise your own indecisiveness.

By the way, technology enthusiasts love modes because modality (I suppose dating back to vi) implies hierarchy, control, choice, and a priori knowledge of where you are heading in the product that most people don't have. Almost all choice and modality is ultimately ignored by customers, and when the product magically switches modes there is almost always a level of frustration that comes from the unexpected behavior change (even think of the cleverness of having views defined by portrait or landscape, which tends to be confusing in practice).
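The "designing for two experiences all the time" cost shows up directly in code. This is a minimal, hypothetical sketch (the toolbar and feature names are invented): once a classic-mode switch exists, every new capability needs a decision, a design, and a code path for both worlds.

```python
# Once a "classic mode" option ships, every surface forks. Each new
# feature must now answer: does it appear in the old mode, the new mode,
# or both — forever.
def render_toolbar(classic_mode: bool) -> list:
    items = ["open", "save"]
    if classic_mode:
        # the old way, kept alive only to satisfy the option
        items.append("print_preview")
    else:
        # the new strategic direction
        items.extend(["share", "comments"])
    return items
```

Every such branch doubles the design surface and the test surface, which is exactly the expense that makes the mode a way of postponing, not making, the strategic choice.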

Offer customization. Customization permits you to make big changes with three mitigations.

First, you can allow people to customize your functionality one setting at a time until it returns to where it was. Often this is how a product evolves as it tries to automate previously multi-step processes. For example, if you used to manually turn off the lights at home via some IoT app but then add machine learning to guess, the first time the lights are wrong the answer will be to disable the new automation (AutoCorrect in Word was like this). You need to get something right or handle it gracefully when you’re wrong, but turning it off means it will never be successful.

Second, customization is often used to rearrange the user interface to get it back to where it was before, when it was good. Maybe you took away a share button to use the one provided by the OS, or maybe you added a few tabs to the top-level UI; well then, just add a switch to move things around.

Third, customization can be used when you want to add something and the team can't even decide whether it is a good idea or not, so you add your own way to turn it off or hide it. All of these have the same downstream problems: setting a risky precedent that can't be maintained (i.e. everything new comes with a way to change it) and adding combinatorics that can't be managed (the testing matrix).

Making up your mind is the best approach. Longer term, the disruptive innovation will come when the new product subsumes your product, and at that time customizing how your product works will prove to be extraordinarily low-level, almost like a debugger. If you think this will never happen, look at all the options in Office that we totally sweated over. Again, technologists love customizations, so you are almost certain to get positive feedback and strong encouragement to provide customization.
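The testing-matrix problem is easy to quantify: each independent on/off customization doubles the number of product states to verify. A minimal sketch (toggle names invented for illustration):

```python
# Each boolean customization doubles the configuration space, so the
# matrix of states a tester (and every new feature) must consider grows
# as 2**n.
from itertools import product

toggles = ["classic_mode", "hide_new_tab", "legacy_share", "disable_autocorrect"]
configs = list(product([False, True], repeat=len(toggles)))
print(len(configs))  # 2**4 = 16 distinct states for just four switches
```

Ten such switches already yield 1,024 combinations, which is why "just add an option" is never the lightweight ask it appears to be.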

Solve with a UI level of indirection. The hamburger, Tools|Options, right click, sub-menus, and more are all ways of adding things without adding things or hiding things you add but don’t want to add. There’s no magic answer to where to squeeze all the things you need in a design into a (very) finite space, but for sure if you find yourself putting something new behind a level of indirection then think twice.

Once you think you can get away with “change” by putting it behind a level of indirection then you might as well not do it. Sometimes this type of approach takes place in enterprise products where you are responding to a competitive dynamic but you don’t agree with the competitor or the competitor’s approach is at odds with your overall design. The theory is to add the checkbox but not break your overall model or experience. Only you can judge whether you are seeking credit with reviewers and analysts or actual humans but be careful thinking that you have solved the problem and not just created a future problem.

The discoverability of your work hidden behind a level of indirection is minimal so always ask if you’re doing the work for customers or to make yourself feel like you’re addressing a business or customer need.

Downplay the changes. If you go through the work to understand why you are going to make a big change, then design and engineer a change, the very worst strategy is to downplay the effort when it comes time to communicate what you did.

If you make a big change and talk about it like it is a little change, then many will wonder if you are confused and/or lack empathy. The challenge is that this one is the easiest of all responses since it is simply a different tone or wording in a post describing what is changing. You choose the right screen shots or feature names and things can look more familiar. When customers who were expecting the same or only incremental change see what you've done, the backlash tends to increase, and the internet loves a good dose of corporate "wool over the eyes of customers".

Since significant strategic challenges are driving this change, backing off is a worst of all worlds reaction — you send the message to the market that you’re fighting the last war, thus not engaged in the future, and customers come to expect that as well.

Redesign quickly. If how you communicate the change is the short term response, then the medium term response is to quickly redesign what you just spent a lot of time designing. How quickly can you back out the most egregious changes? How can you undo things with as little engineering work as possible? What if you just added a couple of old things back front and center? These are all things that will be rapidly floated within the team.

The most fascinating aspect of this response is that this is what the internet will do for you, both quickly and broadly. The reason is that once a new design is put forth, incremental changes to that design along the lines of “do this instead” will be offered by the community that is empowered by your product. There will almost certainly be good ideas amongst all of these, and even more likely they will be alternatives you considered (we never really learned why Apple so steadfastly refused to highlight the shift state on the iOS keyboard but it is impossible to believe this was not discussed).

Design and engineering are difficult and we all know that the likelihood of mistakes increases with the pace of reactionary change.


As this post keeps saying, the reason it is so painful and frustrating to change user experience is because right at the moment that your product has successfully reached the point of being empowering and critical to the jobs and lives of your customers, it is also facing the most existential competitive and marketplace challenges.

The reality is you have to respond to the marketplace. You can choose to continue to iterate on the same path with the same customers. In the technology world that is, with as much certainty as you can count on, a focus on a shrinking market. Disruption is real, and so far it is proving to be much more of a law than a theory.

Is the time to change right? Is the design you chose the right one? Are you focused on the right strategic competitors? The other reality of technology change is that most often what keeps a product and company from transitioning from one generation to the next is not a lack of understanding or an inability to debate these choices, but the ability to execute across product, engineering, marketing, and sales.

The really good news about all of this is that if you can nail the product change and the go-to-market execution, short-term memory is a real thing, especially in a growing market. If you can make changes that secure new customers and grow, or that your typical customers can adjust to without "incident", then there's a really good chance that memory will be short term.

Those customers that chose to stick with character interfaces or would not move off a web app got left behind by graphical and mobile users. This happens with every technology platform shift and within every category. Growth is the friend of change, and if you're not growing you are by definition shrinking.

I’d like to add one last reality for everyone who both made it this far and is out there critiquing new designs for products they use and love. The people working on products you love are on average as good as you, as thoughtful as you, and as informed as you. They are all open to feedback and good two-way discussion. Treat them the way you would like to be treated in the same situation.

Steven Sinofsky (@stevesi)

Author’s note. I’ve never used a music lyric quote and don’t mean to steal Ben’s intro, but this quote from this song has special meaning to me in this particular situation.

This post originally appeared on Medium.

Written by Steven Sinofsky

February 8, 2016 at 12:00 pm

5 Ways to Compete With [Big] Incumbents

In The Stack Fallacy: Why Big Companies Keep Failing, Anshu Sharma writes about how difficult it is for a [big] company to move up the stack to adjacent businesses/product categories by building on their successful base. If you are competing against one or more incumbents, even if you believe they will ultimately fail because of this fallacy, it is still an incredibly challenging competitive situation. Turning some typical incumbent weaknesses into your competitive strengths can increase your potential for success when you are the next part of the stack a big company takes on.

In a competitive environment, often a "checklist" battle dominates. This is especially true if you are competing with an enterprise incumbent. There are many ways to compete with a company that has more resources, existing customers, and access to broad communications channels. You can be systematic in product choices and communication approaches and improve your overall competitive position.

You can think of these as the Jiu-Jitsu of the Stack Fallacy — using the reasons competitors can fail as your strengths:

  • Avoid a “tie is a win”
  • Land between offerings or orgs
  • Know about strategy tax
  • Build out depth
  • Create a job-defining solution

This post is mostly from an enterprise competitive perspective, but the consumer and hardware dynamics are very much the same. While some of this might seem a bit cynical, that is only the case if you think about one side of this battle being better than another — in practice this is much more about a culture, context, and operational model than a value judgment.

Avoid a “tie is a win”

The first reaction of an incumbent (after ignoring then insulting the competition) is to build out some response, almost always piggy-backed on an existing product in an effort to score a “tie” with reviews and product experts (in the enterprise this means places like Gartner). The favorite tool is the “partner” or “services” approach, followed by a quick and dirty integration or add-in. Almost never do you see first party engineering work to compete with you, at least not for 12–24 months following “first sighting”.

Their basic idea is to clear the customer objection to missing some feature and then “get back to work”. In enterprise incumbent-speak, “a tie is a win”.

The best way to compete with this behavior is to take head-on the idea that a checkbox or add-in does away with the need for your service — worse, such an implementation approach will almost always be insufficient over time and hamstrung by the need for integration.

Don’t worry about your competitor pointing out the high cost of your solution or the burden of something new being brought into the enterprise. Both of those will become your strengths over time as we will see.

Land between offerings or orgs

The incumbent’s org chart is almost always the strongest ally of a new competitor. The first step is to understand not only where your competitor is building out a response, but the other product groups that are studying your product and getting “worried”. Keep in mind that big companies have a lot of people that can analyze and create worry about potentially competitive products.

You can bet, for example, that if you have any sort of messaging, data storage, data analysis, API, or visualization and compete with the likes of Oracle, Salesforce, Tableau, or other big company that several groups are going to start thinking about how to incorporate your product in their competitive dialog.

You can almost declare success when you hear from your customers that your product has come up in multiple briefings from a single company. Perhaps nothing travels faster in a large company than news of a rep losing a deal to a competitor, and that news drives tactical solutions equally fast — tactical because they are often not coordinated across organizations.

When you find yourself in this position, two things work in your favor. First, there's a good chance you will soon find yourself competing with two "tie is a win" solutions, one from each org — white papers talking about partners who can "fill in the gaps" or add-ins that "do everything you need", for example. No P&L or organization wants to lose a deal to a competitor.

Second, you will have time to continue to build out depth because the organizations will begin the process of a coordinated response. This just takes a long time.

The best thing that can happen at this point is if you have a product that competes with two larger companies. In that situation, you can bet that you are the thing those companies care the least about and what they care the most about is each other. You might find yourself effectively landing between many organizations, and that spot in the middle is your whitespace for product design and development — go for it!

Know about strategy tax

Once an organization grows and becomes successful, one of the key things it needs to do is define a reason for the whole to be greater than the sum of the parts. The standard way incumbents do this is to have some sort of connection, go-to-market, feature, or common thread that runs through all the offerings. This defines the company strategy and the reason why a given product or service is better when it comes from a particular company (and also the reasoning behind a company being in multiple businesses).

In practice, the internal view of these efforts quickly becomes known as a strategy tax. From a competitive perspective these efforts are like gifts in that they make it clear how to compete. For example, your product might have integrated photos but your competitor needs to point customers to another app to deal with photos. Your product might be supported by channel partners but your competitor will only sell direct (or vice versa). This can go to an API level, particularly if you compete with a platform provider who is strategically wedded to a specific platform API.

A classic example for me was the Sony Memory Stick. If you were making any device that used removable storage then you were clearly going to use CF or SD. But there was Sony, marching forcefully onward with Memory Stick. It was superior. It had encryption. It had higher capacity (in theory). At one point after a trade show I left thinking they are going to add Memory Sticks to televisions and phones, and sure enough they did. What an awesome opening if you needed to compete with a Sony product.

A strategy tax can be like a boat anchor for a competitor. Even when a competitor tries to break out of the format, it will likely be half-hearted. Any time you can use that constraint to your advantage you’ll have a unique opportunity.

Build out depth

The enemy of “tie is a win” is product depth. Nothing frustrates an incumbent more than an increasingly deep feature set. Your job is to find the right place to add depth and to push the incumbent beyond what can be done by bolting capabilities into an existing product via add-ins, partners, or third parties.

Depth is your strength because your competitor is focused on a checkbox or a tie, figuring out the internal organization dynamics of a response, or strategizing how to break from the corporate strategy. While you might be out-resourced, you are also maniacally focused on delivering on a company-defining scenario or approach.

The best approach to building out depth is to remain focused on the core scenario you brought to market in the first place. For example, if you are doing data visualization then you want to have the richest and most varied visualizations. If you have an API then your API should expose more capabilities and your use of the API should show off more opportunities for developers.

There’s a tendency to believe that you need to build out a broad solution, and to do that early on. The challenge is that this takes you onto the incumbent’s turf, where you need to build not only your product but the existing product as well. So early on, push the depth of your service and become extremely good at that — so good that your competitor simply can’t keep up by using superficial means to compete. This example from Slack crossed my feed today and shows the depth one can go to when there is a clear focus on doing what you do better than anyone else.

Your goal is to expand the checkbox and to move your one line of the checkbox to several lines. This is how you change the “tie is a win” dynamic — with depth and ultimately defining a whole category, rather than one item.

Create a job-defining solution

When building a new product and company, one of the most significant signs of success is when your product becomes so important it is literally someone’s job. Once you become a job then you are in an incredible feedback loop that makes your product better; you have an opportunity to land and expand to other parts of a big company; and you have an advocate who has bet a career on your product.

New products have a magic opportunity to become job-defining. That’s because they enter a customer’s organization to solve a specific problem, and if that problem gets solved then you have not only an advocate but a hero within the company. Pretty soon everyone is asking that person how they get their job done so much better or more efficiently, and your product spreads.

The amazing thing about this dynamic is that it often goes unnoticed, because rarely are you entirely replacing something already in use; you are simply augmenting the tools already in place. In other words, the incumbent simply goes about their business thinking that your product just complements their existing business.

This obviously sounds like a big leap to accomplish, but it speaks to the product management decisions and how you view both the product and the customer. With enterprise products it is almost always a two-step process. First you solve the specific user’s problem, and then you solve the problems the IT team has in using the product as part of a business process (i.e. authentication, encryption, mobile, management).

This works particularly well because your incumbent’s product has already achieved this milestone, and it is their product that (a) is not working and (b) almost certainly defines some other function’s job. It is another way of landing in the whitespace of the organization. The person whose job is defined by your competitor’s product is not looking for more to do, especially not someone else’s job, so you have some clear road ahead.

The challenge existing winners face in breaking into new or adjacent businesses is real and difficult; very rarely does it happen. Given the inherent obstacles, both technical and cultural, new products have specific entry points from which to compete.

Steven Sinofsky (@stevesi)

This post originally appeared on Medium.


Written by Steven Sinofsky

January 29, 2016 at 9:44 am

Posted in posts



AJ Shankar was busy working on his PhD thesis at the University of California, Berkeley in the prestigious Programming Systems Lab, where he published a number of important papers in OOPSLA and PLDI. As a big fan of side projects, he also caught the maker bug.

One of those projects was working as a technical expert for a leading Seattle-based law firm. It led AJ to ask what every entrepreneur asks, “How can this be improved with software?” That’s where Everlaw got started, in 2011.

The problem: A world of legacy software for lawyers

The legal profession — particularly the area of litigation and trials — is a costly, complex, labor-intensive, and, frankly, error-prone process. Beyond that, it is steeped in the complexities of individual courts and jurisdictions dictating, sometimes at the trial level, how technology can be used. Having personally worked through the transition from WordPerfect to Windows over the better part of a decade, I know the challenges of bringing technology to this highly knowledge- and people-intensive process are significant.

AJ and his co-founder, practicing lawyer Jeff Friedman, a former Assistant U.S. Attorney and corporate counsel, know these challenges well from their experiences. They set out to invent something that meets both the demanding technical needs of litigation along with the unique business requirements of law firms, which often do not have the resources or skills required to manage complex software deployments.

In fact, complex deployments of on-premises software define the current state-of-the-art in litigation support software. Anyone familiar with modern software would look at this “state-of-the-art” and see architecture from another era. That’s not to say those solutions do not provide value and make money, but AJ and Jeff see a far better way.

There is also a need for modern solutions to deeply technical problems — such as searching terabyte corpora for relevant documents (the state-of-the-art is mostly keyword search) or identifying clusters of relevant documents based on machine learning techniques (versus relying on humans to manually sift through and connect millions of documents). Historically, an industry vertical with such a legacy business model and architecture (i.e., very slow to change) would have a very hard time attracting top computer science talent to improve the space.

Law firms also need software to solve the modern problem of “big data.” In this context, big data can mean millions of email messages, chat transcripts, voice mail recordings, scanned documents, entire data sets and social media feeds, and much more. Those are the artifacts of the legal discovery process that flow across both sides of the aisle in ever-increasing volumes. These volumes are beyond what many law firms can deal with, and as some might know, producing large amounts of data can often be part of a legal strategy used against smaller firms.

Finally, the pace of change for software in the litigation industry needs to increase. The model of one-time, slowly updated on-premises software simply isn’t compatible with the fast-paced changes in technologies that can help legal. Part of the legacy world the legal profession faces is the same that any enterprise faces: A desire to move away from high, up-front product costs and transition to a cloud and software-as-a-service (SaaS) model.

The solution: Bringing cloud innovation to lawyers

Everlaw architected a solution that starts from customers: attorneys at small firms, large firms, state offices, and on both the defense and plaintiff side of cases. AJ’s technical background and Jeff’s real-world experience as an attorney proved to be a great place to start. To begin the journey, Everlaw assembled an engineering team of hard-core computer scientists, many from UC Berkeley.

In the Andreessen Horowitz pitch meeting, it turns out a lot of the former CEOs, execs, and founders have been involved in litigation. Our collective experience, especially as defendants, led to an immediate bond with AJ as he detailed the Everlaw solution. Many of us have been through the boxes of documents and questions from counsel about “discovered” documents. We knew how difficult the process was and we loved when AJ detailed Everlaw’s approach:

  1. Bring together core computer science experts from natural language, machine learning, and full-stack development to architect the system.
  2. Build innovative experiences that start with the process of ediscovery and provide a platform for an end-to-end solution for attorneys to collaborate as a case is developed.
  3. Deliver Everlaw as an incredibly secure, highly reliable, totally scalable cloud-based SaaS service.

So far, their experience with customers has been amazing. Since most attorneys are part of the world of mobile and cloud experiences, as soon as they see Everlaw, they see how much easier, faster, and higher quality their trial preparation and work can be. In fact, customers usually say “why did it take so long” or “this is how it should work”. AJ has written a post that includes more details on the company’s vision and the success to date.

At Andreessen Horowitz, we are always incredibly excited to see technology founders taking on the hard work of reimagining an industry. It is clear that mobile, machine learning, and cloud delivered via SaaS will revolutionize every vertical, including legal. We love the work that AJ, Jeff and the Everlaw team have done to bring such high-powered efforts to an incredibly important part of the economy.

For those reasons, we could not be more excited to be partnering with Everlaw and leading their Series A funding round, joining the existing investors. I am super excited to be joining the Everlaw board to support their ongoing work. Software eats legal.

Steven Sinofsky (@stevesi)

This post originally appeared on a16z.com

Written by Steven Sinofsky

January 14, 2016 at 10:23 am

Posted in a16z, posts


CES 2016—Observations for Product People



This is not me.

CES is the best place to go to see and learn about making products. In one place you can see the technology ingredients available to product makers along with how those ingredients are being put together and how they are interacted with and connected to customers.

I love going to CES and walking the show floor north to south, convention center to Sands and seeing and touching the products, including the way random show-goers perceive and question what is out there.

As much as I love attending, I also love taking a step back and thinking (and writing) about what I learned. Doing so provides great context for me in working with startups on their products, talking with enterprise customers about their needs, and partnering with bigger companies to enhance their go to markets.

As a reminder, CES is not a big electronics store, nor is it a research lab; it is somewhere in between. While there are many ready-to-buy products on display, most are not yet ready to use. Many of the most interesting technologies are not yet in products. Most companies are working to put forth their best vision for where things are heading. It has always worked best for me to think about the show directionally and not as a post-holiday shopping excursion. Equally important is keeping in mind that I’m not the customer for everything one can see.

This is a long post. The breadth of CES is unprecedented. The show is not “consumer electronics” or even “home entertainment” but every industry. Where else would you see booths from car companies, delivery services, film studios, computer makers, electronics component makers, cable TV companies, mobile phone carriers, microprocessor and chip makers, home improvement superstores, and so on? From startups to mega-caps, from every country, from supply chain components to complete products, everything is represented. The opportunity is unique.

CES has become a software show. Even the interesting hardware is dominated by firmware, cloud services, and connectivity. It is increasingly clear that if you’re interested in software you have to be interested in pretty much every booth. I’ve heard software is eating the world and that’s on display in Las Vegas.

The major observations impacting product makers and technology decision makers on display at CES 2016 include:

  • Invisible finally making a clear showing (almost)
  • Capable infrastructure is clearly functional (almost)
  • Residential working now, but expectations high and software not there
  • Wearable computing focusing on fitness
  • Flyable is taking off
  • Drivable is the battle between incremental and leapfrog
  • Screens keep getting better
  • Image capture is ubiquitous
  • Small computers better and cheaper for everyone
  • Big computers better but not game changing

Invisible finally making a clear showing (almost)

For many years much of the show floor was dedicated to the problems of where to store bytes, how to move those bytes around a network, how to type, or even how to convert bytes from one format or device to another. What’s most amazing is just how much of all of this is now simply invisible. The whole industry has moved up the stack.

If you go through all the winners of CES “best of show” (note, wow there are a lot of winners!) most all of them have a few things in common:

  • No local storage (for customers to deal with)—everything is cached from the cloud or streamed (i.e. no media servers, no hard drives, no formats to worry about, no backups to do). Yay!
  • No wires—everything is wireless. Even better, most everything is WWAN (mobile), Wi-Fi, or Bluetooth. This is infrastructure that is now normal—meaning not a point of differentiation or confusion—the mobile ecosystem and supply chain all but guarantee this connectivity and capability. Almost nothing has an RJ-45 network jack, and anything that might require one has some sort of wire-closet hub to separate the actual device from the wired connection. Most everything easily tunnels to your smartphone via cloud services. Yay!
  • No buttons—everything has a touch screen and there are few buttons to deal with. When a complex user experience is needed, it is almost always done with a mobile app (more on that below). What was amazing was just how rare it was to even see a keyboard and certainly gone are rows of rectangular buttons. Yay!
  • Almost no mains—a lot of focus is going to long-life batteries, solar, and certainly wireless charging. Many of the winners, such as Bluetooth location devices, cameras, and home automation/security, operate on batteries lasting one to two years. That’s long enough to probably never change the battery and just replace the device with the next generation! Put devices where you want and access them from anywhere. There’s a massive amount of cool engineering and clever approaches that go into being ultra low power. Yay!

This set of attributes represents the starting point for most any product. It is also a huge opportunity for consumers because it means the ability to adjust devices over time, even for residential equipment, is much easier than the past. Imagine when you move, you can just relocate your security camera, for example.

To be fair, there are some wires, but we are down to three: Apple Lightning, USB C, and HDMI. USB was ubiquitous throughout the show, and devices that should use USB C (like new PCs) but didn’t looked like they missed out. Given wireless video casting, even HDMI cables will fade into the background for most people. I’m beginning to think that Apple Lightning is looking more like FireWire: superior at the time, but the industry caught up faster than expected. It might even be the case that HDMI will move to the USB C connector form factor (not protocol). Going to/from these three cables is also easy, which is great.


USB C was everywhere.

One fun note is that quite often you see a product that seems clever and/or odd and then you see it again, and again. This year, I saw the identical USB charge station a dozen times. This is the China manufacturing and distribution system at work.

USB charge station seen all over the floor.

USB charge station for when you’re really serious about charging (C version on the way!)

Capable infrastructure is clearly functional (almost)

It is interesting to see the mobile supply chain’s relentless focus on continued integration drive very capable infrastructure into nearly every single device.

Going back, there would have been a CES where “wireless music” was a thing all by itself. Or maybe you recall a CES where just being able to have a camera was a big deal. Most probably remember when GPS was a “thing”.

CES 2016 shows that all of these scenarios have come together and basically in anything you want to make one can have all of these (and more) or pick and choose easily what capabilities to expose. From a base capability perspective this includes:

  • Attaching a camera and sharing captured images/video
  • Streaming audio
  • Controlling the power state and moving it around
  • Locating the device
  • Alerting those nearby with sound, vibration or those far away with mobile alerts
  • Lighting the device with tiny LEDs of any color that never burn out and consume little power
  • Uploading sensor data from the device
  • Sensing the environment

All of these are available to product makers and likely harder and more expensive to acquire discretely than they would be by essentially taking a mobile phone BOM and making a device. If you talk to the makers at the booths, most every device has more capabilities in hardware than is being exposed in the current release of software. Cameras are capable of 4K, SIM slots go unused, sensors collect but don’t share, and so on.

The big challenge is no surprise. Software development is unable to keep up with the hardware. What is going to separate one device from another or one company from another will be the software execution, not just the choice of chipset or specs for a peripheral/sensor. It would be hard to overstate the clear opportunity to build winning products using stronger software relative to competitors. Said another way, spending too many cycles on hardware pits you against the supply chain for most products.

Some of the devices that incorporate most of these capabilities include a rubber duck (speaker, remote control), knit cap (music player), light bulb (speaker, camera, climate), walkie-talkies (location, camera), power strip (remote control, telemetry, power usage report), and flower pot (soil water level, camera). The list goes on and on!

Looks like a rubber duck, but it is also a remote controlled streaming media player with kids apps!
Looks like just a knit hat, but a music hat that streams music.

Residential working now, but expectations high and software not there

The most visible example of the ecosystem of components, manufacturers, and distribution coming together is in residential—products to control, protect, and monitor the home. There were dozens of companies showing what looks to be essentially the same product:

  • Wi-Fi or WWAN base station that connects to and controls the sensors in the home while also communicating with a monitoring service
  • Door/window open/close sensors to detect entry
  • Water sensor to detect floods
  • Motion sensor to detect intruders
  • Outlets and switches to control lighting and outlets
  • Smoke/Fire/CO sensors for safety
  • Thermostat for environmental
  • And so on.

In addition, there are more specialized (and harder to make) controllers for legacy home systems like garage door remotes, water heater, sprinkler, and so on.

Plus there are cameras for security monitoring and doorbells and locks to control entry, though many systems struggle with offering and integrating those.

The reality is that all of these basically just work and provide evidence of the supply chain at work. These are offered by startups, white labeled to many local distributors who will handle installation, and all the major home improvement stores carry them. You might have even seen the pitch from Comcast or AT&T for these as well. There were at least a dozen full service companies on the floor.

They are all essentially the same offering. Well, except for the software, and that is where they are all quite different and where the “ready for prime time” evaluation needs to be done. While they all have apps, for many scenarios some of these apps prove quite awkward for basic control—to the point where it is more annoying than helpful to use them. For most customers, the app becomes secondary to more traditional keyfobs/dongles and PIN codes.

Once again, this shows where there’s opportunity to focus and potentially win.

Traditionally this has been an area where the reviews clamor for integration and synergy across devices. A couple of things became clear this year:

  • Since everyone can offer everything (due to the supply chain) the viability of the company becomes more important than worrying if the company will offer a particular sensor/controller.
  • Integration is happening through a very traditional “consortium” and as nice as this sounds it isn’t clear it is working particularly well. First, much of what makes these easy to use is the way each maker handles out of box setup (which is mostly outside the standard) and adding additional sensors over time. Second, the UX for managing sensors and controllers integrated by third parties is usually least-common-denominator compared to first party.

In fact, this year saw a significant change in integration. Last year most all home automation was integrated with Nest. While that is still the case, as most would note, the integration provided little useful capability and the “native” apps proved better. This year everyone integrated with Alexa from Amazon Echo. This made for compelling demos to turn lights on/off or adjust temperature. Time will tell if Alexa will be replaced next year or if Nest will up the level of integration.

IFTTT (an a16z portfolio company) was frequently used in demonstrations for conditional and multi-step scenarios. IFTTT replaces “custom installer” macros and other tools that have often plagued “home automation”.

Two great examples of this are programmable door locks and video doorbells. Both are logical integrations for the rest of a security system, and while basic integration over Z-Wave is possible, for most scenarios (answering the door, programming new combinations) the vendor-specific app is required. These are difficult-to-make products that need to fit into legacy infrastructure, so this is to be expected.

That said, because of the rising tide of infrastructure, the locks and doorbells have come a long way in the past year. Ring doorbell even released a battery operated (rechargeable) camera to accompany the doorbell (it is basically the motion-activated doorbell camera without the bell). Vivint has done good work to integrate Kwikset locks, a first party doorbell, as well as Amazon Echo to provide a more complete solution.

But for now, the base level capabilities are there and work across many providers. It is likely that these will further coalesce into a market where it is easier and better to get all the components from one company rather than trying to stitch them together. The good news is that this category is a pretty simple DIY project. The better news is that because of the SaaS revenue for monitoring, it is not hard to find an offering that comes with free installation (such as from Comcast).

Home integration happens with this in theory, though in practice the supply chain makes it easier to avoid cross-manufacturer integration if at all possible.

Example of one of many suppliers offering the full range of sensors and controllers.
Even at the low end, all the same sensors, detectors, cameras are available. 
Most cameras now combine motion detection and some machine learning to reduce false alarms. This camera is integrated into a traditional porch/doorway light so no extra wiring is needed.
In an example of the ecosystem at work, this same switch (based on the no-battery, no wires approach of enOcean) shows up in dozens of different systems.

Wearable computing focusing on fitness

The big news last year was all about “smart watches”. This year the focus of many of the same makers turned more to fitness and less about overall lifestyle.

There were certainly many connected measurement devices (body composition, weight, sugar levels, blood pressure, etc.) and every device is able to measure sleep (on your wrist, in your pillow, or in your mattress) or steps taken.

Unlike the home security sensors, there’s still a great deal of science to be done to correctly (accurately, precisely, reliably) measure humans and much science that is needed to make this information actionable. I continue to think we’re measuring more than we can consume and act on, especially on a constant basis.

It looks like the major band makers agree, and this year they became much more focused on the specifics of exercise. The biggest announcement came from Fitbit with the new Blaze wrist wearable, a “smart fitness watch”.

Fitbit Blaze wrist wearable for fitness

In addition, Polar, Garmin, Under Armour and more all had new/improved bands dedicated to fitness. Much of the technology is about adapting algorithms to understand what the telemetry means depending on the sport (i.e. how do you measure fitness goals from your wrist when doing weights).

My view is that these bands are doing amazing work for people who are hardcore about training in sports, but the vast majority of people won’t benefit from the charts and graphs that come only after a lot of setup work. Speaking purely from the point of view of improving the average person’s fitness, a scale and blood pressure monitor seem more important. For most people, just walking for a fixed amount of time would be an improvement, and a watch focused on timing laps for multiple sports is unnecessarily complicated and potentially demotivating. The support that comes from the community aspect of basic measurements and activities is a documented and well-known benefit, but that wasn’t the focus of the products on the floor.

The other aspect where these bands both differentiate and are still searching for broad fit is software. In some sports, sharing times (rides, trails, etc.) is part of the hardcore enthusiast experience, and so the community aspect is important. Again, though, that isn’t necessarily a mass consumer scenario.

I’m certain that the medical physiology (what measurements mean), sensor technology (how to measure), and medical research (how to act) will continue to evolve in this space. The longer term goal of a device that tracks meaningful body telemetry that regular people can act on themselves is not far off.

Fitness monitoring is not unique to humans. There were a number of products to help monitor your pet with a pet wearable.

Connected Pet—monitor what your pet does during the day for better fitness.

Flyable is taking off

Drones were more numerous and more capable than last year. As much as the category is maturing, it is worth noting how early this really is.

There are two large players in Parrot and DJI who commanded a significant presence on the floor. Beyond that, once again we can see the supply chain at work as there were countless companies with largely similar products.

The most common experience in the drone booths would be to watch someone come up to a company rep and ask about the range and then follow up asking about the payload. I must have seen this 20 times, and each time the person walked away disappointed, as if they were hoping this was the magic booth that had the drone that really could deliver groceries or fly cross-country. The other question was how autonomous the drone was, and the answer was always disappointing.

The vast majority of what is going on is still in the realm of traditional radio controlled (RC) flight in new form factors with amazing cameras (made possible by the influence of the smartphone supply chain). Even the major vendors are still in the early stages of the basics of geofencing, route planning, and other scenarios focused on safety.

There’s clearly a product development cycle analogous to PCs v mainframes/minis happening. Drones are never going to be jets or general aviation, just as PCs were never going to be mainframes. When something sees a 10x increase in usage/adoption the new product is always much different at that scale.

On the other hand, things will not evolve so fast and loose the way PCs did because drones share the same airspace as jets (in a way that PCs never shared with mainframes). That’s why I think this evolution will see more “real” aviation get pulled into drones much sooner than we saw mainframes (i.e. servers and server hardware attributes) pulled down into PCs. Reliability, safety, and more will need to happen sooner rather than later. Piloting a drone will be a profession, not a hobby, until they can really pilot themselves (but even then…). With that there will be opportunities.

Like other categories, the difference between companies is not so much in the supply chain components or even the manufacturing/integration but in the software platforms. In the case of drones, what stands out is how minimal the software still is. There’s still a lot of “pro-sumer” work that needs to happen to get the full cycle of sensors, flight, and data gathering all working. One example of this at work was Parrot demonstrating their work with senseFly (a Parrot company) for agriculture.

Another example was this complete “police surveillance operation” kit from Flymotion. It offered the full command center for monitoring in the case of disaster or other need.

Flymotion’s complete police surveillance drone system.

One of the most crazy and unexpected drones was from EHANG, a China-based company (co-founded by an ex-Microsoftie!). Their product is a single-passenger drone — basically an autonomous Uber-drone. You get in it and it flies you somewhere. Totally mind-blowing. Given the differences in regulatory climates, this product is making fast progress in China and is already airworthy. I don’t often post pictures of me, but here I am to give you a sense of scale of this one.

Here I am exiting after checking out the single passenger EHANG drone.
Another image of EHANG’s drone from their web site.

Next year is going to be an incredibly interesting year for drones. That is certain!

Drivable is the battle between incremental and leapfrog

Back down on earth and on the roads, the biggest battle in the global economy is over the next generation of “car” transport. Given the size of the market and the importance car companies played in the 20th century, it is obvious why so much focus is on self-driving cars or on alternatively powered cars (or both at the same time).

All this coverage needs to be put in context of what was on display at CES. First, it is remarkable that car companies are using CES as a platform for announcing their autonomous work and general innovation in driving—while autos (and the Detroit supply chain) have been at CES for years it was always in the context of the after-market accessories or in building better premium “electronics”.

Second, while the whole North Hall of the convention center is devoted to cars, the vast majority of what is on display is traditional after-market customizations and even standard cars. FCA’s center stage was an interesting revamp of a Jeep interior, independent of autonomy or alternative power, for example.

The most interesting topic to ponder is really the nature of the disruption taking place. Existing auto companies are seeing every aspect of their business upended. On one hand, all of their expertise in engines, interior design, and drive trains is called into question by electric cars. On the other hand, autonomous driving challenges these car makers' fundamental business model. Together these disrupt the entire process by which cars are built: a supply chain of parts makers, product managers, brand managers, dealer franchises, and more that has been built up over 100 years.

It is one thing for GM to show a Bolt, which by all accounts looks amazing. Or similarly for VW to show the BUDD-e van (373-mile electric range, recharging to 80% capacity in about 15 minutes, and a top speed of 93 m.p.h.). But it will be quite another to deliver these at scale, sell them, and change the pricing and business models along the way. That's just super hard for any company to do. As a reminder, light trucks account for about 72% of the North America vehicle sales of FCA, Ford, and GM combined, and an even larger share of their profits. Here's a fascinating article on GM and the change underway there.

VW BUDD e electric van to be available in a couple of years.

The role software and hardware (again, the smartphone supply chain) will play, and how companies execute in those areas, will almost certainly be determining factors. For example, it will be much more difficult to build a reliable car if the software and hardware systems are a combination of legacy and new, or if every car needs to be built to handle "optional" autonomous or driver-assist features. Will the car makers look to the existing supply chains in place, or be able to make huge and difficult choices to trust new suppliers with new components?

An example of this is NVIDIA which is building out a significant and integrated suite of car electronics. Basically making a car SoC. NVIDIA is not Bosch or Delphi.

NVIDIA's car "SoC".

While we were at CES, Tesla updated their customers' vehicles with the ability to summon the car. In a world where car makers still mail out DVDs or USB sticks to update maps, it is interesting to think about how things need to change inside those companies to enable that sort of customer experience.

If you think all of this is just being pro-Valley or cynical, then I would offer this counterexample. Mercedes' announcement at CES was that they intend to announce, by the end of the year, their strategy for electric cars. So in a year they will announce what they intend to do (of course many people are working on that now). Their clear focus is on driver assist leading to autonomy (where they might be very advanced).

For me, the most exciting transportation product was the Gogoro SmartScooter, which was also at the show last year. Think of the product as an electric Vespa with a max speed of 60mph and a range of about 60 miles at 40mph. But you don’t recharge it while parked, you pop out the battery and pop in a new one (or two) at one of many battery stations around town. You can own the scooter or potentially share them the way Divvy bikes are shared in major cities in the US. The company also has a home station to charge batteries in two hours.

This feels like a potential future of urban transport in most moderate climates.



Gogoro SmartScooter and public charge station.


Gogoro Energy Network showing charge stations in Taiwan.

Screens keep getting better

It used to be that the big (or flat and big) news at CES was about TV. Booths used to be filled with TVs. TVs are important but this year saw a greatly reduced push around smart TVs and a much bigger emphasis on overall image quality.

The reason for this is HDR and 4K. While most people gravitate to 4K (which debuted two years ago and is widely available now, including streaming content), the real news is HDR. HDR is "high dynamic range," or the ability to show a wider range of brightness. If you imagine scenes from Jessica Jones or Daredevil, HDR makes those scenes so much better, much more like what you would see in person. Unlike more pixels, which most people in most rooms can no longer discern, HDR is immediately visible to most viewers. Here's a great thread on Stack Exchange about dynamic range.

Standard 4K image on left, HDR image on right.

All the major companies were showing off HDR displays. There’s a new industry acronym Ultra HD Premium which signifies an appropriate level of dynamic range. Netflix and other content providers will also be supporting HDR.

Take a moment to consider why this is not like the transition to HD and why it will happen much faster. HD required new content: going back to existing libraries of film and rescanning to make Blu-ray discs, which you then needed to buy to play in your Blu-ray player. Network TV had to make the transition. Broadcast spectrum had to be allocated, and so on. Now this is all about software. Recordings are captured in RAW, which holds the information needed for HDR (though more can be done in sensors for sure, which is a huge opportunity!), and re-encoding with enhanced metadata can be done as desired. Even distribution is no longer focused on just studios, with new content coming from new players who bring a software perspective.
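The software nature of this transition is easy to sketch. Below is a toy illustration (my own simplification, not any studio's actual pipeline) of how the same linear scene values can be re-encoded per display: tone-mapped down for a standard screen using the classic Reinhard operator, or passed through with more of the range intact for an HDR screen.

```python
# Toy sketch of why HDR is "just software": the same linear scene
# values can be re-encoded differently per display capability.

def reinhard_tonemap(luminance):
    """Compress unbounded linear luminance into [0, 1) for an SDR display."""
    return luminance / (1.0 + luminance)

def encode_for_display(linear_values, display_max_nits):
    """Clip to what the display can render; an HDR display keeps more range."""
    return [min(v, display_max_nits) for v in linear_values]

scene = [0.1, 1.0, 4.0, 16.0]            # linear luminance, arbitrary units
sdr = [reinhard_tonemap(v) for v in scene]
hdr = encode_for_display(scene, display_max_nits=10.0)

print(sdr)  # bright values crushed toward 1.0
print(hdr)  # more of the original range survives
```

The point is that nothing here requires new discs or new broadcast spectrum; re-encoding an existing RAW-derived master for a better display is a computation.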

Dolby is doing very exciting work to bring HDR to theaters and to home screens. They also showed some incredible sound work called Atmos, an encoding that allows a single speaker bar to use a large number of drivers to deliver 7.1 sound. It was incredibly cool to sit there and hear sound coming from everywhere (Mad Max!) from one Yamaha sound bar, paired with the super cool LG HDR OLED display.

Still, TVs continue to get better. OLED continues to amaze, and seeing a TV that is a sheet of glass is the star of the show. The Samsung SUHD 8K was the one to watch this year.

Samsung 8K HDR Quantum Dot display. Yowza!

In the magic of software and physics department, Sony was showing short throw laser projectors that were mind blowing. One was a 40″ image projected from a 4″ cube speaker essentially against the wall. The other was a 100″ image from about 12″ away. Amazing! (Super interesting how the digital sensor captured the image by the way — some insights into how the lasers work!)

Sony short throw projector. Image is about 40″ projecting from the cube speaker essentially next to the wall.
Sony short throw projector. Image is about 100″ projecting from the floor console about 12″ from the wall.

Image capture is ubiquitous

Cameras are everywhere in products. Once again this is enabled by the supply chain created by the pull of smartphones. Incredibly high quality cameras can be integrated into very small places and draw very little power. If everything is connected, then you don't even need to store images locally or have an interface to interact with them.

Cameras are gaining more resolution, working better in low light or infrared, and offering new capabilities driven by software. In particular, motion sensing, face detection, and object recognition are becoming key parts of cameras. Cameras as stand-alone products, though, are much less interesting than cameras integrated into environmental or people monitoring, smartphones, cars, doorbells, or industry/job-specific functions (police cameras, for example). As with home security, the supply chain makes it easy to have the camera, but software is what makes it useful.

A great example of this at work is the Blink camera. By using motion detection software and Bluetooth LE, this camera becomes completely wireless—it uses CR123A batteries that can last 6–12 months. Think of it as a completely wireless Dropcam (though not one you would watch all the time, unless you wanted to be changing batteries). Netgear's Arlo is a camera taking a similar approach. These cameras communicate over Bluetooth to a small powered base station that connects to a wired network.

Blink camera operates completely wirelessly using batteries and Bluetooth LE.

In a dynamic similar to smart watches, the action cameras seemed to struggle this year. There was no pulling back from extreme sports or just extreme in general as the main purpose for GoPro, for example. It isn’t clear to me how much bigger this market can be. There were a vast number of GoPro-like clones out there.

There were quite a few “VR” cameras which were any number of cameras (depending on lens) designed to capture 360°. The playback would use Google Cardboard. A good example is the Nikon KeyMission 360 which captured 4K images.

Kodak’s Super 8 is a high tech Super 8 movie film camera. It is quite the hipster product. It shoots film, which comes in classic Super 8 cartridges that you ship back to Kodak where they are developed and then delivered as scanned footage. As a silver-halide-enthusiast, I find it very neat but I am a bit skeptical. What might have been more interesting it to build a camera that had the UX affordances of shooting with film but the convenience of digital (along the lines of a Nikon Df still camera).

Kodak’s Super 8 film camera and trusty Tri-X film!

One of the additions to CES this year was dedicated space for crowdsourced ideas/companies, such as those from Kickstarter. One project in this part of the show was the Enlaps camera. In one package the full range of the supply chain comes together: cameras, solar, mobile data, and cloud services. The product+service is incredibly cool and solves what is traditionally a very difficult problem: capturing long-term, time-lapse video from a remote location. While even phones have interval image capture, the ability to manage power, control the camera, and monitor what is going on is enormously complex (see http://ohioline.osu.edu/w-fact/pdf/0021.pdf for wildlife capture, which is like time lapse triggered by motion). I love the "set it and forget it" Enlaps product. It has two 4K cameras, solar power, and a web service that handles the complexity of time-lapse intervals so you can easily stream the results to your phone. You could use this for sunrise/sunset, changing seasons, tidal pools, wildlife migration, construction projects, crowd flow, events, and more.

Enlaps.io is a fully self-contained interval camera. It shoots 4K images at intervals and sends them back to your mobile device. The camera operates off solar power and can be placed remotely for as long as you need.

Once again our pets get some love from cameras too. In this case your pet can "FaceTime" with you by pushing a very Pavlovian button. Of course it isn't enough to just see your pet. With PetChatz you can also release pet-pleasing smells and treats using your mobile device.

PetChatz is video conferencing and treat dispensing for your pet. No, really it is.

Small computers better and cheaper for everyone

There is not a lot of news at CES for small computers, as most companies save that news for Mobile World Congress. What is there shows the continued ability of the mobile supply chain to deliver all the components for a small computer, now packaged in ever-improving quality at ever-decreasing prices.

All of the vendor phones displayed on the floor were of course Android. It is worth noting that one almost never saw Android being shown as part of products in booths unless the product was doing something it probably shouldn't be doing (i.e. root kit, peripheral, access to some low-level OS thing). The common thing I heard in booths with Android was "we'd like to do this but Apple won't let us." Personally, this is less of a call for Apple to open things up and more of a call for developers to think up different solutions; at least that's my view of prioritizing "consumer electronics quality" over "get stuff done." Most of the scenarios that were Android-only seemed somewhat dubious to me.

That said, there were dozens of phone makers with very high quality builds. This one from nuu mobile is part of a line that runs from $99–$299 direct to consumer. There are lots of these companies, differentiating mostly by channel approach (country, carrier, unlocked, rate plans) and less and less by software, I think.

Top of the line nuu mobile Z8 retailing for US$299.

At the extreme low end, some of the China manufacturers still show some pretty old school stuff. I just liked how this ODM model had the generic “Brand” on it waiting to be picked up by a wholesaler.

Old school phone labeled “Brand” waiting to be picked up by a wholesaler.

And in a throwback to the smaller-is-better era, here is a full voice/text phone done as an earbud. Those are actual buttons. No word on talk time. I actually saw it work!

Full ear bud phone. No really.

Big computers better but not game changing

There were quite a few new Windows 10 laptops, all-in-ones, and big tablets announced this year (most with Spring availability).

The general trend is thinner designs, higher resolution screens, and Intel's new Skylake processors. The Samsung 9 and the LG Gram garnered a lot of attention. The Samsung comes very close to the MacBook in form factor, with a larger screen. As a Windows PC it has more ports, of course. The LG is crazy light, as you can see in the photo below. They were not quoting any battery life and there is no touch screen. Both skipped USB-C for power, though, which is disappointing in terms of specs.

LG Gram Windows 10 PC is super light.
Samsung 9 is a little thicker and heavier but features a touch screen.

HP, Dell, and Asus also had new PCs.

The area where PCs still currently lead phones is in graphics capabilities. But you can only experience this if you use the massive discrete cards from NVIDIA (primarily) and not with the integrated graphics on every laptop. If you want these insanely powerful graphics capabilities (say for deep learning experiments, bitcoin mining, CAD/3D, or just gaming) you have been stuck with a pretty hard to deal with tower. There’s help on the way in two neat PCs.

One is the MSI 27 XT all-in-one, which takes a classic 27″ AIO and bolts on a sort of backpack for a PCIe graphics card. It isn't pretty, but it is a much more viable way to get the power you need, assuming the display is good (which I was not able to verify).

This MSI 27″ All-In-One has a discrete graphics card cage on the back for a PCIe card.

Razer, which has built a great community around its PCs and accessories, offered up a pretty unique combination. The Razer Blade is a high end ultrabook stylized for gamers (colored LED keyboard lights). It runs high end Intel Skylake parts (Core™ i7–6500U) and has a great screen. It would be an accomplished Ultrabook on its own.

Via Thunderbolt 3 in a USB-C form factor you can attach a mid-tower sized box with an external PCIe graphics card (as well as some additional ports). This turns the Ultrabook into a pretty high end workstation. I admit to taking a wait-and-see attitude regarding the quality over time and the security of all those kernel mode drivers arriving via Thunderbolt plug-and-play, so I'm looking forward to seeing how this evolves in real world settings.

Razer Blade ultrabook and Thunderbolt 3 connected PCIe desktop graphics Core accessory.

Finally, in the tablet form factor, Samsung announced the TabPro S for Windows 10. The most interesting thing is that it carries integrated LTE, which you don't see often. The tablet itself is a 12″ slab with a great sAMOLED display. It runs the updated Core M processor, which runs everything anyone would need unless you're running Visual Studio or full-time CAD/CS. The specs are great. In practice the soft, z-fold cover is awkward (just watching the booth folks deal with it), doesn't stay attached, and doesn't support the 12″ screen while using touch.

Samsung TabPro S


It is tough to beat this story of entrepreneurial spirit. Meet 13 year-old Taylor Rosenthal. He’s an entrepreneur from Opelika, Alabama. As an avid team sports participant he has more than once run across the challenge of needing the right first aid gear for minor cuts and scrapes on the playing field. He developed a set of kits and a vending machine called RECMED. He made his way to CES to show off his company, which he told me is remaining independent even though he already received a significant buyout offer!

Way to go Taylor!

Taylor Rosenthal, CEO and Founder of RecMed.

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

January 11, 2016 at 10:59 am

Posted in posts


“Hallway Debates”: A 2016 Product Manager Discussion Guide

When it comes to innovation and roadmaps there's nothing special about the start of a new calendar year, other than it being a convenient time to checkpoint the past year and to regroup for the next. Everyone's feeds are going to be filled with "best of," "worst of," and "predictions" for 2016, and those are always fun. What I've always found valuable is taking a step back and thinking about the themes that will impact decisions at the product and business level over the next year.

In that spirit, the following is a selection of “hallway debates” that (I think) are sure to occupy the minds of product managers and product leaders over the next year. The hallways can be literal or figurative (i.e. twitter included).

Innovations in 2015

No doubt this year was filled with more than a fair share of “nothing new” or “incremental” views on innovation, but as usual I don’t think that is the case.

The big companies were busy with Watch, iPhone 6S, iPad Pro, Windows 10, Surface Book, HoloLens, Nexus 6P, Pixel C, and more. Amazon delivered an amazing number of AWS features and capabilities (conveniently listed here). All the while, startups continued to iterate, create new categories, and introduce new technologies (my biased, favorite list is on producthunt.com every day).

All in all, 2015 was tremendously busy with so many of the products introduced clearly pushing the state of the art.

Musts for 2016

Before getting to the choices that have nuance or subtlety, the following are two top-of-mind factors that get to the heart of building products in 2016. These aren’t debates at all, but creating actionable and measurable plans should be a priority for every team and company.

Diversity and Inclusion

This year, rightfully, brought an overdue and intense focus on the role of diversity and inclusion within leadership, engineering, and business in general. The time is right for an exponential change in our approach. We all know this is more than an opportunity; it is a necessity. Products are used by an infinite variety of people from all walks of life, backgrounds, and abilities, and it follows that products should be built that way as well.

Looking at this through the lens of rapid change makes clear why working on this early is so critical: startups need to address it early in the company (and team!) lifecycle, or face the need to address it dramatically within an established organization later. The simple math (and yes, this is a simplification) done on this model highlights just how difficult it can be to catch up if there is even a small and systematic bias along just one dimension (men/women) in an organization.
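That compounding effect is easy to see in a toy version of such a model (the head counts, growth rate, and hiring split below are illustrative assumptions of mine, not figures from the linked model):

```python
# Toy model: even once hiring is corrected, an organization that starts
# skewed takes many annual cycles to reach parity along one dimension.

def years_to_parity(initial_a, initial_b, growth_rate, hire_share_b):
    """Annual hiring cycles until group B reaches 50% of the org,
    assuming all *new* hires split hire_share_b (> 0.5) to group B."""
    a, b = float(initial_a), float(initial_b)
    years = 0
    while b / (a + b) < 0.5:
        new_hires = (a + b) * growth_rate
        a += new_hires * (1 - hire_share_b)
        b += new_hires * hire_share_b
        years += 1
    return years

# An 800/200 org growing 20%/year that hires at a perfectly balanced
# 50/50 only approaches parity asymptotically; even a 65/35 split in
# favor of the underrepresented group takes years.
print(years_to_parity(800, 200, 0.20, 0.65))  # → 7 (years)
```

The exact numbers matter less than the shape: a stock problem cannot be fixed quickly by a modest change in flow, which is why starting early is the whole game.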

The discussion needs to happen; from it, plans and action on those plans need to follow.

Security and Privacy

The run of security breaches in 2015 continued unabated. The damage continues to go up, and there seems to be no end in sight when it comes to securing legacy infrastructure. There is, however, some good news.

First, the move to mobile, particularly iOS, and to third-party SaaS services affords an opportunity to reset the security landscape. Of course these new operating systems are not absolutely secure, as we have seen, but the level of security that comes from a new architecture and from up-front investment in security places you on much firmer ground. There is no denying this, so if you want to be secure, getting more of your use cases to mobile is going to help.

As makers, many features that were previously "good to have" are now part of the first wave of product work. Start by building on top of existing identity and authentication methods, encrypt all communication channels, and encrypt stored data within your own services. Previously these were viewed as enterprise features to be added later or to charge more for; now they are essential to bootstrapping a service. They are just a start: necessary but not sufficient.
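As one concrete example of "essential from day one," storing only salted, iterated password hashes costs almost nothing with the standard library. This is a minimal sketch of the idea, not a complete identity solution (the function names and iteration count are my own choices; better still, lean on an existing identity provider as suggested above):

```python
# Minimal sketch: never store passwords, only salted, iterated hashes.
# A real service should build on an existing identity provider; this
# just shows the baseline is cheap to get right with the stdlib.
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200_000):
    """Derive a PBKDF2-HMAC-SHA256 digest; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected_digest, iterations=200_000):
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected_digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

The same "do it from the start" logic applies to TLS on every channel and encryption of data at rest.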

Nearly every discussion about what to purchase or what to build next needs to happen in the context of security and privacy.

Choices for 2016

Product choices are never as binary as we would like — there’s rarely an absolute. In general, there are nuances and more importantly context that drive a particular approach. Therefore, hallway discussions are rarely won (or lost) but are a necessary part of deciding on product strategy.

The best product manager or product leader discussions to have in 2016:

  • Invest in Deep Learning
  • Bet on Mobile OS
  • Ride the Mobile/ARM Ecosystem Wave
  • Compute, Not Just View, on Mobile
  • Value the Open Source Community as Much as the Code
  • Go with Public Cloud
  • Choose Platforms Carefully
  • Track Computer Science
  • Avoid the Bridge
  • Create “The Plan” with Quality In Mind

Invest in Deep Learning

This past year has been an incredible year of progress in artificial intelligence and machine learning. Progress has been so significant that the pinnacle of tech leadership has articulated a growing concern about the risks of AI!

Even so, this generation of AI has gone from recognizing cat videos to being able to quickly and easily tag your friends in photo galleries and online services. Nearly every good recommendation engine is now powered by deep neural networks and machine learning.

A couple of great examples of machine learning include visual search at Pinterest and images within Yelp listings. This year even saw Google generate smart email replies for you! These are incredible advances in how problems are solved. The common thread is classifying existing large and labeled data sets within deep neural networks. This contrasts significantly with previous approaches to these same problem areas that would use click streams or other algorithmic approaches.

If you’re still coding recommendations, classifications/labeling, or automated generation of content using algorithms or simple networks, then this is the year to investigate how you would use these maturing approaches. They are all better by leaps and bounds over existing solutions that rely on smaller data sets and algorithmic approaches.

As with every previous AI advance, it is likely that some aspects of these new approaches will be combined with the current state of the art. In particular, the role of existing linguistic solutions will prove incredibly valuable for smaller data sets or difficult to classify solutions for natural language queries or processing in general. Pay close attention to how the research advances though because the role of deep learning for these scenarios is changing quickly.

Bet On Mobile OS

Are tablets turning into laptops? Are laptops turning into tablets? Are tablets losing out to larger screen phones? The permutations of form factors have been dizzying this year. Regretfully, most of this industry dialog is confusing and confused. If you are making client apps, then you could easily get drawn into this confusion and might miss the key decision.

The critical decision point for any new code is to focus on the platform and OS and less on the size and shape of the device. The forward-looking choice is to focus on mobile operating systems: always connected, app stores, touch, security, battery life, and more. These attributes are step function improvements over the x86 platform and given the ongoing investment up and down the stack the gap will only widen. For more on this see a post I wrote “Mobile OS Paradigm”.

Enterprise applications are sure to see a significantly increased level of focus and support on mobile platforms for a number of reasons. First and foremost, the level of security on these platforms is so much improved there isn’t even a debate. Second, enterprise capabilities for managing data landing on these devices continues to improve at a rapid pace and already greatly exceeds all existing approaches. Third, renewed efforts from Apple and Google on enterprise capability enable new scenarios and new approaches.

This topic will continue to generate the debate: "on a phone or tablet, I can't do everything the way I am used to." That is a debate that can't be won; the complaint is both true and not useful. The reality is that the style and products of work are rapidly changing, and so are the tools that support work. Ask yourself a simple question: how often do you reach for your phone even when you're sitting at a desktop or a laptop is nearby? The more you do that, the more everyone will be making an effort to make sure important stuff happens on that mobile OS.

A large number of workloads will continue, “forever”, to be laptop/desktop centric, starting with software development itself. We’re 30 years into the PC and server revolution and many workloads still happen on mainframes (or with printers or desktop phones). The presence of some scenarios does not invalidate forward-looking decisions. It takes a very long time for an installed base of something to drop to zero.

Ride the Mobile/ARM Ecosystem Wave

The iPhone 6S launched with a new ARM chipset designed by Apple and manufactured by both Samsung and TSMC, along with a wide range of components, almost none of which are single-sourced. While this is the result of excellent work by Apple, it also speaks to the incredible strength of the ARM/mobile ecosystem. By any measure, the ability to deliver tens of millions of devices sourcing so many incredibly advanced components from so many vendors is unprecedented.

One of the milestone advances this year was the hardware capability of the iPad Pro. The compute performance of this iOS-based tablet exceeds that of a mainstream laptop. The most striking gains came in mobile graphics, which now represent a firmly established leadership position. Those clinging to the last generation were quick to point out that there are more powerful devices available, or that the software doesn't allow the same capabilities to shine through. Clearly that is a short-sighted view given the level of investment, the number of players, and the OS innovation driving the mobile ecosystem.

Such innovation can be mostly transparent to software developers. If you are, however, building hardware, or looking to innovate in software using new types of sensors or peripherals, then betting on the ARM ecosystem is a no-brainer. The internet-of-things revolution will take place on the ARM ecosystem.

The ecosystem will be on display in full force at the Consumer Electronics Show as it always is each year. We will see dozens of companies making similar products across many industries (the first stage of the Asian supply chain at work). Take note of what is released and you will see the ingredients of the next wave of devices. Integrating multiple devices from a hardware perspective or building innovative cloud services will prove to be great ingredients for new approaches.

Compute, Not Just View, on Mobile

Given the innovation taking place in the ARM/mobile ecosystem, it is fair to ask “what should we do with all this capability?”

The first 10 years of smartphone apps could be characterized as trying to squeeze the vast capabilities of a cool web service onto a tiny screen. With larger screens and innovations in user interface we're seeing more done in apps. Now, with so much more compute and storage on mobile devices, we should see innovation in this dimension too.

There are many scenarios where the architecture of roundtripping to a server is both slow and costly. The customer experience could be improved with the use of device-side compute or caching on the device.

To illustrate this point, consider spell or grammar checking (something most people don't implement themselves, so it makes a good example). Before connectivity, dictionaries (or rules) were authored by humans, installed locally on devices, and rarely updated (how often does language change?), but lookups were speedy. The internet showed us that language usage and terms change frequently, and spelling in a browser became a service with client-side rendering of results: much better results, but with high latency.

Today the best spelling dictionaries (and suggestions) are derived from deep learning, training models on large corpora. Even with great connectivity, the latency of a service-only experience is too noticeable. Given the compute on today's devices, it is not unreasonable to start to see models built on massive data sets packaged up for specific queries or classifications locally on a device.

This might extend to many examples where learning and models are providing classification or recognition.
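A sketch of the on-device half of that idea, in the spirit of Norvig-style spelling correction: the word-frequency table below stands in for a model that would be trained server-side on a large corpus and shipped to the device, so suggestions need no network round trip. The words and frequencies are invented for illustration.

```python
# On-device suggestion sketch: the "model" (a word-frequency table,
# hand-made here) is built from large data server-side and shipped to
# the device, eliminating the per-keystroke network round trip.
import string

FREQ = {"the": 9000, "they": 3000, "their": 2500, "there": 2400}  # shipped model

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {l + r[1:] for l, r in splits if r}
    transposes = {l + r[1] + r[0] + r[2:] for l, r in splits if len(r) > 1}
    replaces = {l + c + r[1:] for l, r in splits if r for c in letters}
    inserts = {l + c + r for l, r in splits for c in letters}
    return deletes | transposes | replaces | inserts

def suggest(word):
    """Return the highest-frequency known word within one edit, else word."""
    if word in FREQ:
        return word
    candidates = edits1(word) & FREQ.keys()
    return max(candidates, key=FREQ.get) if candidates else word

print(suggest("thier"))  # → their
```

A production version would ship a learned model rather than a frequency table, but the architecture (heavy training in the cloud, cheap fast inference on the device) is the same.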

Value the Open Source Community as Much as the Code

There is no doubt that the biggest change in software since the internet has been the way open source software now completely dominates the entire industry. By any measure of innovation, open source drives the software that is eating the world.

The not-so-secret ingredient of open source is the community, which is created from the very start of a project. Early on, the most successful open source projects focus on enrolling the community and creating shared ownership of both the direction and the implementation of the project. You can see this in GitHub stats around contributors: when people joined and how much they contributed.

Deciding which projects to use in your work should be influenced by the strength of the open source community contributing to a project.
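One rough way to turn "strength of community" into a number (a heuristic of my own, not a standard metric) is to look at how concentrated the commits are, using contributor statistics like those GitHub exposes:

```python
# Rough heuristic for community health: how concentrated are commits?
# A project where one person wrote nearly all the code carries a
# different risk profile than one with many substantial contributors.
# The contributor counts below are illustrative, not real project data.

def concentration(commit_counts):
    """Share of all commits made by the single largest contributor."""
    total = sum(commit_counts)
    return max(commit_counts) / total if total else 0.0

community_project = [400, 350, 300, 250, 200, 150, 100]  # many contributors
single_maintainer = [1700, 30, 15, 5]                    # one dominant author

print(round(concentration(community_project), 2))  # → 0.23
print(round(concentration(single_maintainer), 2))  # → 0.97
```

A low concentration is not sufficient evidence of a healthy community, but a very high one is worth a hallway conversation before you bet on the project.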

Established companies have benefitted enormously from open source as a foundation, much of that housed within their mass-scale data centers. Recently, many projects which were developed inside companies are being “open sourced” after the fact. This is certainly a positive in many ways, but also offers a different model from what has previously been the most successful.

In contrast to traditional open source projects, these open sourced projects are looking for a community to join in and validate them after the foundation has been built. They are not really looking for a community to shape or influence the project, at least not in ways that will influence what that company sells or offers. Too often it isn’t quite clear how the future of a project will be managed relative to the open sourced code.

It is still early in this new "movement" and worth paying attention to. For the time being, joining projects early and/or betting on projects with a genesis as part of an open source community seems to be the best bet. The leaders of any movement tend to be people who build, not inherit, projects.

There are companies commercializing existing open source projects. In this case, seeing how those companies continue to contribute to and participate in the community, especially one their founders created, is an easy way to validate a bet on a project. My belief is that the leading companies are created with the leaders who created the project and who continue to participate in its evolution. To spark the hallway discussion, check out this post by a16z's @peter_levine about the uniqueness of Red Hat's success.

Go with Public Cloud

Public cloud or private cloud? To many, the answer that sounds best is hybrid cloud: the best of both worlds. The lure of the hybrid cloud is incredibly seductive to enterprise customers, the idea being that you can get "credit" for, or "accelerate," the move to the cloud while keeping some of your existing infrastructure on-prem. The arguments are well-known (data co-location, compliance, etc.), but unfortunately they are never made relative to the two factors that matter most.

First, there is no real “architecture” for a hybrid cloud — the very nature of the cloud is a new way to build and scale applications.

Second, the time and effort to create such an architecture, and the complexity introduced, all but guarantee building a one-off system that will only grow increasingly harder to maintain, essentially impossible to secure, and ultimately painful to migrate to a modern cloud.

This is not a semantic debate or a debate about a “pure cloud”, but about an architecture that is fairly concrete. The cloud is a public cloud, and it is far more than the ability to create virtual machines in a data center (aka a private cloud). Even if you’re using a public cloud, it is important to consider what your architecture or runtime looks like relative to scale — simply moving your VMs to another data center isn’t a cloud either.

From your (potential) customer’s perspective, the arguments for a private cloud (which isn’t really a cloud) or for simply avoiding a public cloud are well-worn and simply not compelling. Security, scale, cost, and more all tip in favor of the public cloud without much debate. In almost every regulated environment that touches the internet, saying no to the cloud makes less and less practical sense. It is still going to be important to build that hallway feedback loop from the customer to the product team to make sure you’re well-versed in this dialog and focused on winning the right customers.

If you’re building enterprise software then you’ve already been wrestling with the cost, complexity, and relative difficulty of offering customers a consistent, scalable, manageable, and secure on-prem solution. Some will deliver this, but that will not be the norm, and enterprise customers should not expect (or want) this from every vendor.

The cloud is not a call to migrate the largest legacy systems, but a way to think about new systems and about innovating on top of existing systems. That’s the best way to navigate this debate and to build forward-leaning products and services.

If an enterprise is constrained in such a way as to believe it can’t move to a public cloud, then by far the best approach is to avoid investing in a hybrid or “private” approach and stay on the current course and speed until you can invest what is needed.

Choose Platforms Carefully

Everyone building an app or a site wrestles with the platform choice, which has two challenges.

First, everyone with an app wants to make sure it can be used by anyone from most any device.

Second, an app is great but quite a few people (along with enterprises) want to use it from their desktops. In the meantime, doing a great job scaling the product, adding features to win deals, and staying ahead of competition are taking all your time.

The siren song of cross-platform will continue unabated this year. The ability to deliver a winning experience from a single code base targeting increasingly divergent mobile platforms will continue to prove elusive (and the presence of an exception or two does not make it any more possible). As with past transitions and even some current efforts, getting the first bit done can lead one to believe that you’ve found the magic and it can work. This too is part of the pattern of cross-platform. Over time, an increasingly short time due to rapid platform evolution, the real world catches up with even the best efforts. There’s more on this topic here.

The main mobile platforms are innovating in ways that do not have symmetric or analogous capabilities: the rise of the Swift language, the diverging user interface models (voice, multi-app models), the changing hardware landscape (force touch, tablet sizes and resolutions), and platform services (payments, identity, service integration).

This only leaves one viable, long-term option for any mobile app that is key to the overall value equation, which is to manage platform efforts as dedicated and separate teams. Managing this is always expensive and often complex. The key is how product managers lead the choice and execution of shared features and platform specific features.

As if this isn’t enough, some set of enterprise tools are seeing demand for apps on the desktop that go beyond browser apps. A number of mature enterprise efforts have started to deliver App Store or downloadable traditional desktop apps in addition to the browser. The implementation of these has consistently been to wrap the browser version in a native frame window and use an embedded webview for the app. This affords some base integration such as convenient app switching, window management, and persistent logon. Few offer any in-depth desktop OS integration and most still remain behind the browser in key capabilities (drag and drop for example). This is especially true if the primary use case is running across multiple browsers given the effort to maintain consistency in those implementations.

The real versus perceived value of these webview based solutions is debatable. The resources required and the implied long-term commitment to customers are not. In addition, the implementation choice leaves little room for true desktop integration. Unless you’re willing to commit to building a native experience, it isn’t clear that the investment for these apps represents the best way to win or stay ahead in the marketplace.

Track Computer Science

Some of the most interesting startups are those coming out of university or industry research laboratories with open source projects, or those bringing entirely new approaches to traditionally “well-understood” computer science fields. It has been a long time since so much of what is new and interesting commercially was also among the newest and most interesting topics being researched by graduate students in university research labs.

Innovation tends to come in waves where whole new ideas take shape and then, for a “generation”, those ideas are executed on to the n-th degree. A classic example of this is the relational database model and SQL. Born out of IBM’s industrial labs (in what is no doubt a whole other era in corporate labs), it was then iterated upon at the database layer by IBM, then Oracle, then many others. It created an industry of tools and products, along with perhaps the largest community of those skilled in the core SQL concepts. Today we have a whole new generation of database technologies, all of which have their roots in research labs around the world.

Deep learning has roots in research labs and is now on a very fast pace to broad commercialization. Virtual reality and augmented reality approaches build on a wide array of computer science inputs.

All of this is a way of saying that your hallway discussions and debates this year should include computer science. Track the conferences. Read the summaries. Dig into the papers. Even if you’ve been out of school for a while or never really thought the research world was all that practical or interesting, my view is that we’re in the part of the cycle where there is much to learn by paying attention.

Avoid the Bridge

The hardest product choices remain those to be made by enterprise products. Whereas the consumer world is almost always about moving forward, being new, and building on the current, the enterprise world always has to balance the installed base, legacy, and compatibility. All too often the inertia behind the choices already made actively prevents forward-looking choices that are inherently disruptive (disruptive in the dictionary sense as well as in the Silicon Valley sense).

The natural outcome of this is a bet on a transitional period or a bridge technology — the enterprise world embraces such approaches to no end. Unfortunately, history has consistently shown that bets on the bridge not only fail to bridge to a new world, they ultimately prevent you from fully participating in the changes. Even more challenging is that during the next technology wave you find your product an additional generation behind, facing an even bigger challenge. This goes beyond shiny new technologies simply because right now, whether you build client code, create services or APIs, install infrastructure, or build and analyze data, we are in the midst of an exponential rate of change relative to all of those. When you miss a beat during exponential change you simply can’t catch up — such is the power and challenge of that pace of change.

As a result, it is these “bridge” or “best of both worlds” topics that lead to the most challenging hallway conversations. The hope here is to push in the direction of forward, bringing some comfort to those leaving the well-worn and understood past behind.

Create “The Plan” with Quality In Mind

Product management is constantly trying to get more done, in less time, with fewer resources. No one wants to move slowly, get mired down in some big-company planning effort, or, worse, fall behind competitors. That’s a given. It is also nothing new.

For quite some time now we as an industry have been on a bit of a roller coaster when it comes to planning what work will get done. At one extreme, the whole idea of having a plan was effectively shunned in favor of minimally viable products (a valuable concept often misapplied to mean “toss it out there”) or in favor of putting something out and then letting failure determine what comes next. The other extreme essentially created a process out of reacting to data — the plans for what to do were always informed by what was going on with the live site or testing of changes to existing products.

This past year was one marked by failures of quality execution by even the biggest companies. You would have to look hard to find a company that made significant product changes without also receiving some (sometimes significant) critical feedback on the quality (robustness) of execution. Apologies, commitments to “less features, more quality”, and fast revisions/reversions seemed to be the norm this year.

At the same time there appears to be a bit of a decommit from the discipline of testing. In my view this is more semantic than real, given that the work of designing, building, and running tests still needs to get done. The jury is still out on this and I’m personally not convinced that software is ready to be free of a QA discipline.

One could view this as a pendulum, but in practice this is the price our industry pays to work at planetary scale. It might be the case that your product is used only by early adopters and tech enthusiasts, but those very people are having their quality bar set by the broader industry benchmarks.

Quality is the new cool. Releasing without the need to re-release is the new normal. Testing is the classic case of what’s old being new again.

When you’re having the debates about what to get done and when, this will be a year where getting something done right, done well, should trump getting something done today or halfway, or getting something done that you know isn’t right (in a big way).

You do not/should not need to revert to a classic “waterfall” (a term applied with disdain to any sort of planful process). However, some level of execution rigor beyond the whiteboard is the kind of innovation product management should bring to the table this year. There’s no reason that products can’t work very well when they are first released, even when you know there is much to be done. There’s no reason you can’t have a product plan and a roadmap at a useful level that is written down, without it being a burdensome or overly-structured “task”.

At the risk of self-reference, two posts on these topics from this past year I’d leave you with: Beauty of Testing and Getting (the Right) Stuff Done.

Wishing everyone a Happy Holidays and Best Wishes for the New Year!

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

December 15, 2015 at 11:22 am

Getting (the Right) Stuff Done

A key role of product management (PM), whether as the product-focused founder (CEO, CTO) or the PM leader, is making sure product development efforts are focused. But what does it mean to be focused? This isn’t always as clear as it could be for a team. While everyone loves focus, there’s an equal love for agility, action, and moving “forward”. Keeping the trains running is incredibly important, but just as important and often overlooked is making sure the destination is clear.

It might sound crazy, but it is much easier than one might think for teams to move fast, get stuff done, and break things that might not be helping the overall efforts. In fact, in my experience, this challenge has become even greater in recent years with the availability of data and telemetry. With such, it becomes very easy to find work that needs to be done to improve the app or service — the data is telling you right then and there that something is tripping up customers, performing poorly, or going unused. Taking action makes it easy to feel like the right thing is happening. It feels like moving forward. Everyone loves to get stuff done. Everyone feels focused.

But is the team focused on the right work to achieve the right results?

Just a little process

Two important realities represent a constant balancing act when leading a product. As a PM you are applying finite resources to market needs in the march towards product-market fit, or are working furiously to maintain a competitive lead. In addition to the new features there is the work that sales or customer success needs, and together those greatly exceed what can be delivered by engineering.

This problem doesn’t ever go away and is at the core of the role of product leadership — getting the right stuff done with the right priority.

In every well-run company there is a strong tendency towards action and a strong dislike (tending to revulsion) of process. In practice, the absence of a process is just as much of a process, just one without clear lines between action and result. A little bit of process (aka product management) can go a long way to having real focus and getting the right things done.

With a little bit of process, everyone on the team can have:

  • Shared views on what success looks like
  • Clear understanding of success measures and metrics
  • Easy mechanism to decide what should be cut or pushed out when things aren’t going as planned
  • Visible alignment between what work will impact what elements of success and which measures
  • Honest accounting of resources going into what big picture initiatives
  • Ample opportunity to participate in deciding what gets done and when

It is very easy to overdo process and go from a helpful tool to a burden people run away from. A personal goal has always been to be as lightweight as possible and to have a way of thinking about these needs that scales from projects lasting weeks to those lasting months or even longer.

My guiding principle, or golden rule, of process is to never ask for something from someone that does not directly help them get their own work done. Process is not about reports or “management”, but about making sure the work each person does is the most important work to do at the time.

Just a little framework

When most people think of coming up with a product roadmap or plan, they think of the ends of a spectrum. At one end there’s commonly the one-slide version labeled with months or years and a couple of bullet points at varying levels of granularity and decreasing accuracy as time goes on. At the other there’s the detailed, long-term strategic plan that most people can’t read through, often the work of consultants or staff at big companies.

There’s something in between that I’ve found very helpful in terms of framing the product roadmap.

The roadmap can be represented as a hierarchy of increasing detail. It starts with a mission covering years of aspirations for the whole company. From the mission follow the goals representing the next 12 months of work, supported by specific metrics or measures and owned by the various roles or disciplines in the company. Teams then come together to work on projects (or milestones in a longer-term project) that take weeks and are delineated by releasing products or programs to the market. Supporting the creation of projects are the day-to-day tasks at the feature level representing the work of individuals.

Throughout this whole system there is ongoing telemetry that is called upon to support the company with reliable data upon which to make decisions.

Mission

Whole books have been written about mission statements or the process of developing a mission statement. Nothing makes me groan more than the idea of having a meeting to craft a mission statement. We’ve all seen the results of these efforts that are an awkward combination of passive voice, comma splices, and breathless language. Companies exist for a reason and that’s the mission.

Missions are aspirational; they guide you for years and represent the reason for being. Everything you do should aim towards your mission, and how you do that is the work of the rest of the framework. Missions boldly stating that the goal is to “disrupt” tend to be a bit backward-looking or focused on the mechanism versus the outcome. Rather, a mission that defines a future state of being or a new world view is often the most enduring and more positive. The most important thing about a mission is that there is just one and it endures. Mission statements are best able to be expressed on t-shirts, or something close.

Goals

Most everyone thinks they already have goals. Too often, though, goals are expressed as metrics or scorecards, like being the most downloaded app, number of daily users, or bookings. These are easy to express and are the lifeblood of a startup. The challenge is they change frequently. As with any good code faced with something that changes frequently, the best bet is to add a level of indirection. A goal is the abstract view of metrics or measures.

Goals are strategic concepts such as retention, ease of use, acquisition, manageability, scale, success, and more. Through evolving telemetry you develop metrics to support the goals.

By using these abstractions you might come to realize you have more goals than engineers (or marketers, success partners, etc.), or that you end up with every person working on too many goals. This is part of the process of being focused about goals. For any one product there can only be 3–5 goals, and those fit on one “slide”, which includes the full spectrum of engineering, sales, and customer outcomes. This is a deliberate attempt to put in place some constraints up front.

Goals are then measured in specific ways over time. Metrics are the lifeblood of goals. Your goal might be acquisition, but the metric might be a specific mechanism of acquisition for a period of time; or your goal might be to improve scalability of the service, but the measure might be compute usage for some time and then storage usage for another.

Goals almost always fall to a specific function (or role) on the team, such as marketing, sales, or engineering. Having a full accounting of the goals and the associated metrics allows you to understand what will change as the team’s work progresses — what is measured will be what changes.
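The level of indirection described above can be pictured in a few lines of code. This is only an illustrative sketch (the goal and metric names and the numbers are invented, not drawn from any real product): the goal stays a stable name while the metric behind it evolves with telemetry.

```python
# Sketch: a goal is a stable strategic concept; the metric behind
# it can be swapped as telemetry evolves, without the goal (or
# anything referencing it) changing. All names are hypothetical.

class Goal:
    def __init__(self, name, metric):
        self.name = name      # the strategic concept, e.g. "retention"
        self.metric = metric  # the current way of measuring it

    def measure(self, telemetry):
        return self.metric(telemetry)

# Two candidate metrics for the same goal.
def seven_day_return(t):
    return t["returned_7d"] / t["signups"]

def thirty_day_return(t):
    return t["returned_30d"] / t["signups"]

retention = Goal("retention", seven_day_return)
telemetry = {"signups": 200, "returned_7d": 90, "returned_30d": 60}
print(retention.measure(telemetry))  # 0.45

# Later the team decides the 30-day measure is better; the goal,
# and the plan built around it, is unchanged.
retention.metric = thirty_day_return
print(retention.measure(telemetry))  # 0.3
```

The point of the indirection is exactly this last step: the metric changed, the goal endured.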

Projects

Projects are easy to understand — they are the releases or programs customers and the marketplace use and hear about. A project might be, for example, a full update to the service, the app, a new entry to the market, a launch, a campaign, or a major infrastructure change. Early on it is trivial to name the projects for a company. Very quickly, however, the number of projects can balloon and become increasingly difficult to track (and potentially to justify). There are SDKs, enterprise tools, segment campaigns, apps for different platforms or support for different browsers, and more.

The key reason to maintain a clear list of active projects is that momentum in continuing some project (failing to re-allocate resources) can often be the biggest constraint in getting the important work done. It is common to find yourself maintaining a project that no longer fits with the immediate needs, but there’s inertia that makes it hard to change. The most important task for product management is to make sure everyone is aware of the projects being undertaken. The more the company scales, the more critical it is to know what projects are active and what commitments the team is making to those. Even in the biggest companies, there are just dozens of meaningful projects.

A project has an ending date or deadline — not a month or a quarter (those are 30 or 90 possible dates) but a single date — and everyone knows when the project releases or is complete.

Tasks

When you work from mission to goals to projects, the most concrete expression of work on the team is the task. A task is the actual code to be deployed, the whitepaper to be written, the SEO tools to be employed, the launch event to be held, the features to be designed, and so on. While a few people might care deeply about or contribute to the mission, and executives generally focus on goals, and managers live whole projects, everyone is invested in tasks.

Tasks are defined by those that will do the work and those same people (or person) will decide how long it takes. Every person contributing to a project might have dozens of tasks. Tasks should be from 1/2 to 2 days — less and the accounting is too painful and more and it is likely the work is not understood enough to reliably schedule.

There are two main benefits from spending the time to create a list of work items. First, the project overall becomes increasingly predictable, which is important because of dependencies (such as front end and back end, or marketing activities). Second, when things aren’t going as well as planned there is a clear view of just how far off things are, along with a pre-computed list of potential savings to be had by cutting different tasks. Whether it is Asana, Trello, Sheets, Jira, or something else, the key is just having a system that goes beyond post-its around a monitor.

What is often overlooked is how much more effective everyone is when they know the why behind the what. Everyone will do better work if the worklist flows from specific projects which have goals that are measured in a particular way. Much of the work of this framework will prove to be making the connection from task all the way to mission.

Telemetry

One additional element that permeates all of your efforts is telemetry.

The most successful organizations are also fully instrumented organizations. Everything about code, customers, and overall engagement has telemetry.

Keeping an open mind and an open eye to a whole variety of measures is super important. It is just that, as a matter of scale and operation, you cannot hold everyone accountable or change what is being done in response to every measure. If you’re learning something that concerns you then dig in. Maybe you’ll change your plans. But when you do need to change your plans you can do so in the context of an overall framework, not just single data points.

The combination of a framework and telemetry makes it possible to more globally maximize your return on investment. Telemetry alone risks a more local optimization. A framework by itself is just guessing.

It might seem like doing all this is just too much busy-work. There is an investment to be made. Most of the effort, however, will involve “editing” or “culling” a list that was already too long or contained a lot of things not getting done. The most time-intensive work is creating the task list, which is often the most disliked or difficult to make concrete. Everyone seems to have a feeling for what work needs to be done but resists putting it out there. The essence of an accountable organization is taking the team through this framework and making it part of everyday work.

Just a small tool

There’s a payoff to all of this that is incredibly important. If you stop here all you’ve really done is document things going on. What is really important is that you assemble information so you have a system in place to deal with change — unexpected failures in the marketplace, gain or loss of people on the team, new opportunities, or demands from a customer. Maintaining this information in a simple tool provides the product manager with that.

There’s always too much to do, so by definition we know we are not doing everything we can to be successful. Do we know if everything we are doing is essential to the success we hope to achieve?

Seems like a simple question. In practice it is an exceedingly difficult question to answer at any given time, and an even more difficult question to answer over time as conditions change. In fact, I might argue it is close to impossible, and that the best measure of success is to view the efforts overall as a portfolio. Just like a portfolio, however, you need to spend time digging in to understand at a more granular level where you can do better. Failing to do so too often prevents ongoing evaluation of work to make sure it is really helping — if there is more work than can be done, the easy path is to just assume everything going on is helping. That’s not true though!

What is suggested below might be simplistic or you might believe you’re already doing all this and more. I’ve found that most projects, especially before product-market fit, can benefit from a more systematic view connecting work to projects and goals supporting the company mission.

My own experience is that this can be accomplished in a surprisingly straight-forward manner. Doing so illuminates the work of the team and provides a great tool for the shared and ongoing management of the team.

Let’s just assume we’re working with a spreadsheet, simply because we’re going to do some math with the data. Feel free of course to use any structured tool that supports features such as collaborative editing, group-by reports/pivot tables, filters, and some basic math. The specific tool isn’t important and should not be the source of the first debate on the team!

We tend to think of the task list as literally a listing of tasks such as Implement OAUTH, Add new chart type, Create sign-up response mail, etc. Simply getting all of these done can often be helpful enough. Most task lists will have a name associated with the work, and I’d always encourage a single name; said differently, define the task so it is the work of one person. In addition, include a column for the amount of time the task will take (0.5–2 days). For good measure I would suggest also including the dates to start/finish, as that allows you to use the spreadsheet to understand the relative schedule.

Then take the extra step of making sure the Project is clear. Is it for the iOS App? Does it support the SEO campaign? Is this the Beta program? Adding this label should be simple accounting, as projects are by definition distinct and represent a single release or customer/market-visible effort.

The next step is key: add one more column for the goal the work supports. Is this task about something like retention or scalability, for example? Avoid the temptation to think something applies to many goals; rather, force yourself to commit the work to supporting a single goal. Keep in mind you already know the goals, so all you are doing here is picking from the 3–5 established goals, which in turn are associated with metrics.

With that in hand you have something that looks like the hypothetical below. You can see a connection between tasks, projects, and goals.


Keep this sheet up to date and you’re able to be ahead of the project. As simple as this seems it is just as often either overlooked or buried in too much detail to be broadly useful.

What can you do with this? There are several key sorts/filters that are enormously useful:

  • Any given person can look at their own work and know where they are and where it fits in
  • At any time, one can see how much of the time (resource) overall is going to support a given project or goal
  • Know which tasks are too far out, too big
  • Know which resources are over-constrained

And so on. This tool is the right level of complexity for projects from 10 to 5000 people in my experience. While many would love more, such as dependency analysis or a task hierarchy, my experience is that this is where the tool begins to overwhelm. When the tool overwhelms it isn’t used, and then it doesn’t matter. (Quick note: when I first came to manage Microsoft Project I learned that the bulk of usage of the tool was not to track projects but to input a bunch of data at the start of a project just to produce a nice-looking, poster-sized Gantt chart printed once at the project start.)
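As a sketch, the roll-ups above amount to a few group-bys over the sheet. The task list here is entirely invented for illustration (a real team would keep this in a shared spreadsheet, but the arithmetic is identical):

```python
# Sketch of the roll-ups over a tiny, hypothetical task sheet with
# the columns described above: task, owner, days, project, goal.
from collections import defaultdict

tasks = [
    {"task": "Implement OAUTH",     "owner": "Ana", "days": 2.0, "project": "iOS App",        "goal": "acquisition"},
    {"task": "Add new chart type",  "owner": "Bo",  "days": 1.0, "project": "iOS App",        "goal": "ease of use"},
    {"task": "Create sign-up mail", "owner": "Ana", "days": 0.5, "project": "SEO campaign",   "goal": "acquisition"},
    {"task": "Shard user database", "owner": "Cy",  "days": 3.0, "project": "Infrastructure", "goal": "scale"},
]

# Honest accounting: how many days of resource go to each goal.
days_per_goal = defaultdict(float)
for t in tasks:
    days_per_goal[t["goal"]] += t["days"]

# Tasks that break the 0.5-to-2-day sizing rule: likely not yet
# understood well enough to schedule reliably.
too_big = [t["task"] for t in tasks if t["days"] > 2]

# Who is carrying the most scheduled work (over-constrained people).
days_per_owner = defaultdict(float)
for t in tasks:
    days_per_owner[t["owner"]] += t["days"]

print(dict(days_per_goal))  # {'acquisition': 2.5, 'ease of use': 1.0, 'scale': 3.0}
print(too_big)              # ['Shard user database']
```

The same handful of group-bys answers the bullets above: resource per project or goal, oversized tasks, and over-committed people.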

Managing projects is very difficult. Doing so while the bank account is draining and there’s a strong desire to keep moving is galactically more difficult. The fear of slowing everyone down with tools and processes is real and often justified. With so much riding on being efficient, effective, and focused, it is worth investing a small amount in managing the work so as to make better decisions about what work to do and what happens when you get new information from the market.

— Steven Sinofsky (@stevesi)

Special thanks to @ProductHunt’s Ryan Hoover who took my suggested framework of Vision, Mission, Strategy, Tactics and made it much more approachable terminology. 🙌

Written by Steven Sinofsky

October 12, 2015 at 10:30 am

Posted in posts


A Leader’s Guide To Deciding: What, When, and How To Decide

Decision-making is one of the most difficult skills to master as a manager. A startup CEO literally sees a constant stream of decisions to be made: from hiring and firing, to Android or iOS, all the way to Lack or Billy. As the company grows to 10–20 people (usually mostly engineering), the bonds and shared experiences continue to support decision-making at the micro-level. Once a team grows larger there is a need for management and delegation. While growth is a positive, it is also a stressful time for the company and founder/CEO.

Even for the tightest knit group with founding-team trust, the first decisions made without the CEO (or by simply giving a heads up to the CEO) are nail-biting moments for everyone. The closer the decision is to the core of the CEO experience the trickier. Deciding on UX changes or tweeting from the company account all test the ability for the CEO to delegate and the team to be delegated to.

As a company grows to having leaders and groups across different functions (heads of sales, marketing, engineering, operations), the need to be systematic in delegating and deciding goes from optional to mandatory. Many founders resist this move. First, founders rightfully care about everything — everything matters — and it is very likely he/she is in the best situation to make the best decision for the company. Second, founders who worked at a big company too often experienced dysfunctional decision-making, and part of building a new company is to do better than that.

As a manager, I always found deciding to be both trivially easy and impossibly difficult. It is easy because like many, if you ask for an opinion from me you can get one. It is difficult because like many, the implications of sharing that opinion are not really known at the time.

For me, knowing what a decision really is about, understanding the full context of a decision, even just awareness of what is actually being decided are a few of the many challenges. Add to that all the management-speak about delegation and accountability and quickly I came to conclude I could justify any course of action. I could be anything from the center of a command-and-control decision making machine to a completely non-accountable, human-router-delegator of important choices in the name of building an empowered team.

I wrote this post in an effort to create a framework for how to think about decisions from the vantage point of a CEO/founder/exec as an organization grows beyond when the “hub and spoke” is the expected norm, and how to think about the good and crisis times when it comes to decision-making.

Deciding with Dysfunction

Making decisions is easy on paper, but a lot more difficult in practice. Worse, the more one practices the harder it is to make decisions simply because of knowing more and seeing potential downsides of just about everything. One might think experience would make things easier, and often it does, but it can slow down or even just prevent things from happening. When I was young, I could decide to drop in on a 15’ vertical half-pipe without hesitation. That was before I knew the downsides of injury (it was also before YouTube so all we had to look at for examples were photos)!

As the team grows, CEOs (and execs, depending on company size) face many challenges in getting things done or just picking the right things to do. One of the main factors is the increasing number of people involved in any choices. Deciding iOS or Android first will seem easy or “obvious” when resources are very tight and there’s no Android developer anyway.

Fast forward and consider the complexity when there is a newly hired sales leader who knows “works anywhere you do” will close deals, an engineering manager who still can’t hire an Android lead, a QA manager who is all about fragmentation complexity, or a newly hired BD lead who can choose between two partnerships but the better logo requires both iOS and Android. All of a sudden there’s no simple decision to be made and a lot of people contributing who feel their success hinges on going with their point of view. Play out over time the various potential outcomes and who is promoted, gets more responsibility, or is credited with success to see that this challenge is far more than just getting the right answer. That is normal.

As a result of people living this pattern over and over, many decision-making techniques and tools have been created to codify a "decision-making process". In fact, there is a whole Wikipedia article describing the responsibility assignment matrix. These come with a variety of acronyms like OARP, RACI, ARCI, and more. The letters are various ways to define roles in important decisions such as reviewer/approver, recommends/consulted, responsible/accountable, and so on. These are tools designed to speed decision-making, clarify execution accountability, and ensure robust choices. Leaders figure out what to decide, assign the various roles to participants, and like magic decisions are made.

In my own experience these tools more often than not slow things down, make no one accountable, and all but guarantee compromises with which no one is satisfied. There are innumerable stories of how a well-intentioned taxonomy like the above can be used to grind work to a halt, push accountability around without establishing it, and turn a specific decision into a general battle over the meta. That’s unfortunate because making the right choices is incredibly important and having tools that add reliability and consistency to the effort would really help.

Rather than dissect a meta-process, I’d propose taking a step back and saying that the most important thing a CEO or executive must do is to keep the output and velocity of the team at the highest levels and focused on doing the right things. The first VP I worked for, way way back, hypothetically suggested “maybe a star could do the work of 3, or 5, or even 10 people” then he went on “the problem is we are doing work with 20, 40, or 50 people”. The implication of that math is clear. No matter how smart, hard-working, on top of things, or just amazing someone is, they need to work with other people and count on their work without actually doing it.

The core problem with a decision-making process starts with defining a decision and its related inputs. There is almost never all the information needed or wanted to make a choice, and if you wait long enough, chances are the context and options have changed enough to make the decision framing all wrong. So all that was really accomplished was to slow everything down simply by trying to define the decision to be made and what inputs were needed (apparently this also holds for the Federal Reserve Bank).

We know that almost nothing is made up of a single decision at a single point in time. Ask yourself how many discussions or meetings you have had where you thought there was a yes/no, or a choice to be made, and after running over time the only decision was to wait, learn more, or come up with new options. Only long after clear success or failure does the historical narrative turn into the "big decision" (and associated drama).

The essence of an agile organization is figuring out how to make fast progress against larger goals, learning and adjusting along the way. In that context, many at the company make decisions every day.

Stepping Up For Growth

With so much happening every day, rhythm is the biggest enabler of agility. If a team operates in a Flow, then everyone can do their best work. We all know what it is like to groove. We also know what it is like to have that groove interrupted by questioning the choices made or pausing to have choices validated, reconsidered, or tweaked.

Trying to get ahead of decisions sounds great. In practice it quickly turns into trying to know what isn't known and, worse, to prepare for it. Finding a way to codify the set of problems that will trigger a review or joint decision turns into a meta-planning exercise of trying to agree on potential challenges.

In my experience, the real challenge is in trying to define decisions, isolate important ones, and then figure out the stakeholders for that one instance. Decisions are everywhere. Deciders are all over the place. Every identifiable decision is made up of many other choices, constraints, and stakeholders.

The following is my own idealization of the role a CEO or exec can play in decisions. Instead of kicking off a process for a single decision, spend the energy up front to decide what and how the CEO will work. Thinking this way allows for a variety of approaches for getting involved, delegating, or empowering. It avoids the complexities of figuring out what it is that is actually being decided and hairsplitting accountabilities among individuals. It introduces agility into a process rather than rigidity.

My own experience led me to believe that before I decided something that was brought to me, I had to have an idea of what role I was playing in the process. It is easy to be a boss and assume people need your wisdom, advice, or context. It can be tricky to know if you are really adding that, affirming the choices of those who work for you, randomizing them, or simply doing their job for them.

With that experience, my view is that before deciding something a CEO or exec should be clear how he/she contributes to a project or work, using one of the following:

  • Initiator. Kicking off new projects.
  • Connector. Connecting people to others so the work gets better.
  • Amplifier. Amplifying the things that are working well or not so there is awareness of success and learning.
  • Editor. Fixing or changing things while they are being done.

Let’s look at each of these in a little more detail.


Initiator

As a practical matter, assuming the organization is running at > 100% capacity, there's a finite amount of initiating that can be done. Almost everything kicked off at the exec level looks like strategy: new projects, a new organization, or the prioritization of work. Creating or seizing new opportunities from the leadership vantage point is precisely what it means to be the initiator.

  • Examples: Kicking off something entirely new (hiring process, first sales deck, new product release, opening a new position). Deciding to make a big bet on a new technology or platform. Starting a new strategic or transformative initiative.
  • Tools: Write it down in a way that someone joining the project later can quickly get up to speed. I believe writing is thinking, so taking the time to gain clarity on what to do and what success looks like only helps. Pro tip: Drafts circulated quickly to a small set of key people, along with seeking and acting on feedback, can be very valuable.
  • Granularity: Almost always this will be fairly coarse, like "launch day" or "new product", but it can scale all the way down to specifics such as "add this feature", "create a partnership", or "publish APIs".
  • Time: When operating in a rhythm (product cycles, launches, adding new leads) initiating becomes a key part of team rhythm. About 10% of overall work is initiating new work or projects.
  • Mindset. The most important work of an exec is in kicking off a new project so put the most effort into this work product. Initiating is the only time an exec does the work of an individual contributor.


Connector

Almost all decision-making challenges happen across domains. Rarely does a solid engineering leader pick the wrong toolset; rather, the toolset is wrong because the full scope of the challenge was not known. Anything crossing functional lines requires a connection. The exec is in a unique position to be constantly connecting functions and work, helping people to know more about what is going on with others on the team, while recognizing this almost never happens by chance.

  • Examples: There are classic examples across functional lines such as connecting sales and product. Many of the most interesting examples are the non-traditional ones, which can help people to gain more experience such as connecting engineering to customers. Facilitated connections rather than simple introductions afford the chance to see how parties use the connection and to drive an agenda consistent with a rhythm.
  • Tools: Presence, listening and talking have no substitutes. Even with everyone being online and available, don’t assume everyone or the right people will read everything. With any size team, the amount of “FYI” or automated information will quickly overwhelm even the most diligent. Rather than believing status reports and scorecards will connect, execs must focus on making specific connecting efforts with clear intentions to move things along. Most work has natural points where checking in makes sense. Rather than relying on ad hoc connections, use a structured approach to surface the right issues.
  • Granularity: The reason connecting is fun is because when done well it is not randomizing or micro-managing, but simply sharing and getting ahead of potential problems. Connecting peers together is a service to the team.
  • Time: Spending more than half, perhaps 60%, of the time connecting is routine among CEOs and execs. A useful way to spend time is on structured, periodic checkpoints that connect the status of the current work to the start of the project. Pro tip: Most every interaction can serve as a chance to connect one part of the team to another or a CEO’s external context to what is going on internally while also using the opportunity to demonstrate listening while not trying to solve everything along the way.
  • Mindset. Connecting is the field work of the exec. Execs should be thinking about what to connect almost all of the time. Connecting skillfully is a force multiplier.


Amplifier

The CEO or exec soapbox is a super valuable tool. Within a company there are not only good things worth letting everyone know about, but lessons learned that could be shared. Since it is a given that not everything can be thought through in advance, seeing and amplifying the creative ideas or great execution along the way can serve as a proxy for initiating new things. When a CEO stands up in front of the team and celebrates success or learning, it just matters more.

  • Examples: The best example is to amplify the good work of the team or to turn failure into a lesson. Amplifying the good work of competitors, sales wins, or collaboration across functions gives leaders a chance to call out efforts that cross many parts of the company.
  • Tools: The team meeting is the most valuable forum for amplifying and often supporting this with something in writing (or a t-shirt or sticker!) works best. Amplifying work is also related to connecting work and often they are two sides of the same coin. The most compelling efforts to amplify are supported by great stories — when taking the time to amplify something, do so with the whole story and the background that makes it worth amplifying. It is one thing to tell people something important, but another to describe why it is important.
  • Granularity: Amplify things that apply to many people on the team but do so with a frequency that maintains meaning and impact. Pro tip: When celebrating success, execs should be sure to give the right credit to the right people.
  • Time: Execs have a unique ability to amplify success, which in turn makes this a high value effort so it is worth spending 10% of time.
  • Mindset. It is fair to view amplifying as a bit of a tax. As is often said in management, "what's worth saying is worth repeating". Ronald Reagan was famous for not veering off important messages because he believed that even though a message was old news for the press corps, it was new to those in the live audience.


Editor

In a big company, people just wish execs would go away and let them do their jobs. In a new startup, execs are doing the work (i.e., coding, selling specific accounts, designing the UX). Everywhere else, many people are doing the work and the exec is figuring out when to contribute. Somewhere between "swoop and poop" and "micro-managing" is the role of contributing editor. I use the term "editor" specifically because this is about changing (or improving) the work created by others. Editing is the right of a boss, but also a privilege in a team made up of smart and creative people. Everything about editing is in how it is done. Poorly done, editing makes people feel bad and never generates the best work (except for Steve Jobs, as many like to say). Done well, editing makes everyone feel better about the collective effort.

  • Examples: Redlining a document, drilling into a design review, redoing a positioning framework, creating an outline for a blog post or release, designing the 2×2 competitive framework are all examples of editing.
  • Tools: The most important "tool" to use when editing is to do so in person. The vast majority of editing feedback comes across as negative and critical, and while there is always enough bad work to go around, finding ways to be constructive is always worthwhile. Editing without verbal context, or simply being critical over email, almost always makes things worse.
  • Granularity: Chances are the granularity of editorial input is very fine-grained. The question is not really whether to edit or not, but how many times the same thing (even for the same person) is edited. Repetition means there’s a systematic problem with communication or the people doing the work.
  • Time: Editing is a big time commitment, but not the biggest. It also comes in fits and starts. More important than time is timing. The later in the process or the closer to the deadline the higher “cost” being an editor has. Overall about 20% of exec work is editing.
  • Mindset. This is the comfort zone for every exec, especially when it is the type of work he/she did individually. The important thing is for the exec to really know the importance of the changes.

With a framework of roles like this, there is clarity in how an exec is working with the team and how work is started, iterated, and completed. I would also encourage managers at all levels in a company to use this framework; only the time devoted to each type of interaction changes.

Diving In For Saves

At this point, you probably think this all sounds well and good, but what about when things are going sideways?

When things go sideways none of this matters.

In fact, none of this really matters if the company is at such an early stage that there are no more than a couple of dozen people. It also doesn't matter if there is not yet product-market fit.

In Ben Horowitz's The Hard Thing About Hard Things, he describes the differences between "peacetime" and "wartime" CEO styles (the specific post is here). I love his framing of decisions that pose existential questions for the company. That is a fantastic way to know when there really isn't a decision-making process other than whatever the CEO wants to do to survive another day. It is worth noting that exec-level choices are rarely existential at the same level.

In a startup, the wartime CEO pretty much describes things until there is a solid revenue and profit stream coming from a product that works for the market. Most everything amounts to an existential choice or decision.

In an organization that is more or less working, this is a more subtle challenge. At some point few decisions are truly existential, though it turns out it is tough to know this is the case while everything is happening.

There was a recent Facebook thread among a few of the original leaders of an old Microsoft product. From the outside, most people see only the victory and to read their comments one would think it was all a cakewalk. From inside the team, every day felt existential and every choice seemed to be about the success or failure of the whole. This is the norm at big companies and explains the origin of those questionable “decision support tools”.

This all boils down to product-market fit and deciding when in fact that state was achieved. Before that time, everything is existential. After that time, most decisions are relatively minor. Unfortunately, knowing this dividing line isn't so straightforward and the clouds of disruption always loom on the horizon. This is the fog of war in a big organization.

Operating all the time in crisis mode or thinking every choice is existential is just not sustainable as a company or leader. As fun as it seems, a whole yoga practice upside down is not a good idea.

As a company scales, the long tail of choices and decisions feels existential from within even if they aren’t. There’s no magic answer as to when one is at war or peace, even in hindsight. My view is that a CEO can decide the one thing no one else can, which is to decide things using a poorly defined process. Just keep in mind that even the most successful CEOs and execs can eventually use up that luxury and either adapt how they work or watch people leave the company.

The military, the original process-oriented organization, has evolved its thinking, and modern warfare now employs the notion of commander's intent as a leadership practice for just these reasons (and more). If you are interested in how the largest and most process-oriented organizations evolve, the report linked to is fascinating.

Even with the most robust product-market fit, there will be existential moments demanding wartime decisions: a major security breach, a service outage, a failed sale, or losing key members of the team. Don't ever worry about deciding in the most top-down, non-empowered, toe-stepping manner when facing a true crisis. That's what leadership is all about. Using the framework above, a crisis situation just demands a great deal more editorial action.

Making decisions is something that at first comes naturally, even easily, and then as a company grows the complexity creeps up on CEOs and execs. Some of this is because with experience comes awareness of all the things that can go wrong. Most comes from the challenges of scale and decisions of when and how to let go of some things. Decision making challenges come when an unlimited amount of passion meets a finite amount of time. Taking on this challenge without introducing a mindless bureaucracy is an opportunity for a growing company.

— Steven Sinofsky (@stevesi)

Note: Also simul-published this on @medium.

Written by Steven Sinofsky

September 10, 2015 at 10:59 am

Posted in posts

