Learning by Shipping

products, development, management…

Archive for October 2013

Thoughts on reviewing tech products

I’ve been surprised at the “feedback” I receive when I talk about products that compete with those made by Microsoft.  While I spent a lot of time there, one thing I learned was just how important it is to immerse yourself in competitive products to gain their perspective.  It helps in so many ways (see https://blog.learningbyshipping.com/2013/01/14/learning-from-competition/).

Dave Winer (@davewiner) wrote a thoughtful post on How the Times reviews tech today. As I reflected on the post, it seemed worth considering why this challenge might be unique to tech and how it relates to the use of competitive products.

When considering creative works, it takes about two hours to see a film, somewhat more for other productions, and even a day or two for a book.  After that you can collect your thoughts and analysis and offer a review.  Your accumulated experience with the art form is easily recalled and put to good use in a thoughtful review.

When talking about technology products, the same approach might hold for casually used services or content-consumption services.  In considering tools for “intellectual work,” as Winer described it (loved that phrase), things start to look significantly different.

Software tools for intellectual work are complex because they do complex things.  In order to accomplish something you need to first have something to accomplish, and then accomplish it.  It is akin to reviewing the latest cameras for making films or the latest cookware for making food.  While you can shoot a few frames or make a single meal, tools like these require many hours and different tasks.  You shouldn’t “try” them so much as “use” them for something that really matters.  Only then can you collect your thoughts and analysis.

Because tools of depth offer many paths and ways to use them, there is an implicit “model” to how they are used.  Models take time to adapt to.  A cinematographer who uses film shouldn’t judge a digital camera after a few test frames, and maybe not even after the first completed work.

The tools for writing, thinking, and creating that exist today present models for usage.  Whether it is a smartphone, a tablet, a “word processor,” or a photo editor, these devices and their accompanying software define models for usage that are sophisticated in how they are approached, the flow of control, and points of entry.  They are hard to use because they do hard things.

The fact that many of those who write reviews rely on an existing set of tools, software, and devices for their intellectual pursuits means that the conceptual models they know and love are baked into their perspective.  It means tools that come along and present a new way of working or of seeing the technology space must first find a way to get a clean perspective.

This, of course, is not possible.  One can’t unlearn something.  We all know that reviewers are professionals, and just as we expect a journalist covering national policy debates not to let bias show, tech reviewers must do the same.  This implicit “model bias” is much more difficult to overcome because it simply takes longer to see and use a product than it does to learn about and understand (but not necessarily practice) a point of view in a policy debate.  The tell-tale sign of “this review composed on the new…” is great, but we also know that right after the review the writer has the option of returning to their favorite way of working.

As an example, I recall the tremendous difficulty in the early days of graphical user interface word processors.  The incumbent, WordPerfect, was a character-based word processor that was the very definition of the category.  The one feature we heard about relentlessly was called reveal codes, a way of essentially seeing the formatting of the document as codes surrounding text (today we would think of it as something like HTML).  Word for Windows was a WYSIWYG word processor in Windows, so you just formatted things directly.  If text was bold on screen then it was implicitly surrounded by <B> and </B> (not literally, but conceptually those codes).

Reviewers (and customers) time and time again felt Word needed reveal codes.  That was the model for usage of a “word processor.”  It was an uphill battle to move the overall usage of the product to a new level of abstraction.  There were things that were more difficult in Word and many things that were much easier, but reveal codes was simply a model, not the answer to the challenges.  The tech world is seeing this again with the rise of new productivity tools such as Quip, Box Notes, Evernote, and more.  They don’t do the same things, and they do many things differently.  They have different models for usage.

At the business level this is the chasm challenge for new products.  But at the reviewer level this is a challenge because it simply takes time to either understand or appreciate a new product.  Not every new product, or even most, successfully changes the rules of the predecessor.  But some do.  The initial reaction to the iPhone’s lack of a keyboard, or even its de-emphasis of voice calls, shows how quickly everyone jumped to the then-current definition of a smartphone as the evaluation criteria.

Unfortunately, all of this is incompatible with the news cycle for the onslaught of new products, or with the desire to have a collective judgment by the time the launch event is over (or even before it starts).

This is a difficult proposition.  It starts to sound like blaming politicians for not discussing the issues, or blaming the networks for airing too much reality TV.  Isn’t it just as much what people will click through as it is what reviewers would write about?  Would anyone be interested in reading a Samsung review, or pulling up another iOS 7 review, after the 8 weeks of usage that the product deserves?

The focus on youth and new users as the baseline for reviews exists simply because they do not have the “baggage” or “legacy” when it comes to appreciating a new product.  The disconnect we see in excitement and usage is because new-to-the-category users do not need to spend time mapping their model; they just dive in and start to use something for what it was supposed to do.  Youth simply represents a target audience of early adopters and the fastest path to crossing the chasm.

Here are a few things on my to-do list for how to evaluate a new product.  The reason I use things for a long time is that, in a world with so many different models, it takes sustained use before you can judge a product on its own terms.

  1. Use defaults. Quite often when you first approach a product you want to immediately customize it to make it seem like what you’re familiar with.  While many products allow customization, stick with the defaults as long as possible.  Don’t like where the browser launch button is?  Leave it there anyway.  There’s almost always a reason.  I find the changes in the default layout from iOS 6 to iOS 7 interesting enough to see what the shift in priorities means for how you use the product.
  2. Don’t fight the system.  When using a new product, if something seems hard that used to seem easy, take a deep breath and decide it probably isn’t the way the product was meant to do that thing.  It might even mean that the thing you’re trying to do isn’t necessarily something you need to do with the new product.  In DOS WordPerfect, people would use tables to create columns of text.  But Word had a columns feature, and using a table for a newsletter layout was not the best way to do that.  Sure, there needed to be “Help” to find this, but then again someone had to figure that out in WordPerfect too.
  3. Don’t jump to doing the complex task you already figured out in the old tool.  Often, as a torture test, upon first look at a product you might try to do the thing you know is very difficult: that side-by-side chart, reducing overexposed highlights, or some complex formatting.  Your natural tendency will be to use the same model and steps to figure it out.  I got used to one complicated way of using levels to fix underexposed faces in photos and completely missed the “fill flash” command in a photo editor.
  4. Don’t do things the way you are used to.  Related to this is the tendency to use a new device the way you used the old one.  For example, you might be used to going to the camera app, taking a picture, and then choosing email.  But the new phone “prefers” that you be in email and insert an image (new or just taken) into a message.  It might seem inconvenient (or even wrong) at first, but over time this difference will go away.  This is just like learning a new gear-shift pattern, or perhaps the layout of a new grocery store.
  5. Don’t assume the designers were dumb and missed the obvious. Often connected to trying to do something the way you are used to is the reality that something might just seem impossible, and thus the designers obviously missed something, or worse.  There is always a (good) chance something is poorly done or missing, but that shouldn’t be the first conclusion.

But most of all, give it time.  It often takes 4–8 weeks to really adjust to a new system, and the more expert you are the more time it takes.  I’ve been using Macs on and off since before the product was released to the public, but even today it has taken me the better part of six months to feel “native.”  It took me about three months of Android usage before I stopped thinking like an iPhone user.  You might say I am too set in my ways, or you might conclude it really does take a long time to appreciate a design for what it is supposed to do.  I chuckle at the things that used to frustrate me and think about how silly my concerns were at day 0, day 7, and even day 30: where the volume button was, the charger orientation, the way the PIN worked, going backwards, and more.

–Steven Sinofsky

Written by Steven Sinofsky

October 29, 2013 at 12:00 pm

Posted in posts


On the exploitation of APIs

LinkedIn engineer Martin Kleppmann wrote a wonderful post detailing the magical and thoughtful engineering behind the new LinkedIn Intro iOS app.  I was literally verklempt reading the post, thinking about all those nights trying different things until he (and the team) ultimately achieved what he set out to do, what his management hoped he would do, and what folks at LinkedIn felt would be great for LinkedIn customers.

The internet has done what the internet does, which is to unleash indignation upon Martin and LinkedIn, and thus the cycle begins.  The post was updated with caveats and disclaimers.  It is now riding atop Techmeme.  Privacy.  Security.  Etc.

Whether those concerns are legitimate or not (after all, this is a massive public company built on the trust of a network), the reality is that this app points out a longstanding architectural challenge in API design.  Modern operating systems (iOS, Android, Windows RT, and more) have inherent advantages over the PC-era operating systems (OS X, Windows, Linux) when it comes to maintaining the integrity of the system as designed.  Yet we’re not done innovating around this challenge.


I remember my very first exploit.  I figured out how to use a disk sector editor on CP/M and modified the operating system to remove the file delete command, ERA.  I managed to do this by just nulling out the “ERA” string in what appeared to me to be the command table.  I was so proud of myself that I (attempted to) show my father my success.

The folks that put the command table there were just solving a problem.  It was not an API to CP/M, or was it?  The sector editor was really a tool for recovering information from defective floppies, or was it?  My goal was to make a floppy with WordStar on it that I could give to my father to use but would be safe from him accidentally deleting a file.  My intention was good.  I used information and tools available to me in ways that the system architects clearly did not intend.  I stood on the top step of a ladder.  I used a screwdriver as a pry bar.  I used a wrench as a hammer.
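In modern terms, that patch amounts to a byte-level search-and-null over a raw disk image.  Here is a sketch of the idea in Python; the assumption that the command name sits in the image as a plain ASCII string is exactly the guess the sector editor let me make back then, and the function name is mine:

```python
def null_out_command(image: bytes, command: bytes = b"ERA") -> bytes:
    """Return a copy of a raw disk image with every occurrence of
    `command` overwritten with null bytes -- the moral equivalent of
    zeroing the ERA entry in the CP/M command table with a sector editor."""
    patched = bytearray(image)
    start = 0
    while True:
        idx = patched.find(command, start)
        if idx == -1:
            break  # no more occurrences
        patched[idx:idx + len(command)] = b"\x00" * len(command)
        start = idx + len(command)
    return bytes(patched)
```

Running it over a toy “command table” like `b"DIR\x00ERA\x00TYPE\x00"` leaves every other command intact while the delete command simply disappears, which was the whole point: the floppy still worked, but my father couldn’t accidentally delete a file.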

The history of the PC architecture is filled with examples of APIs exposed for one purpose and put to use for another.  In fact, the power of the PC platform is a result of inventors bringing technology to market with one purpose in mind and then seeing it get used for other purposes.  Whether hardware or software, unintended uses of extensibility have come to define the flexibility, utility, and durability of the PC architecture.  There are so many examples: the first terminate-and-stay-resident programs in MS-DOS, the Z80 SoftCard for the Apple ][, drawing low-voltage power from USB to power a coffee warmer, all the way to that favorite shell extension in Windows or the OS X extension that adds a missing feature to Finder.

These are easily described and high-level uses of extensibility.  Your everyday computing experience is literally filled with uses of underlying extensibility that were not foreseen by the original designers. In fact, I would go as far as to say that if computers and software were only allowed to do things that the original designers intended, computing would be particularly boring.

Yet it would also be free of viruses, malware, DLL hell, system rot, and TV commercials promising to make your PC faster.

Take, for example, the role of extensibility in email, and in Outlook in particular.  The original design for Outlook had a wonderful API that enabled one to create an add-in that would automate routine tasks in Outlook.  You could, for example, have a program that would automatically send a notification email to the appropriate contacts based on some action you took.  You could also receive useful email attachments that could streamline tasks just by opening them (for example, before we all had a PDF reader it was very common to receive an executable that, when opened, would self-extract a document along with a viewer).  These became a huge part of the value of the platform and an important part of the utility of the PC in the workplace at the time.

Then one day in 1999 we all (literally) received email from our friend Melissa.  This was a virus that spread by using these same APIs for an obviously terrible purpose.  What this code did was nothing different from what all those add-ins did, but it did it at internet scale, to everyone, in an unsuspecting way.

Thus was born the age of “consent” on PCs.  When you think about all those messages you see today (“use your location,” “change your default,” “access your address book”) you see the direct descendants of Melissa.  A follow-on virus professed broad love for all of us: ILOVEYOU.  From that came the (perceived) draconian steps of simply disabling much of the extensibility/utility described above.

What else could be done?  A ladder is always going to have a top step–some people will step on it.  The vast majority will get work done and be fine.

From my perspective, it doesn’t matter how one perceives something on a spectrum from good to “bad”; the challenge is that APIs get used for many different things and developers are always going to push the limits of what they can do.  LinkedIn Intro is not a virus.  It is not a tool to invade your privacy.  It is simply a clever use (née hack) of existing extensibility in new ways.  There’s no defense against this.  The system was not poorly designed.  Even though there was no intent to do what Intro did when those services were designed, there is simply no way to prevent clever uses any more than you can prevent me from using my screwdriver as a pry bar.

Modern example

I wanted to offer a modern example that for me sums up the exploitation of APIs and also how challenging this problem is.

On Android, an app can add one or more sharing targets.  In fact, the Android APIs were improved release after release to make this easier, and now it is simply a declarative step of a couple of lines of XML and some code.
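For context, the declarative step is roughly an intent filter in the app’s manifest; registering an activity for ACTION_SEND is what adds an entry to the system share list (the activity name here is illustrative, not from any real app):

```xml
<!-- AndroidManifest.xml fragment: declaring a share target.
     Any activity with this filter shows up in the system share list. -->
<activity android:name=".ShareTargetActivity">
    <intent-filter>
        <action android:name="android.intent.action.SEND" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:mimeType="text/plain" />
    </intent-filter>
</activity>
```

Because the barrier is this low, every app can add itself (or several variants of itself) to the share list with no gatekeeping, which is exactly how the list grows the way described below.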

As a result, many Play apps add several share targets.  I installed a printing app that added 4 different ways to share (Share link, share to Chrome, share email, share over Bluetooth).  All of these seemed perfectly legitimate and I’m sure the designers thought they were just making their product easier to use.  Obviously, I must want to use the functionality since I went to the Play store, downloaded it and everything.  I bet the folks that designed this are quite proud of how many taps they saved for these key scenarios.

After 20 apps, my share list is crazy.  Of course, sharing to Twitter now involves a lot of scrolling because the list is alphabetical.  Lucky for me, the Messages app bubbles up the most recent target to a shortcut in the action bar.  But that seems a bit like a kludge.

Then along comes Andmade Share.  It is another Play app that lets me customize the share list and remove things.  Phew.  Except now I am the manager of a sharing list and every time I install an app I have to go and “fix” my share target list.

Ironically, the Andmade app uses almost precisely the same extensibility to manage the sharing list as is used to pollute it.  So hypothetically restricting/disabling the ability of apps to add share targets also prevents this utility from working.

The system could also be much more rigorous about what can be added.  For example, apps could only add a single share target (Windows 8) or the OS could just not allow apps to add more (essentially iOS).  But 99% of uses are legitimate.  All are harmless.  So even in “modern” times with modern software, the API surface area can be exploited and lead to a degraded user experience even if that experience degrades in a relatively benign way.

Anyone who has ever complained about startup programs or shell extensions is just seeing the results of developers using extensibility.  Whether it is used or abused is a matter of perspective.  Whether it degrades the overall system depends on many factors and also on perspective (since every benefit has a potential cost; if you benefit from a feature then you’re OK with the cost).


There will be calls to remove the app from the App Store.  Sure, that can be done.  Steps will be taken to close off extensibility mechanisms that got used in ways far from the intended usage patterns.  There will be costs and unintended side effects of those actions.  Realistically, what was done by LinkedIn (or in a myriad of other examples) was done with the best of intentions (and a lot of hard work).  Realistically, what was done was exploiting the extensibility of the system in a way never considered by the designers (or most users).

This leads to 5 realities of system design:

  1. Everything is an API.  Every bit of a system is an API.  From the layout of files, to the places settings are stored, to actual published APIs, everything in a system as released serves as an interface to people who want to extend, customize, or modify your work. Services don’t escape this just because the code runs in the cloud behind REST APIs.  Reverse-engineering packets or scraping HTML is no different; the HTML used by a site can come to be relied on essentially as an API.  The Windows registry is just a place to store stuff; the fact that people went in and modified it outside the intended parameters is what caused problems, not the existence of a place to store stuff.  Cookies?  Just a mechanism.

  2. APIs can’t tell you the full intent.  APIs are simply tools.  The documentation and examples show you the mainstream or an intended use of an API.  But they don’t tell you all the intended uses, or even the limits of using an API.  As a platform provider, falling back on documentation is fairly impossible, considering both the history of software platforms (much of the success of a platform comes from people using it in creative ways) and the reality that no one could read all the documentation that would have to explain every use of a single API when there are literally tens of thousands of extensibility points (plus all the undocumented ones; see #1).

  3. Once discovered, any clever use of an API will be replicated by many actors, for good or not.  Once one developer finds a way to get something done through a clever mechanism of extensibility, if there’s value to it then others will follow.  If one share target is good, then having 5 must be 5 times better.  The system, through some means, will ultimately need to find a way to control the very way extensibility or APIs are used.  Whether this happens through policy or code is a matter of choice.  We haven’t seen the last “Intro,” at least until some action is taken for iOS.

  4. Platform providers carry the burden of maintaining APIs over time.  Since the vast majority of actors are doing valuable things, you maintain an API or extensibility point; that’s what constitutes a platform promise.  Some of your APIs are “undocumented” but end up being conventions or just happenstance.  When you produce a platform, try as hard as you want to define what is the official platform and what isn’t, but your implied promise is ultimately to maintain the integrity of everything overall.

  5. Using extensibility will produce good and bad results, but what is good and bad depends highly on the context.  It might seem easy to judge something broadly on the internet as good or bad.  In reality, the user downloaded the app and opted in.  What should you really warn about, and how?  To me this seems remarkably difficult.  I am not sure we’re in a better place when every action on my modern device comes with a potential warning message or a choice from a very long list I need to manage.

We’re not there yet collectively as an industry on balancing the extensibility of platforms and the desire for safety, security, performance, predictability, and more.  Modern platforms are a huge step in a better direction.

Let’s be careful collectively about how we move forward when faced with a pattern we’re all familiar with.


Update (28-10-13): Fixed a couple of typos.

Written by Steven Sinofsky

October 25, 2013 at 9:30 am

Posted in posts


Coding through silos (5 tips on sharing code)

We are trying to change a culture of compartmentalized, start-from-scratch style development here. I’m curious if there are any good examples of Enterprise “Open Source” that we can learn from.

—Question from reader with a strong history in engineering management

When starting a new product line or dealing with multiple existing products, there’s always a question about how to share code. Even the most ardent open source developers know the challenges of sharing code: it is easy to pick up a library of “done” code, not so hard to share something that you can snapshot, but remarkably difficult to share code that is moving at a high velocity like your own work.

Developers love to talk about sharing code probably much more than they love to share code in practice.  Yet, sharing code happens all the time—everyone uses an OS, web server, programming languages, and more that are all shared code.  Where it gets tricky is when the shared code is an integral part of the product you’re developing.  That’s when shared code goes from “fastest way to get moving” to “a potential (difficult) constraint” or to “likely a critical path”.  Ironically, this is usually more true inside of a single company where one team needs to “depend” on another team for shared code than it is on developers sharing code from outside the company.

Organizationally, sharing code takes on varying degrees of difficulty depending on the “org distance” between developers.  For example, two developers working for the same manager don’t even think about “sharing code” as much as they think about “working together”.  At the other end of the spectrum, developers on different products with different code bases (perhaps started at different times with early thoughts that the products were unrelated or maybe one code base was acquired) think naturally about shipping their code base and working on their product first and foremost.

This latter case is often viewed as an organizational silo—a team of engineering, testing, product, operations, design, and perhaps even separate marketing or P&L responsibility.  This might be the preferred org design (focus on business agility) or it might be because of intrinsic org structures (like geography, history, leadership approach).  The larger these types of organizations the more the “needs of the org” tend to trump the “needs of the code”.

Let’s assume everyone is well-meaning and would share code, but it just isn’t happening organically.  What are 5 things the team overall can do?

  1. Ship together. The most straight-forward attribute two teams can modify in order to effectively share code is to have a release/ship schedule that is aligned.  Sharing code is the most difficult when one team is locked down and the other team is just getting started.  Things get progressively easier the closer to aligned each team becomes.  Even on very short cycles of 30-60 days, the difference in mindset about what code can change and how can quickly grow to be a share-stopper. Even when creating a new product alongside an existing product, picking a scheduling milestone that is aligned can be remarkably helpful in encouraging sharing rather than a “new product silo” which only digs a future hole that will need to be filled.

  2. Organize together to engineer together.  If you’re looking at trying to share code across engineering organizations separated by an org distance that involves general management, revenue or P&L, or different products, then there’s an opportunity to use organizational approaches to share code.  When one engineering manager can look at a shared-code challenge across all of his/her responsibilities, there is more of a chance that an engineering leader will see it as an opportunity rather than a tax/burden.  The dialog about the efficacy or reality of sharing code then does not span managers or, importantly, disciplines, and the resulting accountability rests within straight-forward engineering functions.  This approach has limits (the graph theory of org size, as well as the challenges of organizing substantially different products together).

  3. Allocate resources for sharing.  A large organization that has enough resources to duplicate code turns out to be the biggest barrier to sharing code.  If there’s a desire to share code, especially if this means re-architecting something that works (to replace it with some shared code, presumably with a mutual benefit) then the larger team has a built-in mechanism to avoid the shared code tax.  As painful as it sounds, the most straight-forward approach to addressing this challenge is to allocate resources such that a team doesn’t really have the option to just duplicate code.  This approach often works best when combined with organizing together, since one engineering manager can simply load balance the projects more effectively.  But even across silos, careful attention (and transparency) to how engineering resources are spent will often make this approach attainable.

  4. Establish provider/consumer relationships.  Often shared code can look like a “shared code library” that needs to be developed.  It is quite common and can be quite effective to form a separate team, a provider, that exists entirely to provide code to other parts of the company, a consumer. The consumer team will tend to look at the provider team as an extension to their team and all can work well.  On the other hand, there are almost always multiple consumers (otherwise the code isn’t really shared) and then the challenges of which team to serve and when (and where requirements might come from) all surface.  Groups dedicated to being the producers of shared code can work, but they can quickly take on the characteristics of yet another silo in the company. Resource allocation and schedules are often quite challenging with a priori shared code groups.

  5. Avoid the technical buzz-saw. Developers given a goal to share code and a desire to avoid doing so will often resort to a drawn-out analysis phase of the code and/or team.  This will be thoughtful and high-integrity.  But one person’s approach to being thorough can look to another like a delay or avoidance tactic.  No matter how genuine the analysis might be, it can come across as a technical buzz-saw making all but the most idealized code sharing impossible. My own experience has been that simply avoiding this process is best; a bake-off or ongoing suitability-to-task discussion will only drive a wedge between teams. At some level, sharing code is a leap of faith that a lot of folks need to take: when it works everyone is happy, and if it doesn’t there’s a good chance someone will say “told you so.” Most every bet one makes in engineering has skeptics.  Spending some effort to hear out the skeptics is critical.  A winners/losers process is almost always a negative for all involved.

The common thread about all of these is that they all seem impossible at first.  As with any initiative, there’s a non-zero cost to obtaining goals that require behavior change.  If sharing code is important and not happening, there’s a good chance you’re working against some of the existing constraints in the approach. Smart and empowered teams act with the best intentions to balance a seemingly endless set of inbound issues and constraints, and shared code might just be one of those things that doesn’t make the cut.

Keeping in mind that at any given time an engineering organization is probably overloaded and at capacity just getting stuff done, there’s not a lot of room to just overlay new goals.

Sharing code is like sharing any other aspect of a larger team—from best practices in tools, engineering approaches, team management—things don’t happen organically unless there’s a uniform benefit across teams.  The role of management is to put in place the right constraints that benefit the overall goals without compromising other goals.  This effort requires ongoing monitoring and feedback to make sure the right balance is achieved.

For those interested in some history, there is a Harvard Business School case (paid article) on the very early Office team and the challenges/questions around organizing around a set of related products (hint: this only seems relatively straight-forward in hindsight).


Written by Steven Sinofsky

October 20, 2013 at 4:00 pm

Disruption and woulda, coulda, shoulda

With the latest pivot for BlackBerry, much has been said about disruption and what it can do to companies. The story, Inside the fall of BlackBerry: How the smartphone inventor failed to adapt, by Sean Silcoff, Jacquie McNish and Steve Ladurantaye in The Globe and Mail is a wonderful account.

Disruption has a couple of characteristics that make it fun to talk about.  While it is happening, even with a chorus of people claiming it is happening, it is actually very difficult to see.  After it has happened, the chorus of “told you so” grows even louder and more matter-of-fact, and everyone has a view of what could have been done to “prevent” disruption.  Finally, descriptions of disruption tend to lose all the details leading up to the failure, as things get characterized at the broad company level or reduced to a single attribute (keyboard v. touch) when the situation is far more complex.  Those nuances are what product folks deal with day to day, and where all the learning can be found.

Like many challenges in business, there’s no easy solution and no pattern to follow.  The decision moments, technology changes, and business realities are all happening to people that have the same skills and backgrounds as the chorus, but the real-world constraints of actually doing something about them.

The case of BlackBerry is interesting because the breadth of disruptive forces is so great.  It is not likely that a case like this will be seen again for a while: a case where a company gains such an incredible position of strength in technology and business over a relatively short time and then sees it essentially erased in an equally short time.

I loved my Blackberry.  The first time I used one was before they were released (because there was integration with Outlook I was lucky enough to be using one sometime in 1998—I even read the entire DOJ filing against Microsoft on one while stopped on the tarmac at JFK).  Using the original 850 was a moment when you immediately felt propelled into the future.  Using one felt like the first time I saw a graphical interface (Alto) or a GPS.  Upon using one you just knew our technology lives would be different.

What went wrong is almost exactly the opposite of what went right and that’s what makes this such an interesting story and unbelievably difficult challenge for those involved.  Even today I look at what went on and think of how galactic the challenges were for that amazing group of people that transported us all to the future with one product.


When you build a product you make a lot of assumptions about the state of the art of technology, the best business practices, and potential customer usage/behavior.  Any new product that is even a little bit revolutionary makes these choices at an instinctual level—no matter what news stories you read about research or surveys or whatever, I think we all know that there’s a certain gut feeling that comes into play.

This is especially the case for products that change our collective world view.

Whether made deliberately or not, these assumptions play a crucial role in how a product evolves over time. I’ve never seen a new product developed where the folks wrote down a long list of assumptions.  I wouldn’t even know where to start—so many of them are not even thought through and represent just an engineer’s or product manager’s “state of the art”, “best practice”, or “this is what I know”.

It turns out these assumptions, implicit or explicit, become your competitive advantage and allow you to take the market by storm.

But then along come technology advances, business model changes, or new customer behaviors and seemingly overnight your assumptions are invalidated.

In a relatively simple product (note, no product is simple to the folks making it) these assumptions might all be within the domain.  Christensen famously studied the early days of the disk drive industry.  To many of us these assumptions are all contained within one system or component and it is hard to see how disruption could take hold.  Fast forward and we just assume solid-state storage, yet even this transition, as obvious as it is to us, requires a whole new world view for people who engineer spinning disks.

In a complex product like the entirety of the Blackberry experience there are assumptions that cross hardware, software, communications networks, channel relationships, business models and more.  When you bring all these together into a single picture one realizes the enormity of what was accomplished.

It is instructive to consider the many assumptions or ingredients of Blackberry success that go beyond the popular “keyboard v. touch”.  In thinking about my own experience with the product, the following lists just a few things that were essentially revisited by the iPhone, from the perspective of the Blackberry device/team:

  • Keyboard to touch.  The most visible difference and most easily debated is this change.  From crackberry thumbs to contests over who could type faster, your keyboard was clearly a major innovation. The move to touch would challenge you in technology, behavior, and more.
  • Small (b&w) screens to large color.  Closely connected with the shift to touch was a change in perspective that consuming information on a bigger screen would trump the use of the real estate for (arguably) more efficient input.  Your whole notion of industrial design, supply chain, OS, and more would be challenged.  As an aside, the power consumption of large screens immediately seemed like a non-starter to a team insanely focused on battery life.
  • GPRS to 3G then LTE. Your heritage in radios, starting with the pager network, placed a premium on using the lowest power/bandwidth radio and focusing on efficiency therein.  The iPhone, while 2G early, quickly turned around a game-changing 3G device.  You had been almost dragged into using the newer, higher-powered radios because your focus had been to treat radio usage as a premium resource.
  • Minimize bandwidth to assume bandwidth is free.  Your focus on reducing bytes over the wire was met with a device that just assumed bytes would be “free” or at least easily purchased.  Many of the early comments on the iPhone focused on this but few assumed the way the communications companies would respond to an appetite for bandwidth.  Imagine thinking how sloppy the iPhone was with bandwidth usage and how fast the battery would drain.  Assuming a specific resource is high cost is often a path to disruption when someone makes a different assumption.
  • No general web support v. general web support.  Despite demand, the Blackberry avoided offering generalized web browsing support.  The partnership with carriers also precluded this given their concern about network responsiveness and capacity.  Again, few would have assumed a network buildout that would support mobile browsing the way it does today.  The disruptor had the advantage of growing slowly (relatively) compared to flipping a switch on a giant installed base.
  • WiFi as “present” to nearly ubiquitous.  The physics of WiFi coverage (along with power consumption, chip surface area and more) assumed WiFi would be expensive and hard to find.  Even with whole-city WiFi projects in the early 2000s, people didn’t see WiFi as a big part of the solution.  Few thought about the presence of WiFi at home and new usage scenarios, or that every urban setting, hotel, airport, and more would have WiFi.  Even the carriers built out WiFi to offload traffic and include it for free in their plans.  The elegant and seamless integration of WiFi on the iPhone became a quick advantage.
  • Device update/management by tethering to over the air.  Blackberry required tethering for some routine operations and for many the only way to integrate corporate mail was to keep a PC running all the time. The PC was an integral part of the Blackberry experience for many. While the iPhone was tethered for music and videos, the presence of WiFi and a march towards PC-free experiences was an early assumption in the architecture that just took time to play out.
  • Business to consumer. Your Blackberry was clearly a business device.  Through much of the period of high success consumers flocked to devices like the SideKick.  While there was some consumer success, you anchored in business scenarios from Exchange and Notes integration to network security.  The iPhone comes along and out of the gate is aimed at consumers with a camera, MMS, and more.  This disruption hits at the hardware, the software, the service integration, and even how the device is sold at carriers.
  • Data center based service to broad set of cloud based services.  Your connection to the enterprise was anchored in a server that businesses operated.  This was a significant business upside as well as a key part of the value proposition for business. This server became a source for valuable business information propagated to the Blackberry (rather than using the web).  The absence of an iPhone server seemed like a huge opportunity, yet in fact it turned into an asset in terms of spreading the device.  Instead the iPhone relied on the web (and subsequently apps) to deliver services rather than programmed and curated services.
  • Deep channel partnership/revenue sharing to somewhat tense relationship.  By most accounts, your Blackberry business was an incredible win-win with telcos around the world.  Story after story talked of the amazing partnerships between carriers and Blackberry.  At the same time, stories (and blame game) between Apple and AT&T in the US became somewhat legendary.  Yet even with this tension, the iPhone was bringing very valuable customers to AT&T and unseating Blackberry customers.
  • Ubiquitous channel presence to exclusives. Your global partnership strength was unmatched and yet disrupted. The iPhone launched with single carriers in limited markets, on purpose.  Many viewed that as a liability, including Blackberry.  Yet in hindsight this only increased the value to the selected partners and created demand from other potential partners (even with the tension).
  • Revenue sharing to data plan.  One of the main assets that was mostly invisible to consumers was the revenue to Blackberry for each device on the network.  This was because Blackberry was running a secure email service as a major anchor of the offering. Most thought no one was going to give up this revenue, including the carriers’ ability to up-charge for your Blackberry. Few saw a transition to a heavily subsidized business model with high-priced data plans purchased by consumers.

These are just a few and any one of these is probably debatable. The point is really the breadth of changes the iPhone introduced to the Blackberry offering and roadmap.  Some of these are assumptions about the technology, some about the business model, some about the ecosystem, some about physics even!

Imagine you’ve just changed the world and everything you did to change the world—your entire world view—has been changed by a new product.  Now imagine that the new product is not universally applauded, and many folks not only say your product is better and more useful, but also that the new product is simply inferior.

Put yourself in those shoes…


Disruption happens when a new product comes along and changes the underlying assumptions of the incumbent, as we all know.

Incumbent products and businesses often respond by downplaying the impact of a particular feature or offering.  And more often than folks might notice, disruption doesn’t happen so easily.  In practice, established businesses and products can withstand a few perturbations to their offering.  Products can be rearchitected. Prices can be changed.  Features can be added.

What happens, though, when nearly every assumption is challenged?  What you see is a complete redefinition of your entire company.  Seeing this happen in real time is hard, and acknowledging it is even harder.  Even in the case of Blackberry there was a time window of perhaps two years to respond—is that really enough time to re-engineer everything about your product, company, and business?

One way to look at this case is that disruption rarely happens from a single vector or attribute, even though the chorus might claim X disrupts Y because of price or a single feature, for example.  We can see this in the case of something like desktop Linux—being lower priced/open source are interesting attributes but it is fair to say that disruption never really happened to the degree that might have been claimed early on.

However, if you look at Linux in the data center the combination of using Linux for proprietary data center architectures and services combined with the benefit of open source/low price brought with it a much more powerful disruptive capability.

One might take away from this case and other examples that the disruption to watch out for most is the one that combines multiple elements of the traditional marketing mix of product, price, place, and promotion. When considering these dimensions, it is also worth understanding the full breadth of assumptions, both implicit and explicit, in your product and business when defending against disruption. Likewise, if you’re intending to disrupt, you want to consider the multiple dimensions of your approach in order to bypass the intrinsic defenses of incumbents.

It is not difficult to talk about disruption in our industry.  As product and business leaders it is instructive to dive into a case of disruption and consider not just all the factors that contributed but how you would respond personally.  Could you really lead a team through the process of creating a product that literally inverted almost every business and technology assumption that created $80B or so in market cap over a 10-year period?

In The Sun Also Rises, Hemingway wrote:

“How did you go bankrupt?” “Two ways. Gradually, then suddenly.”

That is how disruption happens.

—Steven Sinofsky

Written by Steven Sinofsky

October 3, 2013 at 9:00 am

Posted in posts

