Learning by Shipping

products, development, management…


Realities of Performance Appraisal

Much has been written recently about performance ratings and management at some large and successful companies. Amazon has surfaced as a company implementing OLRs, organization and leadership reviews, which target the least effective 10% of an organization for appropriate action. Yahoo recently implemented QPRs, quarterly performance reviews, which rate people as "misses" or "occasionally misses" among other ratings. And just so we don't think this is something unique to tech, every year about this time Wall Street firms begin the annual bonus process, which is filled with any number of legendary dysfunctions given the massive sums of money in play. Even the Air Force has a famously elaborate process for feedback and appraisal.

This essay looks at the challenges of performance review in a large organization. The primary purpose is to convey that designing and implementing a system for such an incredibly sensitive topic is a monumental challenge even when viewed in isolation. If you overlay the environment of an organization (stock price, public perception, revenue or profit, local competition for talent, etc.) then any system can seem anywhere from tyrannical to fair to kick-ass for some period of time, and then swing the other way when the context changes. For as much as we think of performance management as numeric and thus perfectly quantifiable, it is as much a product of context and social science as the products we design and develop. We want quantitative certainty and simplicity, but context is crucial, fluid, and qualitative. We desire fairness, which is relative, but insist on truth, which is absolute.

While there is an endless quest for simplicity, much as with airline tickets, car prices, or tax codes, it is naive to believe that simplicity can truly be achieved or maintained over time. The challenge doesn't change the universally shared goal of simplicity (believe me, HR people do not like complex systems any more than everyone else does), but as a practical matter such purity is unattainable. Therefore comparing any system to some ideal ("flat tax", "fixed pricing") only serves to widen the gap between desire and implementation, and thus increases the frustration and even fear of a system.

If this topic were simple there would not be over 25,000 books listed on Amazon's US book site for the query "performance review". Worse, the top-selling books are about how to write your review, game the system, impress your boss, or tell employees they are doing well when they really aren't. You get pretty far down the list before you find books that actually try to define a system, and even then those books are filled with caveats. My own view is that the best book on the topic is Measuring and Managing Performance in Organizations. It is not about the perfect review system but about the traps and pitfalls of measuring things in general. I love this book because it is a reminder of everything you know about measuring, from "measure twice, cut once" to "measure what you can change" to "if you measure something it goes up" and so on.

Notes. I am not an HR professional and don't get wrapped up in the nuances of terminology among "performance review", "performance management", and "performance rating". I recognize the differences and respect them but will tend to intermix the terms for the purposes of discussion. I also recognize that, for the most part, people executing such a system don't see the subtle distinctions in these words, even if they mean something within the academy. I am also not a lawyer, so what I say here may or may not be legally permitted in your place of doing business (geography, company size, sector). Finally, this post is not about any specific company practice past or present, and any similarity is coincidental.

This post will say some things that are likely controversial or will appear plain wrong to some. I'll be following this on Twitter to see what transpires, @stevesi.

5 Essential Realities

There are several essential realities to performance reviews:

  • Performance systems conflate performance and compensation with organizational budgets. No matter how you look at it, one person cannot be evaluated and paid in isolation from budgets. The company as a whole has a budget for how much to pay people (salary, bonus, stock, etc.). No matter what, an individual's compensation is part of a system that ultimately has a budget. The vast majority of mechanical or quantitative effort in the system is not about one person's performance but about determining how to pay everyone within the budget. While it is desirable to distinguish between professional development and compensation, that distinction will almost certainly get lost once a person sees their compensation or once a manager has to assign a rating. Any suggestion as to how to be more fair, allow for more flexibility, provide more absolute ratings, or otherwise separate performance from compensation must still come up with a way to stick to a budget. The presence of a budget drives the existence of a system. There is always a budget, and don't be fooled by "found money" as that's just a budget trick.
  • In a group of any size there is a distribution of performance. At some point a group of people working towards similar goals will exhibit a distribution of performance. From our earliest days in school we see this with schoolwork. In the workplace there are infinite variables that influence the performance of any individual, but the variability exists. In an ideal system one could isolate all the variables from some innate notion of "pure contribution" or "pure skill" in order to evaluate someone. But that can't be done, so the distribution one sees essentially lumps together many performance-related variables.
  • In a system with a fixed set of performance labels, people who receive anything but the top one believe they are "so close" to the next one up. In school, teachers have letter grades or numeric ranges that break up test scores into "buckets". In the workplace, performance systems generally implement some notion of grades or ratings and assign distributions to each of those. Much like a forced curve on a physics test, the system says that only a certain percentage of a population can get the highest performance rating and likewise a certain percentage of the team gets the lowest rating. The result is that most everyone in the organization believes they are extremely close to the next rating, much like looking at a test and thinking that if you could just get one extra point you'd get the next letter grade. Because of human nature, any such system almost certainly produces managers who imply, and employees who hear, evidence of just how close a call their review score was. There is a corollary: "everyone believes they are above average".
  • Among any set of groups, almost all the groups think their group is delivering more and other groups are delivering less. In a company with many groups, managers generally believe their group as a whole is performing better by the relevant measures and thus should not be held to the same distribution or should have a larger budget. Groups tend to believe their work is harder, more strategic, or just more valuable while underestimating the contributions of other groups. Once groups realize that there is a fixed budget, some strive to solve the overall challenge by arguing for higher budgets on some teams. In this way you could either use a different distribution of people (more at the top) or just elevate the compensation for people within a group. Any suggestion to do this would need to also provide guidance as to how groups as a whole are to be measured relative to each other (which sounds an awful lot like how individuals would be measured relative to each other).
  • Measurement is not absolute but relative. To measure performance it must be measured relative to something. Sales is the "easiest" since if you have a sales quota then your compensation is just a measure of how much you beat the quota. Such simplicity masks the knife fight that is quota setting and the process by which a comp sheet is built out, but it is still a relative measure. Most product roles have squishier goals such as "finish the product" or "market the product". The larger the company the more these goals make sense, but the less any individual's day-to-day actions are directly related ("If I fix this bug will the sale really close?"). Thus in a large company, goal setting becomes increasingly futile as it starts to look like "get my work done", since the interconnection between other people and their work is impossibly hard to codify. Much of the writing about performance reviews focuses on goal setting and, unfortunately, the skill of writing goals you can always brag about. All of this has taken a rather dramatic turn with the focus on agility, where stating months in advance what success looks like is almost the antithesis of being well-run. As a result, measuring performance relative to peers "doing their work" is far more reliable, but has the downside that the big goals all fall to the top-level managers. That's why for the most part this entire topic is a big-company thing—in a startup or a small company, actions translate into sales, marketing, products, and customers very directly.

10 Real World Attributes

Once you accept these realities you realize there will in fact be some sort of system. The goal of the system is to figure out how much to pay people. For all the words about career management, feedback, and so on, that is not what anyone really focuses on at the moment they check their "score". It certainly isn't what is going on around the table of managers trying to figure out how to fit their team within the rules of the system.

Those that look to the once-a-year performance rating as the place for either learning how they are doing or sharing feedback with an employee are simply doing it wrong—there shouldn't be surprises during the process. If there are surprises then that's a mistake no system can fix. There are no substitutes for concrete, actionable, and ongoing feedback about performance. If you're not getting that then you need to ask.

At the same time, you can't expect to have a daily/weekly rating for how you are doing. That's because your performance is relative to something and that something isn't determined on a daily basis. Finding that happy place is a challenge for individuals and managers, with the burden to avoid surprises falling to both equally. As much as one expects a manager to communicate well, individuals must listen well.

Putting a system in place for allocating compensation is enormously challenging. There's simply too much at stake for the company, the team, managers, and individuals. Because so much is at stake that materially impacts the lives of people, it is not unusual for the routine implementation of the process to take months of a given year and for it to occupy far more brain cycles than the actual externally facing work of the organization. Ironically, the more you try to make the process something HR worries about, the more disconnected it becomes from the work and the more stress it creates. As a result, performance management occupies a disproportionate amount of time and energy in large organizations.

Because of this, everyone in a company has enough experience to be critical of the system and has ideas for how to improve it. Much as everyone can offer suggestions when a company does TV advertising—simply because we all watch TV and buy stuff—we can all offer insights and perspectives on performance reviews because we all do work and get reviewed. Designing a system from scratch is rather different from being critical of anecdotes about an existing system.

Given that so much is at stake and everyone has ideas about how to improve the system, the actual implementation is enormously complex. While one can attempt to codify a set of rules, one cannot codify the way humans will implement the rules. One can keep iterating, adding more and more rules, more checks and balances, but eventually a process that already takes too much time becomes a crushing burden. Even after all that, statistically a lot of people are not going to be happy.

Therefore the best bet with any system is to define the variables and recognize that choices are being made and that people will be working within a system that by definition is not perfect. One can view this as gaming the system, if one believes the outcome is not tilted towards goodness. Alternatively, one can view this as doing the right thing, if one believes the outcome is tilted in the direction of goodness.

My own experience is that there are so many complexities it is pointless to attempt to fully codify a system. Rather, everyone should go in with open eyes and a realistic view of the difficulty of the challenge, and iterate through the variables throughout the entire dialog. Fixating on any one variable to the exclusion of the others is ultimately when the system breaks down.

The following are ten of the most common attributes that must be considered and balanced when developing a performance review system:

  1. Determining team size. There is a critical mass of "like" employees (job function, seniority, familiarity, responsibility) required to make any system even possible. If you have fewer than about 100 people no system will really work. At the same time, at about 100 people you are absolutely assured of having a sample size large enough to see a diversity in performance. There is going to be a constant tension between employees who believe the only fair way of evaluation is to have intimate knowledge of their work and a system that needs a lot of data points. In practice, somewhere between 1 and 5 people are likely to have an intimate understanding of the work of an individual; said another way, any given manager is likely to have intimate knowledge of between 5 and about 50 people. At some point the system requires every level of management to honestly assess people based on a dialog of imperfect information. Team size also matters because small "rounding" effects become enormous. Imagine something where you need to find 10% of the population and you have a team of 15 people to do that with. You obviously pick either 1 or 2 people (1 if the 10% is "bad", 2 if it is "good"). Then imagine this rolls up to 15,000 people. Rather than 1500, you have either 1000 or 2000 people in that 10% (see the sketch after this list). That's either very depressing or very expensive relative to the budget. Best practice: Implement a system in groups of about 100 of similar seniority and role.
  2. Conflating seniority, job function, and projects does not create a peer group. Attempting to define the relative contribution of a college new hire and a 10-year industry vet, or a designer and a QA engineer, manager or not, or front-end v. ops tools, is impossibly difficult. The dysfunction is that, invariably, as the process moves up the management chain a bias builds—the most visible people, the highest-paid people or jobs, the scarcest talent, the work that is understood, and so on become the things that get airtime in dialogs. There's nothing inherently evil about this but it can get very tricky very quickly if those dialogs lead to higher ratings/compensation along these dimensions. This can get challenging if these groups are not sized as above, and so you'll find it a necessary balancing act. Best practice: Define peer groups based on seniority and job function within project teams as best you can.
  3. Measuring against goals. It is entirely possible to base a system of evaluation and compensation on pre-determined goals. Doing so will guarantee two things. First, however much time you think you save on the review process you will spend up front on an elaborate process of goal-setting. Second, in any effort of any complexity there is no way to have goals that are self-contained, and so failure to meet goals becomes an exercise in documenting what went wrong. Once everyone realizes their compensation depends on others, the whole work process becomes clouded by constant discussion about accountability, expectation setting, and other efforts not directly related to actually working things out. And worse, management will always have the out of saying "you had the goal so you should have worked it out". There's nothing more challenging in the process of evaluation than actually setting goals, and all of this is compounded enormously when the endeavor is a creative one where agility, pivots, and learning are part of the overall process. Best practice: Let individuals and their manager arrive at goals that foster a sense of mastery of skills and success of the project, while focusing evaluation on the relative (and qualitative) contribution to the broader mission.
  4. Understanding cross-organization performance. Performance measurement is always relative, but determining performance across multiple organizations in a relative sense requires apples-to-oranges comparisons, even within similar job functions (i.e. engineering). One team winding down a release while another is just starting, one team on an established business and another on a new business, one team with no competitors while another is in an intense battle, one team with a lot of sales support and another without: all of these situations make it non-obvious how to "compare" multiple teams, yet this is what must happen at some level. Compounding this is that at some point in evaluation the basis for relative comparison might dramatically change—for example, at one level of management the accomplishment of multiple teams might be looked at through a lens that can be far removed from what members of those teams might be able to impact in their daily work. Best practice: Do not pit organizations against each other by competing for rewards, and foster cross-group collaboration via planning and execution of shared bets.
  5. Maintaining a system that both rates and rewards. Systems often have some sort of score or grade and they also have compensation. Some think this is essential. Some think this is redundant. Some care deeply about one, but only when they are either very happy or very unhappy with the other. A system can be developed where these are perfectly correlated, in which case one can claim they are redundant. A system where there is a loose correlation might as well have no correlation, because both the individuals and managers involved are hearing what they want to hear or saying one thing and doing another. At the same time, we're all conditioned for a "score", and somehow a bonus of 9.67% doesn't feel like a score because you don't know what it means in relative terms (so even though people want to be rated absolutely, it doesn't take long before they want to know where that stands relatively). Best practice: A clear rating that lets individuals know where they stand relative to their peer group, along with compensation derived from that, with the ability of a manager with the most intimate knowledge of the work to adjust compensation within some rating-defined range.
  6. Writing a performance appraisal is not making a case. Almost all of the books on Amazon about performance reviews focus on the art of writing reviews. Your performance review is not a trial, and one can't make or break the past year/month/quarter by an exercise in strong or creative writing. This holds for individuals hoping to make their performance shine and, importantly, for managers hoping to make up for their lack of feedback/action. The worst moments in a performance process are when an employee dissects a manager's comments and attempts to refute them, or when a manager pulls up a bunch of new examples of things that were not talked about when they were happening. Best practice: Lower the stakes for the document itself and make it clear that it is not the decision-making tool for the process.
  7. Ranking and calibrating are different. Much has been said about the notion of "stack rank", which is often used as a catch phrase for a process that assigns each member of a group a "one through n" score. This is always, always a terrible process. There is simply no way to have the level of accuracy implied by such a system. What would one say to someone trying to explain the difference between being number 63 and number 64 on a 100-person team? The practice of calibrating is one of relative performance between members of peer groups as described above. The size and number of these groups are fixed, and when done with an adequate population size this can with near certainty avoid endless discussion over boundary cases. Best practice: Define performance groups where members of a team fall but do not attempt to rank with more granularity or "proximity" to other groups.
  8. Encouraging excellent teams. Most managers believe their teams are excellent teams, and uniquely so. Strong performers have been hired to the team. Weak performers are naturally managed out if they somehow made it onto the team. Results show this. It becomes increasingly difficult to implement a performance review system because organizations become increasingly strong and effective. This is how it should be. At the same time this cannot possibly be a permanent state (even the best sports teams get new players that don't pan out over the course of a season). In a dynamic system there will be some years where a team is truly excellent and some years where it is not, but you can't really know that in an absolute sense. In fact, the most ossifying element of performance appraisal is to assume that a given team or given person has reached a point where they are just excellent in an absolute sense and thus the system no longer applies. Whether the team is 100 engineers just crushing it or an executive team firing on all cylinders, it is very tempting to say the system doesn't apply. But if the system doesn't apply, you don't really have a system. Perhaps your organization will have a concept of "tenure" or you have a job function primarily compensated by quota based on quantitative measures—those are ways to have different systems. Best practice: Make a system that applies to everyone, or have multiple systems and clear rules for how membership in different systems is determined.
  9. Allowing for exceptions creates an exception-based process. When a team adds up all of the potential constraints and attempts to finally close in on the performance of individuals, there is a tendency to "feel the pain" of all the rules and to create a model for exceptions. For example, you might have a 10% group but allow for up to 1% exceptions. Doing so will invariably create either a 9% or an 11% group, depending on whether it is better to make exceptions up or down. If managers have the option of giving someone a low rating with extra money while other people get a high rating with less money, then invariably most people will get this mixed message. All of these exceptions quickly permeate an organization, and individuals end up considering getting an exception a normal part of the process. Best practice: If there is going to be a system, then stick to it and don't encourage exceptions.
  10. Embracing diversity in all dimensions. Far too often in performance appraisal and rewards, even within peer groups, there is the potential for the pull of sameness. This can manifest itself through any number of professional characteristics that can be viewed as either style or actual performance traits. One of the earliest stories I heard of this was about a manager who preferred people to set very aggressive goals for adding features. Unfortunately there was no measure for quality of the work. Other members of the team would focus on a combination of features and quality. Members of the latter group felt penalized relative to the person with the high bug count. At the same time, the team tended to be one that got a lot of features done early but had a much longer tail. Depending on when performance reviews got done, the story could be quite different. Perhaps both styles of work are acceptable, but not appreciating the "perceived slow and steady" is a failure of that manager to embrace different styles. The same can be said for personal traits such as the ever-present quiet v. loud, or oral v. written, and so on. Best practice: Any strong and sustainable team will be diverse in all dimensions.
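To make the rounding arithmetic in item 1 concrete, here is a minimal sketch. The numbers (teams of 15, an organization of 15,000, a 10% bucket) are the ones used above; the code itself is purely illustrative and not part of any real review system.

```kotlin
// A back-of-the-envelope sketch of the rounding effect described in item 1.
// The numbers (teams of 15, an organization of 15,000, a 10% bucket) come from the text above.
fun main() {
    val teamSize = 15
    val orgSize = 15_000
    val teams = orgSize / teamSize   // 1,000 teams of 15 people

    val ideal = orgSize / 10         // 10% of 15,000 is 1,500 people
    // But 10% of 15 is 1.5, so each individual team has to round its pick to 1 or 2 people.
    val allRoundDown = teams * 1     // 1,000 people if every team picks 1
    val allRoundUp = teams * 2       // 2,000 people if every team picks 2

    println("ideal=$ideal, everyone rounds down=$allRoundDown, everyone rounds up=$allRoundUp")
    // Either way the organization misses the intended 1,500 by 500 people: a small
    // per-team rounding choice becomes enormous at scale.
}
```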

Finally

Much more could be said about the way performance appraisal and reward can and should work in organizations. Far too much of what is said is negative and assumes a tone dominated by us v. them or, worse, a view that this is all a very straightforward process that management just consistently gets wrong. Like so many things in business there is no right answer or perfect approach. If there were, then there would be one performance system that everyone would use and it would work all the time. There is not.

Some suggest that the only way to solve this problem is to just have a compensation budget and let some level of management be responsible. That is, a manager simply determines compensation for each member of a team based on their own criteria. This too is a system—the absence of a system is itself a system. In fact this is not a single system but n systems, one for each manager. Every group will arrive at a way to distribute money and ratings that meets the needs of that team. There will be peanut-butter teams that spread rewards evenly, there will be teams that do the "big bonus", and more. There will even be teams that use the system as a recruiting tool.

As much as any system is maligned, having a system that is visible, has some framework, and provides a level of cross-organization consistency offers many benefits to the organization as a whole. These benefits accrue even with all the challenges that also exist.

To end this post here are three survival tips for everyone, individuals and managers, going through a performance process that seems unfair, opaque, or crazy:

  • No one has all the data. Individuals love to remind some level of management that they do not have all the data about a given employee. Managers love to remind people that they see more data points than any one individual. HR loves to remind people that they have competitive salary data for the industry. Executives remind people they have data for a lot of teams. The bottom line is that no one person has a complete picture of the process. This means everyone is operating with imperfect information. But it does not follow that everyone is operating imperfectly.
  • Share success, take responsibility. No matter what is happening and in what context, everyone benefits when successes are shared and responsibility is taken. Even with an imperfect system, if you do well be sure to consider how others contributed and thank them as publicly as you can. If you think you are getting a bad deal, don’t push accountability away or point fingers, but look to yourself to make things better.
  • Things work out in the end. Since no system is perfect it is tempting to think that one data point of imperfection is a permanent problem. Things will go wrong. We don't talk about it much, but some people will at some point get a rating and pay much higher than they probably deserve. And yes, some people will have a tough time that, in hindsight, they might not really deserve. In a knowledge economy, talent wins out over time. No manager will hold one data point against a talented person who gracefully recovers from a misstep. It takes discipline and effort to work within a complex and imperfect system—this is actually one of the skills required of anyone over the course of a career. Whether it is project planning, performance management, strategic choices, or management processes, all of these are social science and all are subject to context, error rates, and, most importantly, learning and iteration.

—Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

November 9, 2013 at 11:00 am


Thoughts on reviewing tech products

I've been surprised at the "feedback" I receive when I talk about products that compete with those made by Microsoft. While I spent a lot of time there, one thing I learned was just how important it is to immerse yourself in competitive products to gain their perspective. It helps in so many ways (see https://blog.learningbyshipping.com/2013/01/14/learning-from-competition/).

Dave Winer (@davewiner) wrote a thoughtful post on How the Times reviews tech today. As I reflected on the post, it seemed worth considering why this challenge might be unique to tech and how it relates to the use of competitive products.

When considering creative works, it takes about two hours to see a film, slightly more for other productions, and even a day or two for a book. After that you can collect your thoughts and analysis and offer a review. Your collected experience in the art form is relatively easily recalled and put to good use in a thoughtful review.

When talking about technology products, the same approach might hold for casually used services or content-consumption services. In considering tools for "intellectual work" as Winer described (loved that phrase), things start to look significantly different.

Software tools for "intellectual work" are complex because they do complex things. In order to accomplish something you need to first have something to accomplish and then accomplish it. It is akin to reviewing the latest cameras for making films or the latest cookware for making food. While you can shoot a few frames or make a single meal, tools like these require many hours and different tasks. You shouldn't "try" them as much as "use" them for something that really matters. Only then can you collect your thoughts and analysis.

Because tools of depth offer many paths and ways to use them, there is an implicit "model" to how they are used. Models take time to adapt to. A cinematographer that uses film shouldn't judge a digital camera after a few test frames, and maybe not even after the first completed work.

The tools for writing, thinking, and creating that exist today present models for usage. Whether it is a smartphone, a tablet, a "word processor", or a photo editor, these devices and their accompanying software define models for usage that are sophisticated in how they are approached, the flow of control, and points of entry. They are hard to use because they do hard things.

The fact that many of those who write reviews rely on an existing set of tools, software, and devices for their intellectual pursuits implies that the conceptual models they know and love are baked into their perspective. It means tools that come along and present a new way of working or of seeing the technology space must first find a way to get a clean perspective.

This of course is not possible. One can't unlearn something. We all know that reviewers are professionals, and just as we expect a journalist covering national policy debates not to let their bias show, tech reviewers must do the same. This implicit "model bias" is much more difficult to overcome because it simply takes longer to see and use a product than it does to learn about and understand (but not necessarily practice) a point of view in a policy debate. The tell-tale sign of "this review composed on the new…" is great, but we also know that right after the review the writer has the option of returning to their favorite way of working.

As an example, I recall the tremendous difficulty in the early days of graphical user interface word processors. The incumbent WordPerfect was a character-based word processor that was the very definition of a word processor. The one feature we heard about relentlessly was called reveal codes, which was a way of essentially seeing the formatting of the document as codes surrounding the text (today we think of that as HTML). Word for Windows was a WYSIWYG word processor in Windows, so you just formatted things directly. If text was bold on screen then it was implicitly surrounded by <B> and </B> (not literally, but conceptually those codes).

Reviewers (and customers) time and time again felt Word needed reveal codes. That was the model for usage of a "word processor". It was an uphill battle to move the overall usage of the product to a new level of abstraction. There were things that were more difficult in Word and many things much easier, but reveal codes was simply a model, not the answer to the challenges. The tech world is seeing this again with the rise of new productivity tools such as Quip, Box Notes, Evernote, and more. They don't do the same things and they do many things differently. They have different models for usage.

At the business level this is the chasm challenge for new products. But at the reviewer level this is a challenge because it simply takes time to either understand or appreciate a new product. Not every new product, or even most, changes the rules of the predecessor successfully. But some do. The initial reaction to the iPhone's lack of a keyboard, or even its de-emphasizing of voice calls, shows how quickly everyone jumped to the then-current definition of smartphone as the evaluation criteria.

Unfortunately all of this is incompatible with the news cycle for the onslaught of new products, or with the desire to have a collective judgement by the time the event is over (or even before it starts).

This is a difficult proposition. It starts to sound like blaming politicians for not discussing the issues. Or blaming the networks for airing too much reality TV. Isn't it just as much what people will click through as it is what reviewers would write about? Would anyone be interested in reading a Samsung review, or pulling up another iOS 7 review, after the 8 weeks of usage that the product deserves?

The focus on youth and new users as the baseline for review is simply because they do not have the "baggage" or "legacy" when it comes to appreciating a new product. The disconnect we see in excitement and usage is because new-to-the-category users do not need to spend time mapping their model and can just dive in and start to use something for what it was supposed to do. Youth just represents a target audience for early adopters and the fastest path to crossing the chasm.

Here are a few things on my to-do list for how to evaluate a new product. The reason I use things for a long time is because I think that, in our world with so many different models, it takes that long to genuinely adapt to a new one.

  1. Use defaults. Quite a few times when you first approach a product you want to immediately customize it to make it seem like what you're familiar with. While many products have customization, stick with the defaults as long as possible. Don't like where the browser launch button is? Leave it there anyway. There's almost always a reason. I find the changes in the default layout of iOS 6 v. 7 interesting enough to see what the shift in priorities means for how you use the product.
  2. Don't fight the system. When using a new product, if something seems hard that used to seem easy, then take a deep breath and decide it probably isn't the way the product was meant to do that thing. It might even mean that the thing you're trying to do isn't necessarily something you need to do with the new product. In DOS WordPerfect people would use tables to create columns of text. But in Word there was a columns feature, and using a table for a newsletter layout was not the best way to do that. Sure, there needed to be "Help" to do this, but then again someone had to figure that out in WordPerfect too.
  3. Don't jump to doing the complex task you already figured out in the old tool. Often as a torture test, upon first look at a product you might try to do the thing you know is very difficult–that side-by-side chart, reducing overexposed highlights, or some complex formatting. Your natural tendency will be to use the same model and steps to figure this out. I got used to one complicated way of using levels to reduce underexposed faces in photos and completely missed out on the "fill flash" command in a photo editor.
  4. Don't do things the way you are used to. Related to this is the tendency to use a new device the way you were used to using the old one. For example, you might be used to going to the camera app, taking a picture, and then choosing email. But the new phone "prefers" to be in email and insert an image (new or just taken) into a message. It might seem inconvenient (or even wrong) at first, but over time this difference will go away. This is just like learning a new gear shift pattern or even the layout of a new grocery store.
  5. Don’t assume the designers were dumb and missed the obvious. Often connected to trying to do something the way you are used to is the reality that something might just seem impossible and thus the designers obviously missed something or worse.  There is always a (good) chance something is poorly done or missing, but that shouldn’t be the first conclusion.

But most of all, give it time. It often takes 4-8 weeks to really adjust to a new system, and the more expert you are the more time it takes. I've been using Macs on and off since before the product was released to the public, but even today it has taken me the better part of six months to feel "native". It took me about 3 months of Android usage before I stopped thinking like an iPhone user. You might say I am too set in my ways, or you might conclude it really does take a long time to appreciate a design for what it is supposed to do. I chuckle at the things that used to frustrate me and think about how silly my concerns were at day 0, day 7, and even day 30–where the volume button was, the charger orientation, the way the PIN worked, going backwards, and more.

–Steven Sinofsky

Written by Steven Sinofsky

October 29, 2013 at 12:00 pm



On the exploitation of APIs

LinkedIn engineer Martin Kleppmann wrote a wonderful post detailing the magical and thoughtful engineering behind the new LinkedIn Intro iOS app. I was literally verklempt reading the post–thinking about all those nights trying different things until he (and the team) ultimately achieved what he set out to do, what his management hoped he would do, and what folks at LinkedIn felt would be great for LinkedIn customers.

The internet has done what the internet does, which is to unleash indignation upon Martin and LinkedIn, and thus the cycle begins. The post was updated with caveats and disclaimers. It is now riding atop Techmeme. Privacy. Security. Etc.

Whether those concerns are legitimate or not (after all, this is a massive public company built on the trust of a network), the reality is this app points out a longstanding architectural challenge in API design. Modern operating systems (iOS, Android, Windows RT, and more) have inherent advantages over the PC-era operating systems (OS X, Windows, Linux) when it comes to maintaining the as-designed integrity of the overall system. Yet we're not done innovating around this challenge.

History

I remember my very first exploit. I figured out how to use a disk sector editor on CP/M and modified the operating system to remove the file delete command, ERA. I managed to do this by just nulling out the "ERA" string in what appeared to me to be the command table. I was so proud of myself that I (attempted to) show my father my success.

The folks that put the command table there were just solving a problem.  It was not an API to CP/M, or was it?  The sector editor was really a tool for recovering information from defective floppies, or was it?  My goal was to make a floppy with WordStar on it that I could give to my father to use but would be safe from him accidentally deleting a file.  My intention was good.  I used information and tools available to me in ways that the system architects clearly did not intend.  I stood on the top step of a ladder.  I used a screwdriver as a pry bar.  I used a wrench as a hammer.

The history of the PC architecture is filled with examples of APIs exposed for one purpose put to use for another purpose. In fact, the power of the PC platform is a result of inventors bringing technology to market with one purpose in mind and then seeing it get used for other purposes. Whether hardware or software, unintended uses of extensibility have come to define the flexibility, utility, and durability of the PC architecture. There are so many examples: the first terminate-and-stay-resident programs in MS-DOS, the Z80 SoftCard for the Apple ][, drawing low-voltage power from USB to power a coffee warmer, all the way to that favorite shell extension in Windows or the OS X extension that adds that missing feature to Finder.

These are easily described and high-level uses of extensibility.  Your everyday computing experience is literally filled with uses of underlying extensibility that were not foreseen by the original designers. In fact, I would go as far as to say that if computers and software were only allowed to do things that the original designers intended, computing would be particularly boring.

Yet it would also be free of viruses, malware, DLL hell, system rot, and TV commercials promising to make your PC faster.

Take, for example, the role of extensibility in email, and in Outlook in particular. The original design for Outlook had a wonderful API that enabled one to create an add-in that would automate routine tasks in Outlook. You could, for example, have a program that would automatically send out a notification email to the appropriate contacts based on some action you would take. You could also receive useful email attachments that could streamline tasks just by opening them (for example, before we all had a PDF reader it was very common to receive an executable that, when opened, would self-extract a document along with a viewer). These became a huge part of the value of the platform and an important part of the utility of the PC in the workplace at the time.

Then one day in 1999 we all (literally) received email from our friend Melissa. This was a virus that spread by using these same APIs for an obviously terrible purpose. What this code did was nothing different from what all those add-ins did, but it did it at Internet scale, to everyone, in an unsuspecting way.

Thus was born the age of "consent" on PCs. When you think about all those messages you see today ("use your location", "change your default", "access your address book") you see the direct descendants of Melissa. A follow-on virus professed broad love for all of us: ILOVEYOU. From that came the (perceived) draconian steps of simply disabling much of the extensibility/utility described above.

What else could be done?  A ladder is always going to have a top step–some people will step on it.  The vast majority will get work done and be fine.

From my perspective, it doesn't matter how one perceives something on a spectrum from good to "bad"–the challenge is that APIs get used for many different things and developers are always going to push the limits of what they can do. LinkedIn Intro is not a virus. It is not a tool to invade your privacy. It is simply a clever technique (née hack) that uses existing extensibility in new ways. There's no defense against this. The system was not poorly designed. Even though there was no intent to do what Intro did when those services were designed, there is simply no way to prevent clever uses any more than you can prevent me from using my screwdriver as a pry bar.

Modern example

I wanted to offer a modern example that for me sums up the exploitation of APIs and also how challenging this problem is.

On Android an app can add one or more sharing targets. In fact, the Android APIs were improved release after release to make this easier, and now it is simply a declarative step of a couple of lines of XML and some code.
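For readers who haven't built one, here is roughly what that declarative step looks like. This is a minimal sketch in modern Kotlin rather than any particular app's code; ShareActivity is an illustrative name, and the manifest XML is shown as a comment because it lives in AndroidManifest.xml rather than in code.

```kotlin
// Sketch of how an Android app registers itself as a share target.
// The declarative part goes in AndroidManifest.xml, roughly:
//
//   <activity android:name=".ShareActivity">
//     <intent-filter>
//       <action android:name="android.intent.action.SEND" />
//       <category android:name="android.intent.category.DEFAULT" />
//       <data android:mimeType="text/plain" />
//     </intent-filter>
//   </activity>
//
// Once declared, this activity appears in every other app's share list for plain text.
import android.app.Activity
import android.content.Intent
import android.os.Bundle

class ShareActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        if (intent?.action == Intent.ACTION_SEND && intent.type == "text/plain") {
            // The shared text arrives as an extra; a real app would do something useful with it.
            val sharedText = intent.getStringExtra(Intent.EXTRA_TEXT)
        }
        finish()
    }
}
```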

As a result, many Play apps add several share targets.  I installed a printing app that added 4 different ways to share (Share link, share to Chrome, share email, share over Bluetooth).  All of these seemed perfectly legitimate and I’m sure the designers thought they were just making their product easier to use.  Obviously, I must want to use the functionality since I went to the Play store, downloaded it and everything.  I bet the folks that designed this are quite proud of how many taps they saved for these key scenarios.

After 20 apps, my share list is crazy. Of course, sharing with Twitter is now a lot of scrolling because the list is alphabetical. Lucky for me, the Messages app bubbles up the most recent target to a shortcut in the action bar. But that seems a bit like a kludge.

Then along comes Andmade Share.  It is another Play app that lets me customize the share list and remove things.  Phew.  Except now I am the manager of a sharing list and every time I install an app I have to go and “fix” my share target list.

Ironically, the Andmade app uses almost precisely the same extensibility to manage the sharing list as is used to pollute it.  So hypothetically restricting/disabling the ability of apps to add share targets also prevents this utility from working.
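To make that concrete, here is a sketch of how the same extensibility surface can be enumerated: ask the system for every activity registered to handle a plain-text share. This is an assumption about the general approach, not the Andmade app's actual code.

```kotlin
// Sketch: enumerate the share targets that apps have registered, using the very same
// ACTION_SEND extensibility point those apps used to add themselves.
import android.content.Context
import android.content.Intent

fun listShareTargets(context: Context): List<String> {
    val send = Intent(Intent.ACTION_SEND).apply { type = "text/plain" }
    return context.packageManager
        .queryIntentActivities(send, 0)                 // every registered handler for ACTION_SEND
        .map { "${it.activityInfo.packageName}/${it.activityInfo.name}" }
}
```

An app that wants to "manage" the share list starts from exactly this query, which is why disabling the underlying extensibility would also disable the cleanup utility.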

The system could also be much more rigorous about what can be added.  For example, apps could only add a single share target (Windows 8) or the OS could just not allow apps to add more (essentially iOS).  But 99% of uses are legitimate.  All are harmless.  So even in “modern” times with modern software, the API surface area can be exploited and lead to a degraded user experience even if that experience degrades in a relatively benign way.

Anyone that ever complained about startup programs or shell extensions is just seeing the results of developers using extensibility. Whether it is used or abused is a matter of perspective. Whether it degrades the overall system depends on many factors and also on perspective (since every benefit has a potential cost, if you benefit from a feature then you're OK with the cost).

Reality

There will be calls to remove the app from the app store. Sure, that can be done. Steps will be taken to close off extensibility mechanisms that got used in ways far off the intended usage patterns. There will be costs and unintended side effects of those actions. Realistically, what was done by LinkedIn (or in a myriad of other examples) was done with the best of intentions (and a lot of hard work). Realistically, what was done was exploiting the extensibility of the system in a way never considered by the designers (or most users).

This leads to 5 realities of system design:

  1. Everything is an API.  Every bit of a system is an API.  From the layout of files, to the places settings are stored, to actual published APIs, everything in a system as it is released serves as an interface to people who want to extend, customize, or modify your work. Services don't escape this just because their APIs sit in a cloud behind REST endpoints.  For example, reverse engineering packets or scraping HTML is no different — the HTML used by a site can come to be relied on essentially as an API (see the small sketch after this list).  The Windows registry is just a place to store stuff–the fact that people went in and modified it outside the intended parameters is what caused problems, not the existence of a place to store stuff.  Cookies?  Just a mechanism.

  2. APIs can't tell you the full intent.  APIs are simply tools.  The documentation and examples show you the mainstream or an intended use of an API.  But they don't tell you all the intended uses or even the limits of using an API.  As a platform provider, falling back on documentation is essentially impossible, considering both the history of software platforms (much of the success of a platform comes from people using it in creative ways) and the reality that no one could read all the documentation that would have to explain all the uses of a single API when there are literally tens of thousands of extensibility points (plus all the undocumented ones, see #1).

  3. Once discovered, any clever use of an API will be replicated by many actors, for good or not.  Once one developer finds a way to get something done by working through a clever mechanism of extensibility, if there's value to it then others will follow. If one share target is good, then having 5 must be 5 times better.  The system, through some means, will ultimately need to find a way to control the very way extensibility or APIs are used.  Whether this is through policy or code is a matter of choice. We haven't seen the last "Intro", at least until some action is taken on iOS.

  4. Platform providers carry the burden of maintaining APIs over time.  Since the vast majority of actors are doing valuable things, you maintain an API or extensibility point–that's what constitutes a platform promise.  Some of your APIs are "undocumented" but end up being conventions or just happenstance.  When you produce a platform, you can try as hard as you want to define what is the official platform and what isn't, but your implied promise is ultimately to maintain the integrity of everything overall.

  5. Using extensibility will produce good and bad results, but what is good and bad will depend highly on the context.  It might seem easy to judge something broadly on the internet as good or bad.  In reality, people are downloading an app and opting in.  What should you really warn about, and how?  To me this seems remarkably difficult.  I am not sure we're in a better place when every action on my modern device comes with a potential warning message or a choice from a very long list I need to manage.
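As a small illustration of the point in item 1 about HTML becoming a de facto API, here is a hedged sketch of a scraper. The URL and the CSS class are hypothetical, and jsoup is simply one common HTML-parsing library used to make the example concrete; the point is that once such code exists, the site's markup has become an interface someone depends on.

```kotlin
// Sketch of "the HTML used by a site can come to be relied on essentially as an API."
// The URL and the "price" CSS class are hypothetical; jsoup does the fetching and parsing.
import org.jsoup.Jsoup

fun latestPrice(): String? {
    val doc = Jsoup.connect("https://example.com/product/123").get()
    // The scraper now depends on this markup staying stable, which is an implicit API
    // contract the site's developers never intended to make.
    return doc.select("span.price").first()?.text()
}
```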

We’re not there yet collectively as an industry on balancing the extensibility of platforms and the desire for safety, security, performance, predictability, and more.  Modern platforms are a huge step in a better direction.

Let’s be careful collectively about how we move forward when faced with a pattern we’re all familiar with.

–Steven

28-10-13 Fixed a couple of typos.

Written by Steven Sinofsky

October 25, 2013 at 9:30 am



Coding through silos (5 tips on sharing code)

We are trying to change a culture of compartmentalized, start-from-scratch style development here. I’m curious if there are any good examples of Enterprise “Open Source” that we can learn from.

—Question from reader with a strong history in engineering management

When starting a new product line or dealing with multiple existing products, there's always a question about how to share code. Even the most ardent open source developers know the challenges of sharing code—it is easy to pick up a library of "done" code, not so hard to share something that you can snapshot, but remarkably difficult to share code that is also moving at a high velocity like your work.

Developers love to talk about sharing code probably much more than they love to share code in practice.  Yet sharing code happens all the time—everyone uses an OS, web server, programming languages, and more that are all shared code.  Where it gets tricky is when the shared code is an integral part of the product you're developing.  That's when shared code goes from "fastest way to get moving" to "a potential (difficult) constraint" or "likely a critical path".  Ironically, this is usually more true inside a single company, where one team needs to "depend" on another team for shared code, than it is for developers sharing code from outside the company.

Organizationally, sharing code takes on varying degrees of difficulty depending on the “org distance” between developers.  For example, two developers working for the same manager don’t even think about “sharing code” as much as they think about “working together”.  At the other end of the spectrum, developers on different products with different code bases (perhaps started at different times with early thoughts that the products were unrelated or maybe one code base was acquired) think naturally about shipping their code base and working on their product first and foremost.

This latter case is often viewed as an organizational silo—a team of engineering, testing, product, operations, design, and perhaps even separate marketing or P&L responsibility.  This might be the preferred org design (focus on business agility) or it might be because of intrinsic org structures (like geography, history, or leadership approach).  The larger these types of organizations, the more the "needs of the org" tend to trump the "needs of the code".

Let’s assume everyone is well-meaning and would share code, but it just isn’t happening organically.  What are 5 things the team overall can do?

  1. Ship together. The most straightforward attribute two teams can modify in order to effectively share code is to have a release/ship schedule that is aligned.  Sharing code is most difficult when one team is locked down and the other team is just getting started.  Things get progressively easier the closer to aligned the teams become.  Even on very short cycles of 30-60 days, the difference in mindset about what code can change and how can quickly grow to be a share-stopper. Even when creating a new product alongside an existing product, picking an aligned scheduling milestone can be remarkably helpful in encouraging sharing rather than a "new product silo", which only digs a future hole that will need to be filled.

  2. Organize together to engineer together.  If you're looking at trying to share code across engineering organizations separated by an org distance that involves general management, revenue or P&L, or different products, then there's an opportunity to use organizational approaches to share code.  When one engineering manager can look at a shared code challenge across all of his/her responsibilities, there is more of a chance that an engineering leader will see it as an opportunity rather than a tax/burden.  The dialog about the efficacy or reality of sharing code does not span managers or, importantly, disciplines, and the resulting accountability rests within straightforward engineering functions.  This approach has limits (the graph theory of org size as well as the challenges of organizing substantially different products together).

  3. Allocate resources for sharing.  A large organization that has enough resources to duplicate code turns out to be the biggest barrier to sharing code.  If there's a desire to share code, especially if this means re-architecting something that works (to replace it with some shared code, presumably with a mutual benefit), then the larger team has a built-in mechanism to avoid the shared-code tax.  As painful as it sounds, the most straightforward approach to addressing this challenge is to allocate resources such that a team doesn't really have the option to just duplicate code.  This approach often works best when combined with organizing together, since one engineering manager can simply load balance the projects more effectively.  But even across silos, careful attention (and transparency) to how engineering resources are spent will often make this approach attainable.

  4. Establish provider/consumer relationships.  Often shared code can look like a "shared code library" that needs to be developed.  It is quite common, and can be quite effective, to form a separate team, a provider, that exists entirely to provide code to other parts of the company, the consumers. A consumer team will tend to look at the provider team as an extension of their own team, and all can work well.  On the other hand, there are almost always multiple consumers (otherwise the code isn't really shared), and then the challenges of which team to serve and when (and where requirements come from) all surface.  Groups dedicated to being the producers of shared code can work, but they can quickly take on the characteristics of yet another silo in the company. Resource allocation and schedules are often quite challenging with a priori shared-code groups.

  5. Avoid the technical buzz-saw. Developers given a goal to share code and a desire to avoid doing so will often resort to a drawn-out analysis phase of the code and/or team.  This will be thoughtful and high-integrity.  But one person's approach to being thorough can look to another like a delay or avoidance tactic.  No matter how genuine the analysis might be, the reality is that it can come across as a technical buzz-saw making all but the most idealized code sharing impossible. My own experience has been that simply avoiding this process is best—a bake-off or ongoing suitability-to-task discussion will only drive a wedge between teams. At some level sharing code is a leap of faith that a lot of folks need to take; when it works everyone is happy, and if it doesn't there's a good chance someone will say "told you so". Most every bet one makes in engineering has skeptics.  Spending some effort to hear out the skeptics is critical.  A winners/losers process is almost always a negative for all involved.

The common thread across all of these approaches is that they seem impossible at first.  As with any initiative, there’s a non-zero cost to achieving goals that require behavior change.  If sharing code is important and not happening, there’s a good chance you’re working against some of the existing constraints in the approach. Smart and empowered teams act with the best intentions to balance a seemingly endless set of inbound issues and constraints, and shared code might just be one of those things that doesn’t make the cut.

Keeping in mind that at any given time an engineering organization is probably overloaded and at capacity just getting stuff done, there’s not a lot of room to just overlay new goals.

Sharing code is like sharing any other aspect of a larger team—from best practices in tools, engineering approaches, team management—things don’t happen organically unless there’s a uniform benefit across teams.  The role of management is to put in place the right constraints that benefit the overall goals without compromising other goals.  This effort requires ongoing monitoring and feedback to make sure the right balance is achieved.

For those interested in some history, this Harvard Business School case (paid article) on the very early Office team covers the challenges/questions around organizing around a set of related products (hint: this only seems relatively straightforward in hindsight).

—Steven

Written by Steven Sinofsky

October 20, 2013 at 4:00 pm

Disruption and woulda, coulda, shoulda

With the latest pivot for BlackBerry much has been said about disruption and what it can do to companies. The story, Inside the fall of BlackBerry: How the smartphone inventor failed to adapt, by Sean Silcoff, Jacquie McNish and Steve Ladurantaye in The Globe and Mail is a wonderful account.

Disruption has a couple of characteristics that make it fun to talk about.  While it is happening, even with a chorus of people claiming it is happening, it is actually very difficult to see. After it has happened the chorus of “told you so” grows even louder and more matter of fact. After the fact, everyone has a view of what could have been done to “prevent” disruption.  Finally, the description of disruption tends to lose all of the details leading up to the failure, as things get characterized at the broad company level or by a single characteristic (keyboard v. touch) when the situation is far more complex.  Those nuances are what product folks deal with day to day and where all the learning can be found.

Like many challenges in business, there’s no easy solution and no pattern to follow.  The decision moments, technology changes, and business realities are all happening to people who have the same skills and backgrounds as the chorus, but with the real-world constraints of actually doing something about them.

The case of Blackberry is interesting because the breadth of disruptive forces is so great.  It is not likely that a case like this will be seen again for a while—a case where a company gains such an incredible position of strength in technology and business over a relatively short time and then sees it essentially erased just as quickly.

I loved my Blackberry.  The first time I used one was before they were released (because there was integration with Outlook I was lucky enough to be using one some time in 1998—I even read the entire DOJ filing against Microsoft on one while stopped on the tarmac at JFK).  Using the original 850 was a moment when you immediately felt propelled into the future.  Using one felt like the first time I saw a graphical interface (Alto) or a GPS.  Upon using one you just knew our technology lives would be different.

What went wrong is almost exactly the opposite of what went right and that’s what makes this such an interesting story and unbelievably difficult challenge for those involved.  Even today I look at what went on and think of how galactic the challenges were for that amazing group of people that transported us all to the future with one product.

Assumptions

When you build a product you make a lot of assumptions about the state of the art of technology, the best business practices, and potential customer usage/behavior.  Any new product that is even a little bit revolutionary makes these choices at an instinctual level—no matter what news stories you read about research or surveys or whatever, I think we all know that there’s a certain gut feeling that comes into play.

This is especially the case for products that change our collective world view.

Whether made deliberately or not, these assumptions play a crucial role in how a product evolves over time. I’ve never seen a new product developed where the folks wrote down a long list of assumptions.  I wouldn’t even know where to start—so many of them are not even thought through and simply represent an engineer’s or product manager’s “state of the art”, “best practice”, or “this is what I know”.

It turns out these assumptions, implicit or explicit, become your competitive advantage and allow you to take the market by storm.

But then along come technology advances, business model changes, or new customer behaviors and seemingly overnight your assumptions are invalidated.

In a relatively simple product (note, no product is simple to the folks making it) these assumptions might all be within the domain.  Christensen famously studied the early days of the disk drive industry.  To many of us these assumptions are all contained within one system or component and it is hard to see how disruption could take hold.  Fast forward and we just assume solid-state storage, yet even this transition, as obvious as it is to us, requires a whole new world view for people who engineer spinning disks.

In a complex product like the entirety of the Blackberry experience there are assumptions that cross hardware, software, communications networks, channel relationships, business models and more.  When you bring all these together into a single picture one realizes the enormity of what was accomplished.

It is instructive to consider the many assumptions or ingredients of Blackberry success that go beyond the popular “keyboard v. touch”.  In thinking about my own experience with the product, the following list captures just a few things that were essentially revisited by the iPhone, from the perspective of the Blackberry device/team:

  • Keyboard to touch.  The most visible difference and most easily debated is this change.  From crackberry thumbs to contests over who could type faster, your keyboard was clearly a major innovation. The move to touch would challenge you in technology, behavior, and more.
  • Small (b&w) screens to large color.  Closely connected with the shift to touch was a change in perspective that consuming information on a bigger screen would trump the use of the real estate for (arguably) more efficient input.  Your whole notion of industrial design, supply chain, OS, and more would be challenged.  As an aside, the power consumption of large screens immediately seemed like a non-starter to a team insanely focused on battery life.
  • GPRS to 3G then LTE. Your heritage in radios, starting with the pager network, placed a premium on using the lowest power/bandwidth radio and focusing on efficiency therein.  The iPhone, while 2G early, quickly turned around a game-changing 3G device.  You had been almost dragged into using the newer, higher-powered radios because your focus had been to treat radio usage as a premium resource.
  • Minimize bandwidth to assume bandwidth is free.  Your focus on reducing bytes over the wire was met with a device that just assumed bytes would be “free” or at least easily purchased.  Many of the early comments on the iPhone focused on this but few assumed the way the communications companies would respond to an appetite for bandwidth.  Imagine thinking how sloppy the iPhone was with bandwidth usage and how fast the battery would drain.  Assuming a specific resource is high cost is often a path to disruption when someone makes a different assumption.
  • No general web support v. general web support.  Despite demand, the Blackberry avoided offering generalized web browsing support.  The partnership with carriers also precluded this given their concern about network responsiveness and capacity.  Again, few would have assumed a network buildout that would support mobile browsing the way it does today.  The disruptor had the advantage of growing slowly (relatively) compared to flipping a switch on a giant installed base.
  • WiFi as “present” to nearly ubiquitous.  The physics of WiFi coverage (along with power consumption, chip surface area and more) assumed WiFi would be expensive and hard to find.  Even with whole-city WiFi projects in the early 2000s, people didn’t see WiFi as a big part of the solution.  Few thought about the presence of WiFi at home and new usage scenarios, or that every urban setting, hotel, airport, and more would have WiFi.  Even the carriers built out WiFi to offload traffic and include it for free in their plans.  The elegant and seamless integration of WiFi on the iPhone became a quick advantage.
  • Device update/mgmt by tethering to over the air.  Blackberry required tethering for some routine operations and for many the only way to integrate corporate mail was to keep a PC running all the time. The PC was an integral part of the Blackberry experience for many. While the iPhone was tethered for music and videos, the presence of WiFi and the march towards PC-free experiences were early assumptions in the architecture that just took time to play out.
  • Business to consumer. Your Blackberry was clearly a business device.  Through much of the period of high success consumers flocked to devices like the SideKick.  While there was some consumer success, you anchored in business scenarios from Exchange and Notes integration to network security.  The iPhone comes along and out of the gate is aimed at consumers with a camera, MMS, and more.  This disruption hits at the hardware, the software, the service integration, and even how the device is sold at carriers.
  • Data center based service to a broad set of cloud based services.  Your connection to the enterprise was anchored in a server that businesses operated.  This was a significant business upside as well as a key part of the value proposition for business. This server became a source for valuable business information propagated to the Blackberry (rather than use the web).  The absence of an equivalent iPhone server seemed like a huge opportunity, yet in fact that absence turned into an asset in terms of spreading the device.  Instead the iPhone relied on the web (and subsequently apps) to deliver services rather than programmed and curated services.
  • Deep channel partnership/revenue sharing to somewhat tense relationship.  By most accounts, your Blackberry business was an incredible win-win with telcos around the world.  Story after story talked of the amazing partnerships between carriers and Blackberry.  At the same time, stories (and blame game) between Apple and AT&T in the US became somewhat legendary.  Yet even with this tension, the iPhone was bringing very valuable customers to AT&T and unseating Blackberry customers.
  • Ubiquitous channel presence to exclusives. Your global partnership strength was unmatched and yet disrupted. The iPhone launched with single carriers in limited markets, on purpose.  Many viewed that as a liability, including Blackberry.  Yet in hindsight this only increased the value to the selected partners and created demand from other potential partners (even with the tension).
  • Revenue sharing to data plan.  One of the main assets that was mostly invisible to consumers was the revenue to Blackberry for each device on the network.  This was because Blackberry was running a secure email service as a major anchor of the offering. Most thought no one was going to give up this revenue, including the carriers’ ability to up-charge for your Blackberry. Few saw a transition to a heavily subsidized business model with high-priced data plans purchased by consumers.

These are just a few and any one of these is probably debatable. The point is really the breadth of changes the iPhone introduced to the Blackberry offering and roadmap.  Some of these are assumptions about the technology, some about the business model, some about the ecosystem, some about physics even!

Imagine you’ve just changed the world and everything you did to change the world—your entire world view—has been upended by a new product.  Now imagine that the new product is not universally applauded and many folks not only say your product is better and more useful, but also that the new product is simply inferior.

Put yourself in those shoes…

Disruption

Disruption happens when a new product comes along and changes the underlying assumptions of the incumbent, as we all know.

Incumbent products and businesses often respond by downplaying the impact of a particular feature or offering.  And more often than folks might notice, disruption doesn’t happen so easily.  In practice, established businesses and products can withstand a few perturbations to their offering.  Products can be rearchitected. Prices can be changed.  Features can be added.

What happens, though, when nearly every assumption is challenged?  What you see is a complete redefinition of your entire company.  And this is both hard to see in real time and even harder to acknowledge.  Even in the case of Blackberry there was a time window of perhaps two years to respond—is that really enough time to re-engineer everything about your product, company, and business?

One way to look at this case is that disruption rarely happens from a single vector or attribute, even though the chorus might claim X disrupts Y because of price or a single feature, for example.  We can see this in the case of something like desktop Linux—being lower priced/open source are interesting attributes but it is fair to say that disruption never really happened to the degree that might have been claimed early on.

However, if you look at Linux in the data center, using Linux for proprietary data center architectures and services combined with the benefit of open source/low price brought with it a much more powerful disruptive capability.

One might take away from this case and other examples that the disruption to watch out for the most is the one that combines multiple elements of the traditional marketing mix of product, price, place, and promotion. When considering these dimensions it is also worth understanding the full breadth of assumptions, both implicit and explicit, in your product and business when defending against disruption. Likewise, if you’re intending to disrupt you want to consider the multiple dimensions of your approach in order to bypass the intrinsic defenses of incumbents.

It is not difficult to talk about disruption in our industry.  As product and business leaders it is instructive to dive into a case of disruption and consider not just all the factors that contributed but how you would respond personally.  Could you really lead a team through the process of creating a product that literally inverted almost every business and technology assumption that created $80B or so in market cap over a 10-year period?

In The Sun Also Rises, Hemingway wrote:

How did you go bankrupt? Two ways. Gradually, then suddenly.

That is how disruption happens.

—Steven Sinofsky

Written by Steven Sinofsky

October 3, 2013 at 9:00 am

Posted in posts

Avoiding mobile app bloat

Back in the pre-web days, acquiring software was difficult and expensive.  Learning a given program (app) was difficult and time consuming.  Within this context there was an amazing amount of innovation.  At least in part, these constraints also contributed to the oft-cited (though not well-defined) concept of bloatware.  Even though these constraints do not seem particularly true on today’s modern mobile platforms, we are starting to see a rise in app bloat.  It is early, and with enough self-policing and through the power of reviews/ratings we might collectively avoid bloatware on our mobile devices.

Product managers have a big responsibility to develop feature lists/themes that make products easier to use, more functional, and better overall–with very finite resources.  Focusing these efforts in ways that deliberately deprioritize what could lead to bloatware is an opportunity to break from past industry cycles and do a better job for modern mobile platforms.  There are many forms of bloat across user experience, performance, resource usage and more.  This post looks at some forms of UX bloat.

This post was motivated by a conversation with a developer considering building an “all in one” app to manage many aspects of the system and files.  This interesting post by Benedict Evans, http://ben-evans.com/benedictevans/2013/9/21/atomisation-and-bundling about unbundling capability and Chris Dixon’s thoughtful post on “the internet is for snacking” http://cdixon.org/2013/09/14/the-internet-is-for-snacking/ serve as excellent motivators.

History

The first apps people used on PCs tended to be anchor apps–that is, a given individual would use one app all day, every day.  Commonly this was a word processor or a spreadsheet.  These apps were fairly significant investments to acquire and to gain proficiency in.

There was plenty of competition in these anchor apps.  This resulted in an explosion in features as apps competed for category leadership.  Software Digest used to evaluate all the entries in a category with checklists of hundreds of features, for example.  This is all innovation goodness, modulo whether any given individual valued any given feature. By and large, the ability for a single (difficult to acquire and gain proficiency in) product to do so many things for so many people was what determined the top products in any given category.

Two other forms of innovation would also take place as a direct result of this anchor status and the need to maintain that status.

First, the fact that people would be in an app all day created an incentive for an ISV to “pull into” the app any supporting functionality so the person would not need to leave the app (and enter the wild world of the OS or another app that might be completely different).  This led to an expansion of functionality like file management, for example.  This situation also led to a broad amount of duplication of OS capabilities from data access to security and even managing external devices such as printers or storage.

As you can imagine, over time the amount of duplication was significant and the divergence of mechanisms to perform common tasks across different apps and the OS itself became obvious and troublesome.  As people used more and more programs this began to strain the overall system and experience in terms of resources and cognitive load.

Second, because software was so hard to use in the early days of these new apps and paradigms there was a great deal of innovation in user experience.  The experience evolved from the command line to keyboard shortcuts to the graphical interface, and then within the graphical interface from menus to toolbars to palettes to context menus and more.  Even within one phase, such as early GUI, there were many styles of controls and affordances.  At each innovation junction the new, presumably easier mechanism was added to the overall user experience.

At an extreme level this just created complete redundancy in various mechanisms. When toolbars were introduced there was a raging debate over whether a toolbar button should always be redundant with a menu command or never be redundant with a menu command.  Similarly, the same debate held for context menus (also called shortcut menus, which tells you where that landed).  Note that a raging debate means well-meaning people had opposing viewpoints, each asserted to be completely true, with neither side able to definitively prove its point of view. I recall some of the earliest instrumented studies (special versions of apps that packaged up telemetry that could later be downloaded from a PC enlisted in the study) that showed before/after the addition of redundant toolbar buttons, keyboard shortcuts, and shortcut menus.  Each time a new affordance was added the existing usage patterns were split again–in other words every new way to access a command was used frequently by some set of people.  This provided a great deal of validation for redundancy as a feature.  It should be noted that the whole system surrounding a release of a new mechanism further validated the redundant approach–reviews, marketing, newsgroups, enthusiasts, as well as telemetry showed ample support for the added ways of doing tasks.

As you can imagine, over time the UX became arguably bloated and decidedly redundant.  Ironically, for complex apps this made it even more difficult to add new features since each brand new feature needed to have several entry points, and thus toolbars, palettes, menus, keyboard shortcuts, and more were rather overloaded.  Command location became an issue.  The development of the Office ribbon (see JensenH’s blog for tons of great history – http://blogs.msdn.com/b/jensenh/) started from the principle of flattening the command hierarchy and removing redundant access to commands in order to solve this real-estate problem.

By the time modern mobile apps came on the scene it was starting to look like we would have a new world of much simpler and more streamlined tools.  

Mobile apps and the potential for bloat

Mobile app platforms would seem to have the foundation upon which to prevent bloat from taking place, if you consider the two drivers of bloat previously discussed. Certainly one could argue that the inherent nature of the platforms is for apps to be focused in purpose.

First, apps are easy to get and not so expensive.  If you have an app that takes photos and you want to do some photo editing, there are a plethora of available photo editing apps.  If you want to later tag or manage photos, there are specialized apps that do just that.  While there are many choices, the web provides a great way to search for and locate apps and the reviews and ratings provide a ton more guidance than ever before to help you make a choice.  The relative safety, security, and isolation of apps reduces the risk of trial.

Second, because the new mobile platforms operate at a higher level of abstraction, the time to learn an app is substantially reduced.  Where classic apps might feel like you’re debugging a document, new apps built on higher level concepts get more done with fewer gestures, sometimes in a more focused domain (compare old-style photo editing to Instagram filters, for example).  Again the safety afforded by the platforms makes it possible to try things out and undo operations (or even whole apps) as well.  State of the art software engineering means even destructive operations almost universally provide for undo/redo semantics (something missing from the early days).
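
As a side note on the undo/redo point, the underlying pattern is small. Here is a minimal sketch (in Kotlin, with illustrative names rather than any platform’s actual API) of the command-stack approach that makes a destructive operation reversible:

```kotlin
// Minimal command-stack sketch for undo/redo; names are illustrative only.
interface Command {
    fun apply()    // perform the (possibly destructive) operation
    fun revert()   // restore the prior state
}

// Example command: deleting a photo from an album, remembering where it was.
class DeletePhoto(private val album: MutableList<String>, private val photo: String) : Command {
    private var index = -1
    override fun apply() {
        index = album.indexOf(photo)
        if (index >= 0) album.removeAt(index)
    }
    override fun revert() {
        if (index >= 0) album.add(index, photo)
    }
}

// Two stacks give the familiar undo/redo semantics.
class History {
    private val undoStack = ArrayDeque<Command>()
    private val redoStack = ArrayDeque<Command>()

    fun run(cmd: Command) { cmd.apply(); undoStack.addLast(cmd); redoStack.clear() }
    fun undo() { undoStack.removeLastOrNull()?.let { it.revert(); redoStack.addLast(it) } }
    fun redo() { redoStack.removeLastOrNull()?.let { it.apply(); undoStack.addLast(it) } }
}
```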

Given these two realities, one might hope that modern mobile apps are on a path to stay streamlined.

While there is a ton of great work and the modern focus on design and simplicity abounds in many apps, it is also fair to say that common design patterns are arising that represent the seeds of bloat.  Yet the platforms provide capabilities today that can be used effectively by ISVs to put people in control of their apps to avoid redundancy.  Are these being used enough?  That isn’t clear.

One example that comes to mind is the commonly used “share to” verb.  Many apps can be both the source and the sink of sharing.

For example, a mail program might be able to attach a photo from within the program.  Or the photo viewer might be able to mail a photo.

It seems routine then that there should be an “attach” verb within the mail program along with the share verb from the photo viewer.  On most platforms this is the case at least with third party mail programs as well.  This seems fast, convenient, efficient.

As you play this out over time, the mail program starts to need more than attaching a photo; it potentially needs lists of data types and objects.  As we move away from files, or as mobile platforms emphasize security, the ability for one app to enumerate data created by another becomes challenging, and thus the OS/apps need to implement namespaces or brokers.

The other side of this, share to, becomes an exceedingly long list of potential share targets.  It becomes another place for ISVs to work to gain visibility.  Some platforms allow ISVs to install multiple share targets per app, so apps show up more than once.  On Android there is even a popular third-party app whose whole purpose is to manage this list of share targets.  Windows provides this natively and apps can only install a single share-to target, which avoids this “spamming”.

As an app creator, the question is really how critical it is to provide circular access to your data types.  Can you allow the system to provide the right level of access and let people use the native paradigms for sharing?  This isn’t always possible, and the limitations (and controls) can make it impossible, so this is also a call to OS vendors to think through this “cycle” more completely.
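
To make the “native paradigm” option concrete, here is a minimal sketch (assuming Kotlin on Android and a plain-text payload; the helper name is hypothetical) of handing a share off to the system chooser instead of maintaining an in-app list of share targets:

```kotlin
import android.app.Activity
import android.content.Intent

// Hypothetical helper: let the OS enumerate and present share targets
// rather than building (and policing) that list inside the app.
fun shareText(activity: Activity, text: String) {
    val sendIntent = Intent(Intent.ACTION_SEND).apply {
        type = "text/plain"                   // the OS matches apps registered for this type
        putExtra(Intent.EXTRA_TEXT, text)     // the payload being shared
    }
    activity.startActivity(Intent.createChooser(sendIntent, "Share via"))
}
```

The receiving side is similarly small (an intent filter declaring the types an app accepts), which is the point: the enumeration and presentation of targets stays with the platform rather than being duplicated in every app.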

In the meantime, limited screen real estate is being dedicated to commands redundant with the OS and OS capabilities might be overloaded with capabilities available elsewhere.

A second example comes from cross-platform app development.  This isn’t new and many early GUI apps had this same goal (cross-platform or cross-OS version).  When you need to be cross-platform you tend to create your own mechanisms for things that might be available in the platform.  This leads to inconsistencies or redundancies, which in turn make it difficult for people to use the models inherent in the platform.

In other words, your single-app view centered around making your app easier by putting everything in the context of your app drives the feature “weight” of your app up and the complexity of the overall system up as well.  This creates a situation where everyone is acting in the interest of their app, but in a world of people using many different apps the overall experience degrades.

Whether we’re talking about user/password management, notifications, sounds, permissions/rights, or more, the question for you as an ISV is whether your convenience, your ease of access, or your desire to do things once and work across platforms is making things locally easier at the expense of the overall platform.

Considering innovation

Every development team deals with finite resources and a business need to get the most bang for the buck.  The most critical need for any app is to provide innovative new features in the specific domain of the app–if you’re a photo editing app then providing more editing capabilities seems more innovative than also being able to grab a picture from the camera directly (this is sort of a canonical example of redundancy–do many folks really start in the editor when taking a picture? Yet almost all the editors enable this because it is not a lot of extra code).

Thinking hard about what you’re using your finite resources to deliver is a job of product management.  Prioritizing domain additions over redundancy and bloat can really help to focus.  One might also look to reviewers (in the app stores or outside reviewers) to treat redundancy not as always more convenient but as somewhat of a potential challenge down the road.

Ironically, just as with the GUI era it is enthusiasts who can often drive features of apps.  Enthusiasts love shortcuts and connections along with pulling functionality into their favorite apps.  You can see this in reviews and comments on apps.  Enthusiasts also tend to have the skills and global view of the platforms to navigate the redundancy without getting lost.  So this could also be a case of making sure not to listen too closely to the most engaged…and that’s always tricky.

Designers and product managers looking to measure the innovation across the set of features chosen for a release might consider a few things that don’t necessarily count as innovation for apps on modern mobile platforms:

  • Adding more access points to previously existing commands.  Commands should have one access point, especially on small-screen devices.  Multiple access points mean that over time you’ll be creating a screen real estate challenge, and at some point some people will want everything everywhere, which won’t be possible.
  • Making it possible to invoke a command from both inside-out and outside-in.  When it comes to connecting apps with each other or apps to data, consider the most fluid and normal path and optimize for that–is it going from data to app or from app to data, is your app usually the source or the sink?  It is almost never the case that your app or app data is always the starting point and the finishing point for an operation.  Again, filling out this matrix leads to a level of bloat and redundancy across the system and a lack of predictability for customers.
  • Duplicating functionality that exists elsewhere for convenience.  It is tempting to pull commonly changed settings or verbs into your app as a point of efficiency.  The challenge with this is where does it end?  What do you do if something is no longer as common as it once was, or if the OS dramatically changes the way some functionality is accessed?  Whenever possible, rely on the native platform mechanisms even when trying to be cross-platform.
  • Thinking your app is an anchor so it needs to provide access to everything from within your app.  Everyone building an app wants their app to be the one used all the time. No one builds an app thinking they are an edge case.  This drives apps to have more and more capability that might not be central to the raison d’être for your app.  In the modern mobile world, small tools dominate and the platforms are optimized for swiftly moving between tools.  Think about how to build your app to be part of an overall orchestra of apps.  You might even consider breaking up your app if the tasks themselves are discrete rather than overloading one app.
  • Reminding yourself it is your app, but the person’s device.  “Taking over” the device as though your app is the only thing people will use isn’t being fair to people.  Just because the OS might let you add entry points or gain visibility does not mean you should take advantage of every opportunity.

These all might be interesting features and many might be low-cost ways to lengthen the change log.  The question for product managers is whether this is the best use of resources today and whether it builds an experience foundation for your app that scales down the road.

Where do you want your innovation energy to go–your domain or potential bloat?

–Steven Sinofsky

Written by Steven Sinofsky

September 24, 2013 at 6:00 pm

Posted in posts

8 steps for engineering leaders to keep the peace

When starting a new product there’s always so much more you want to do than can be done.  In early days this is where a ton of energy comes from in a new company—the feeling of whitespace and opportunity.  Pretty soon though the need for prioritized lists and realities of resource/time constraints become all too real.  Naturally the founder(s) (or your manager in a larger organization) and others push for more.  And just as naturally, the engineering leader starts to feel the pressure and pushes back.  All at once there is a push to do more and a pull to prioritize.  What happens when “an unstoppable force meets an immovable object”, when the boss is pushing for more and the engineering leader is trying to prioritize?

I had a chance to talk to a couple of folks facing this challenge within early stage companies, and a pattern emerged.  The engineering leader is trying hard to build out the platform, improve quality, and focus more on details of design.  The product-focused founder (or manager) is pushing to add features, change designs, and do it all sooner.  There’s pushback between folks.  The engineering leader was starting to worry whether pushing back was good.  The founder was starting to wonder if too much was being asked for.  Some say this is a “natural” tension, but my feeling is tension is almost always counter-productive or at least unnecessary.

There’s no precise way to know the level of push or pushback as it isn’t something you can quantify.  But it is critically important to avoid a situation that can result in a clash down the road, a loss of faith in leadership, or a let down by engineering.

As with any challenge that boils down to people, communication is the tool that is readily available to anyone. But not every communication style will work.  Engineers and other analytical types fall into some common traps when trying to cope with the immense pressure of feeling accountable to get the right things done and meet shared goals:

  • Setting expectations by always repeating “some of this won’t get done”.  This doesn’t help because it doesn’t add anything to the dialog as it is essentially a truism of any plan.
  • Debating each idea aggressively.  This breaks down the collaborative nature of the relationship and can get in the way, even though analytical folks like to make sure important topics are debated.
  • Acting in a passive aggressive manner and just tabling some inbound requests. This is almost always a reaction to “overflow” like too much sand poured in a funnel—the challenge is just managing all the inbound requests.  This doesn’t usually work because most ideas keep coming back.

What you can do is get ahead of the situation and be honest.  A suggested approach is all about defining the characteristics of the role you each have and the potential points of “failure” in the relationship.

As the engineering leader, sit down with the founder (or your manager) and kick off a discussion that goes something like this as said from the perspective of the accountable engineering leader:

  1. We both want the best product we can build, as fast as we can.
  2. I share your enthusiasm for the creativity and contributions from you and everyone else.
  3. My role is to provide an engineering cadence that delivers as much as we can, as soon as we can, with the level of quality and polish we can all be proud of.
  4. We’ll work from a transparent plan and a process that decides what to get done.
  5. As part of doing that, I’m going to sometimes feel like I end up saying “no” pretty often.
  6. And even with that, you’re going to push to change or add more.  And almost always we’ll agree that absent constraints those are good pushes.  But I’m not working without constraints.
  7. But what I worry about is that one day when things are not going perfectly (with the builds or sales), you’ll start to worry that I’m an obstacle to getting more done sooner.
  8. So right then and there, I’d like to come back to this conversation and make sure to walk through where we are and what we’re doing to recalibrate.  I don’t want you to feel like I’m being too conservative or that our work to decide what to do in what order isn’t in sync with you.

That’s the basic idea: to get ahead of what is almost certain to be a conversation down the road and to set up a framework to talk about the challenge that all engineering efforts have—getting enough done, soon enough.

Why is this so critical?  Because if you’re not talking to each other, there’s a risk you’re talking about each other.

We all know that in a healthy organization bad news travels fast. Unfortunately, when the pressure is on or there’s a shared feeling of missing expectations often the first thing to go is the very communication that can help.  When communication begins to break down there’s a risk trust will suffer.

When trust is reduced, an unhealthy cycle potentially starts.  The engineering leader starts to feel a bit like an obstacle and might start over-committing or simply quieting the voice of pragmatic concerns.  The manager or founder might start to feel like the engineering leader is slowing progress and might start to work around him/her to influence the work list.

Regardless of how the efficacy of the relationship begins to weaken, there’s always room for adjustment and learning between the two of you.  It just needs to start from a common understanding and a baseline to talk and communicate.

This is such a common challenge that it is worth an ounce of prevention and an occasional booster conversation.

–Steven Sinofsky

Written by Steven Sinofsky

September 11, 2013 at 6:00 am

Posted in posts

Bringing the shared economy to the enterprise

In much of the world’s urban areas, it can seem like there are more cars than people. In the U.S., there are nearly 800 cars per 1,000 people. With that comes increasing congestion, pollution, and resource consumption. Yet, surprisingly, the utilization of vehicles is at an all-time low—to put it simply, the more vehicles there are, the harder it is to keep them all in use. That’s a lot of waste.

Throughout government and private business, tens of millions of passenger cars are part of vehicle fleets used on-demand by employees. Making vehicles available when and where needed and keeping track of them is a surprisingly manual process today. Not surprisingly as a result, it’s fraught with high costs and low efficiency. In an effort to meet demand, managers of these fleets simply add vehicles to meet the highest peak demand.  This results in more cars to own, manage, insure, store, and so on. But maddeningly, most of these cars end up either sitting idle, parked in the wrong place, or awaiting replacement of lost keys.

John Stanfield and Clement Gires had an idea for a better way to tackle the fleet problem. They shared a vision for reducing the number of cars on the road and increasing the amount any given car is used, while also making it easier to use a shared car than with any existing program.

John has a physics degree from Central Washington University and a Master’s degree in Mechanical Engineering from Stanford. He’s a conservationist at heart, having spent his years just after college as a forest firefighter. Along the way he invented an engine that processed vegetable oil into biodiesel. At Stanford, he began implementing an idea for a new type of vehicle—an electric car for urban areas that would be a resource shared among people, not owned by a single person. It would be a car that you jump in and use when needed, on demand.

About the same time, Clement Gires was studying behavioral economics at École Polytechnique when he wasn’t also working as part of a high-altitude Alpine rescue unit. Clement worked on the famed Vélib’ bicycle sharing program in Paris which encompasses over 18,000 bicycles in 1,200 locations providing well over 100,000 daily rides. Clement brought novel approaches to improve the distribution and utilization of bikes to the program before coming to the U.S. to study Management Science and Engineering at Stanford.

While climbing in Yosemite, John and Clement got to know each other. Initially, they spent time pursuing the electric vehicle John began, but soon realized that the real value of their work was in the underlying technology for sharing, which could be applied to any car.

Local Motion is bringing to market a unique combination of hardware, software, and services that redefine the way fleets of vehicles can be deployed, used, and managed. There are three unique aspects of the business, which come together in an incredible offering:

  • Simple design.  Open the app on your mobile device, locate a car or just go out to the designated spots and locate a car with a green light visible in the windshield—no reservations required. Walk up to the car, swipe your card key (same one you use for the office) or use your Bluetooth connected phone and the car unlocks and you’re in control. Forget to plug in your electric car and you’ll even get a text message. When you’re done, swipe your key to lock the car and let the system know the car is free.
  • Powerful hardware.  Underneath the dash is a small box that takes about 20 minutes to install.  In the corner of the windshield is an indicator light that lets you know from a distance if the car is free or in use. The hardware works in all cars and offers a range of telemetry for the fleet manager beyond just location. In modern electric cars, the integration is just as easy but even deeper and more full-featured.
  • Elegant software. Local Motion brings “consumerization of IT” to fleet management.  For the fleet manager, the telematics are presented in a friendly user experience that integrates with your required backend infrastructure.

The folks at Local Motion share a vision for creating the largest network of shared vehicles. Today, customers are already using the product in business and government, but it’s easy to imagine a future where their technology could be used with any car.

Today, we are excited to announce that Andreessen Horowitz is leading a $6M Series A investment in Local Motion. I’m thrilled to join the board of Local Motion with John and Clement as part of my first board partner role with Andreessen Horowitz (see Joining a16z on this blog).

–Steven Sinofsky

This was also posted on http://blog.pmarca.com/

Written by Steven Sinofsky

August 28, 2013 at 7:00 am

Posted in a16z

steven @ a16z

As a reader of this blog, you have probably noticed the two big themes of learning (by shipping): (1) learning about new technologies, new ways to do things, and new products, and (2) improving how products are made from an engineering and management perspective.

I’m especially excited to learn by spending more time with entrepreneurs and those creating new technologies and products. Andreessen Horowitz is a VC firm that believes deeply in helping entrepreneurs and helping change the product and business landscape, which is why I am thrilled to join the firm as a board partner.

Board partners are unique at a16z. In this position I will represent the firm on the boards of portfolio companies when the opportunities present themselves, but will not be a full-time member of the firm.

I’m relatively new to the VC world and have a lot of learning to do—and I am very excited to do that. I can’t think of a better place to do this than a16z, as they share the commitment to learning and sharing that learning, for example through all the blog posts the GPs write. I first got to know Ben, Marc, and some of the over 70 people at the firm starting late last year. What was so cool to see was the commitment to fostering innovation, product creation, and working with product-focused entrepreneurs.

More than anything, what I find so cool about a16z are the values clearly articulated and lived day to day by everyone at the firm. From the very first time I got to hang out with folks I saw things that reminded me of the values that contribute to all great product (and company) efforts:

  1. Team effort – Scalable work that “goes big” requires a lot of people. Being part of a team that works to let each person contribute at their highest level is how the resulting whole is greater than the sum of the parts.
  2. Long term – Sustainable efforts take more than one turn of the crank. The commitment to the long term that starts from building strong relationships through supporting entrepreneurs as they create sustainable products and businesses truly differentiates the a16z approach.

My own experience in product development has been focused on learning and changing from within an organization as part of teams—scaling teams, building the first professional GUI dev tools for Windows, marshaling the company around the “InterNet”, bringing together disparate apps to create Office, creating the first collaboration servers, and shifting to the tablet era. Each was decidedly a new effort working to change the rules of the product game while learning along the way. Bringing this relevant experience to new companies is something I’m excited to do.

Among other activities, I will maintain my EIR with Harvard Business School and will continue to pursue other business and product development opportunities that arise.  

As folks following me on Twitter or Facebook know, we’ve been splitting our time between coasts and will do so for a bit more time before transitioning to the Bay Area full time. I will still definitely explore companies out East, but maintain a strong focus on the Bay Area.

Of course I will continue blogging here on learning by shipping (and on LinkedIn Influencers) and you can follow me @stevesi or email me.

–Steven Sinofsky

Written by Steven Sinofsky

August 22, 2013 at 7:15 am

Posted in posts

Continuous Productivity: New tools and a new way of working for a new era

What happens when the tools and technologies we use every day become mainstream parts of the business world?  What happens when we stop leading separate “consumer” and “professional” lives when it comes to technology stacks?  The result is a dramatic change in the products we use at work and, as a result, an upending of the canon of management practices that define how work is done.

This paper argues that business must embrace the consumer world and see it not as different, less functional, or less enterprise-worthy, but as the new path forward for how people will use technology platforms, how businesses will organize and execute work, and how the roles of software and hardware will evolve in business. Our industry speaks volumes about the consumerization of IT, but maybe that is not going far enough given the incredible pace of innovation and depth of usage of the consumer software world.  New tools are appearing that radically alter the traditional definitions of productivity and work.  Businesses failing to embrace these changes will find their employees simply working around IT at levels we have not seen even during the earliest days of the PC.  Too many enterprises are either flat-out resisting these shifts or hoping for a “transition”—disruption is taking place, not only to every business, but within every business.

Paradigm shift

Continuous productivity describes an era that fosters a seamless integration between consumer and business platforms.  Today, tools and platforms used broadly for our non-work activities are often used for work, but under the radar.  The cloud-powered smartphone and tablet, as productivity tools, are transforming the world around us along with the implied changes in how we work to be mobile and more social. We are in a new era, a paradigm shift, where there is evolutionary discontinuity, a step-function break from the past. This constantly connected, social, and mobile generational shift is ushering in a period on par with industrial production or the information society of the 20th century. Together our industry is shaping a new way to learn, work, and live with the power of software and mobile computing—an era of continuous productivity.

Continuous productivity manifests itself as an environment where the evolving tools and culture make it possible to innovate more and faster than ever, with significantly improved execution. Continuous productivity shifts our efforts from the start/stop world of episodic work and work products to one that builds on the technologies that start to answer what happens when:

  • A generation of new employees has access to the collective knowledge of an entire profession and experts are easy to find and connect with.
  • Collaboration takes place across organization and company boundaries with everyone connected by a social fiber that rises above the boundaries of institutions.
  • Data, knowledge, analysis, and opinion are equally available to every member of a team in formats that are digital, sharable, and structured.
  • People have the ability to time slice, context switch, and proactively deal with situations as they arise, shifting from a world of start/stop productivity and decision-making to one that is continuous.

Today our tools force us to hurry up and wait, then react at all hours to that email or notification of available data.  Continuous productivity provides us a chance at a more balanced view of time management because we operate in a rhythm with tools to support that rhythm.  Rather than feeling like you’re on call all the time waiting for progress or waiting on some person or event, you can simply be more effective as an individual, team, and organization because there are new tools and platforms that enable a new level of sanity.

Some might say this is predicting the present and that the world has already made this shift.  In reality, the vast majority of organizations are facing challenges or even struggling right now with how the changes in the technology landscape will impact their efforts.  What is going on is nothing short of a broad disruption—even winning organizations face an innovator’s dilemma in how to develop new products and services, organize their efforts, and communicate with customers, partners, and even within their own organizations.  This disruption is driven by technology, and is not just about the products a company makes or services offered, but also about the very nature of companies.

Today’s socialplace

The starting point for this revolution in the workplace is the socialplace we all experience each and every day.

We carry out our non-work (digital) lives on our mobile devices.  We use global services like Facebook, Twitter, Gmail, and others to communicate.  In many places in the world, local services such as Weibo, MixIt, mail.ru, and dozens of others are used routinely by well over a billion people collectively.  Entertainment services from YouTube and Netflix to Spotify and Pandora dominate non-TV entertainment and the Internet itself.  Relatively new services such as Pinterest or Instagram enter the scene and are used deeply by tens of millions in a relatively short time.

While almost all of these services are available on traditional laptop and desktop PCs, the incredible growth in usage from smartphones and tablets has come to represent not just the leading edge of the scenario, but the expected norm.  Product design is done for these experiences first, if not exclusively. Most would say that designing for a modern OS first or exclusively is the expected way to start on a new software experience.  The browser experience (on a small screen or desktop device) is the backup to a richer, more integrated, more fluid app experience.

In short, the socialplace we are all familiar with is part of the fabric of life in much of the world and only growing in importance. The generation growing up today will of course only know this world and what follows. Around the world, the economies undergoing their first information revolutions will do so with these technologies as the baseline.

Historic workplace

Briefly, it is worth reflecting on and broadly characterizing some of the history of the workplace to help to place the dramatic changes into historic context.

Mechanized productivity

The industrial revolution that defined the first half of the 20th century marked the start of modern business, typified by high-volume, large-scale organizations.  Mechanization created a culture of business derived from the capabilities and needs of the time. The essence of mechanization was the factory which focused on ever-improving and repeatable output.  Factories were owned by those infusing capital into the system and the culture of owner, management, and labor grew out of this reality.  Management itself was very much about hierarchy. There was a clear separation between labor and management primarily focused on owners/ownership.

The information available to management was limited.  Supply chains and even assembly lines themselves were operated with little telemetry or understanding of the flow of raw materials through to sales of products. Even great companies ultimately fell because they lacked the ability to gather insights across this full spectrum of work.

Knowledge productivity

The problems created by the success of mechanized production were met with a solution—the introduction of the computer and the start of the information revolution.  The mid-20th century would kick off a revolution in business marked by global and connected organizations.  Knowledge created a new culture of business derived from the information gathering and analysis capabilities of first the mainframe and then the PC.

The essence of knowledge was the people-centric office which focused on ever-improving analysis and decision-making to allocate capital, develop products and services, and coordinate the work across the globe.  The modern organization model of a board of directors, executives, middle management, and employees grew out of these new capabilities.  Management of these knowledge-centric organizations happened through an ever-increasing network of middle-managers.  The definition of work changed and most employees were not directly involved in making things, but in analyzing, coordinating, or servicing the products and services a company delivered.

The information available to management grew exponentially.  Middle-management grew to spend their time researching, tabulating, reporting, and reconciling the information sources available.  Information spanned from quantitative to qualitative and the successful leaders were expert or well versed in not just navigating or validating information, but in using it to effectively influence the organization as a whole.  Knowledge is power in this environment.  Management took over the role of resource allocation from owners and focused on decision-making as the primary effort, using knowledge and the skills of middle management to inform those choices.

A symbol of knowledge productivity might be the meeting.  Meetings came to dominate the culture of organizations:  meetings to decide what to meet about, meetings to confirm that people were on the same page, meetings to follow up from other meetings, and so on.  Management became very good at justifying meetings and the work that went into preparing for, having, and following up from meetings.  Power derived from holding meetings, creating follow-up items, and more.  The work products of meetings (the pre-reading memos, the presentations, the supporting analytics) began to take on epic proportions.  Staff organizations developed that shadowed the whole process.

The essence of these meetings was to execute on a strategy—a multi-year commitment to create value, defend against competition, and execute.  Much of the headquarters mindset of this era was devoted to strategic analysis and planning.

The very best companies became differentiated by their use of information technologies in now legendary ways such as to manage supply chain or deliver services to customers.  Companies like Wal-Mart pioneered the use of technology to bring lower prices and better inventory management.  Companies like the old MCI developed whole new products based entirely on the ability to write software to provide new ways of offering existing services.

Even with the broad availability of knowledge and information, companies still became trapped in the old ways of doing things, unable to adapt and change.  The role of disruption as a function not just of technology development but of management decision-making showed the intricate relationship between the two. With this era of information technology came the notion of companies too big and too slow to react to changes in the marketplace, even with information right there in front of collective eyes.

The impact of software, as we finished the first decade of the 21st century, is more profound than even the most optimistic software people would have predicted.  As the entrepreneur and venture capitalist Marc Andreessen wrote two years ago, “software is eating the world”.  Software is no longer just about the internal workings of business or a way to analyze information and execute more efficiently, but has come to define what products a business develops, offers, and serves.  Software is now the product, from cars to planes to entertainment to banking and more. Every product not only has a major software component but it is also viewed and evaluated through the role of software.  Software is ultimately the product, or at least a substantial part of differentiation, for every product and service.

Today’s workplace: Continuous Productivity

Today’s workplace is as different as the office was from the factory.

Today’s organizations are either themselves mobile or serving customers that are mobile, or likely both.  Mobility is everywhere we look—from apps for consumers to sales people in stores and the cash registers to plane tickets.  With mobility comes an unprecedented degree of freedom and flexibility—freedom from locality, limited information, and the desktop computer.

The knowledge-based organization spent much energy connecting the dots between qualitative sampling and data sourced from what could be measured.  Much effort went into trying to get more sources of data and to seek the exact right answer to important management decisions.  Today’s workplace has access to more data than ever before, but along with that came the understanding that data is not right just because it came from a computer.  Data is telemetry based on usage from all aspects of the system and goes beyond sampling and surveys.  The use of data today replaces algorithms seeking exact answers with heuristics that use a moment’s worth of statistical data to guess the best answer.  Today’s answers change over time as more usage generates more data.  We no longer spend countless hours debating causality because what is happening is right there before our eyes.

We see this all the time in the promotion of goods on commerce sites, the use of keyword search and SEO, even the way that search itself corrects spellings or maps use a vast array of data to narrow a potentially very large set of results from queries.  Technologies like speech or vision have gone from trying to compute the exact answer to using real-time data to provide contextually relevant and even more accurate guesses.
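
To make this shift concrete, here is a minimal Python sketch in the spirit of the spelling-correction example: rather than consulting an authoritative dictionary for the one right answer, it guesses the most likely intent from a log of what people actually typed.  The query log, the one-edit candidate generation, and the function names are illustrative assumptions, not any particular product’s implementation.

```python
from collections import Counter

# Hypothetical query log standing in for real usage telemetry.
query_log = ["weather", "weather", "weather", "whether", "leather", "weather"]
frequency = Counter(query_log)

def edits1(word):
    """All strings one edit (delete, transpose, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    inserts = [a + c + b for a, b in splits for c in letters]
    return set(deletes + transposes + replaces + inserts)

def suggest(query):
    """Guess the intended query from observed usage rather than a fixed rule."""
    if query in frequency:
        return query
    candidates = edits1(query) & frequency.keys()
    # The "best" answer is simply the most common thing people actually typed.
    return max(candidates, key=frequency.get, default=query)

print(suggest("waether"))  # likely "weather", because the usage data says so
```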

The availability of these information sources is moving from a hierarchical access model of the past to a much more collaborative and sharing-first approach.  Every member of an organization should have access to the raw “feeds” that could be material to their role.  Teams become the focus of collaborative work, empowered by the data to inform their decisions.  We see the increasing use of “crowds” and product usage telemetry able to guide improved service and products, based not on qualitative sampling plus “judgment” but on what amounts to a census of real-world usage.

Information technology is at the heart of all of these changes, just as it was in the knowledge era.  The technologies are vastly different.  The mainframe was about centralized information and control.  The PC era empowered people to first take mainframe data and make better use of it and later to create new, but inherently local or workgroup specific information sources.  Today’s cloud-based services serve entire organizations easily and can also span the globe, organizations, and devices.  This is such a fundamental shift in the availability of information that it changes everything in how information is collected, shared, and put to use. It changes everything about the tools used to create, analyze, synthesize, and share information.

Management using yesterday’s techniques can’t seem to keep up with this world.  People are overwhelmed by the power of customers with all this information (such as when social networks create a backlash about an important decision, or when a buyer arrives at a car dealer armed with local pricing information).  Within organizations, managers are constantly trying to stay ahead of the curve.  The “young” employees seem to know more about what is going on because of Twitter and Facebook or just being constantly connected.  Even information about the company is no longer the sole domain of management, as the press is able to uncover, or at least speculate about, the workings of a company and employees see this speculation long before management communicates with them.  Where people used to sit in important meetings and listen to important people guess about information, people now get real data from real sources in real-time while the meeting is taking place or even before.

This symbol of the knowledge era, the meeting, is under pressure because of the inefficiency of a meeting when compared to learning and communicating via the technology tools of today.  Why wait for a meeting when everyone has the information required to move forward available on their smartphones?  Why put all that work into preparing a perfect pitch for a meeting when the data is changing and is a guess anyway, likely to be further informed as the work progresses?  Why slow down when competitors are speeding up?

There’s a new role for management that builds on this new level of information and employees skilled in using it.  Much like those who grew up with PCs “natively” were quick to assume their use in the workplace (some might remember the novelty of when managers first began to answer their own email), those who grow up with the socialplace are using it to do work, much to the chagrin of management.

Management must assume a new type of leadership that is focused on framing the outcome, the characteristics of decisions, and the culture of the organization, and much less on specific decision-making or reviewing work.  The role of workplace technology has evolved significantly from theory to practice as a result of these tools.  The following table contrasts the way we work between the historic norms and continuous productivity.

Then | Now, Continuous Productivity
Process | Exploration
Hierarchy, top down or middle out | Network, bottom up
Internal committees | Internal and external teams, crowds
Strategy-centric | Execution-centric
Presenting packaged and produced ideas, documents | Sharing ideas and perspectives continuously, service
Data based on snapshots at intervals, viewed statically | Data always real-time, viewed dynamically
Process-centric | Rhythm-centric
Exact answers | Approximation and iteration
More users | More usage

Today’s workplace technology, theory

Modern IT departments, fresh off the wave of PC standardization and broad homogenization of the IT infrastructure, developed the tools and techniques to maintain, nay contain, the overall IT infrastructure.

A significant part of the effort involved managing the devices that access the network, primarily the PC.  Management efforts ran the gamut from logon scripts, drive scanning, anti-virus software, standard (or only) software loads, imaging, two-factor authentication, and more.  Motivating this were the longstanding reliability and security problems of the connected laptop—the architecture’s openness, so responsible for the rise of the device, also created this fragility.  We can see this expressed in two symbols of the challenges faced by IT: the corporate firewall and collaboration.  Both of these offer good theories but somewhat backfire in practice in today’s context.

With the rise of the Internet, the corporate firewall occupied a significant amount of IT effort.  It also came to symbolize the barrier between employees and information resources.  At some extremes, companies would routinely block known “time wasters” such as social networks and free email.  Then over time as the popularity of some services grew, the firewall would be selectively opened up for business purposes.  YouTube and other streaming services are examples of consumer services that transitioned to an approved part of enterprise infrastructure given the value of information available.  While many companies might view Twitter as a time-wasting service, the PR departments routinely use it to track news and customer service might use it to understand problems with products so it too becomes an expected part of infrastructure.  These “cracks” in the notion of enterprise v. consumer software started to appear.

Traditionally the meeting came to symbolize collaboration.  The business meeting which occupied so much of the knowledge era has taken on new proportions with the spread of today’s technologies.  Businesses have gone to great lengths to automate meetings and enhance them with services.  In theory this works well and enables remote work and virtual teams across locations to collaborate.  In practical use, for many users the implementation was burdensome and did not support the wide variety of devices or cross-organization scenarios required.  The merger of meetings with the traditional tools of meetings (slides, analysis, memos) was also cumbersome, as sharing these across the spectrum of devices and tools was awkward.  We are all familiar with the first 10 minutes of every meeting now turning into a technology timesink where people get connected in a variety of ways and then sync up with the “old tools” of meetings while they use new tools in the background.

Today’s workplace technology, practice

In practice, the ideal view that IT worked to achieve has been rapidly circumvented by the low-friction, high availability of a wide variety of faster-to-use, easier-to-use, more flexible, and very low-cost tools that address problems in need of solutions.  Even though this is somewhat of a repeat of the introduction of PCs in the early 1990s, this time around securing or locking down the usage of these services is far more challenging than preventing network access and isolating a device.  The Internet works to make this so, by definition.

Today’s organizations face an onslaught of personally acquired tablets and smartphones that are becoming, or already are, the preferred devices for accessing information and communication tools.  As anyone who uses a smartphone knows, accessing your inbox from your phone quickly becomes the preferred way to deal with the bulk of email.  How often do people use their phones to quickly check mail while sitting in front of a PC that is on and ready?  How much faster is it to triage email on a phone than it is on your PC?

These personal devices are seen in airports, hotels, and business centers around the world.  The long battery life, fast startup time, relative freedom from maintenance, and of course the wide selection of new apps for a wide array of services make these very attractive.

There is an ongoing debate about “productivity” on tablets.  In nearly all ways this debate was never a debate, but just a matter of time.  While many look at existing scenarios to be replicated on a tablet as a measure of success of tablets at achieving “professional productivity”, another measure is how many professionals use their tablets for their jobs and leave their laptops at home or work.  By that measure, most are quick to admit that tablets (and smartphones) are a smashing success.  The idea that tablets are used only for web browsing and light email seems as quaint as claiming PCs cannot do the work of mainframes—a common refrain in the 1980s.  In practice, far too many laptops have become literally desktops or hometops.

While tools such as AutoCAD, Creative Suite, or enterprise line-of-business applications will be required, and will require PCs, for many years to come, the definition of professional productivity will come to include all the tasks that can be accomplished on smartphones and tablets.  The nature of work is changing, and so the reality of the tools in use is changing as well.

Perhaps the most pervasive services for work use are cloud-based storage products such as DropBox, Hightail (YouSendIt), or Box.  These products are acquired easily by consumers, have straightforward browser-based interfaces and apps on all devices, and most importantly solve real problems required by modern information sharing.  The basic scenario of sharing large files with customers or partners (or even fellow employees) across heterogeneous devices and networks is easily addressed by these tools.  As a result, expensive and elaborate (or often much richer) enterprise infrastructure goes unused for this most basic of business needs—sharing files.  Even the ubiquitous USB memory stick is used to get around the limitations of enterprise storage products, much to the chagrin of IT departments.

Tools beyond those approved for communication are routinely used by employees on their personal devices (except of course in regulated industries).  Tools such as WhatsApp or WeChat have hundreds of millions of users.  A quick look at Facebook or Twitter shows that for many of those actively engaged, the sharing of work information, especially news about products and companies, is a very real effort that goes beyond “the eggs I had for breakfast”, as social networks have sometimes been characterized.  LinkedIn has become the go-to place for sales people learning about customers and partners and for recruiters seeking to hire (or headhunt), and is increasingly becoming a primary source of editorial content about work and the workplace.  Leading strategists are routinely read by hundreds of thousands of people on LinkedIn, and their views are shared across the networks employees maintain with their fellow employees.  It has become challenging for management to “compete” with the level and volume of discourse among employees.

The list of devices and services routinely used by workers at every level is endless.  The reality appears to be that for many employees the number of hours of usage in front of approved enterprise apps on managed enterprise devices is on the decline, unless new tablets and phones have been approved.  The consumerization of IT appears to be very real, just by anecdotally observing the devices in use on public transportation, airports, and hotels.  Certainly the conversation among people in suits over what to bring on trips is real and rapidly tilting towards “tablet for trips”, if not already there.

The frustration people have with IT’s ability to deliver or approve the use of services is readily apparent, just as is the frustration IT has with people pushing to use insecure, unapproved, and hard-to-manage tools and devices.  Whenever IT puts in a barrier, it is just a big rock in the information river that is an organization, and information just flows around it.  Forward-looking IT is working diligently to get ahead of this challenge, but the models used to rein in control of PCs and servers on corporate premises will prove of limited utility.

A new approach is needed to deal with this reality.

Transition versus disruption

The biggest risk organizations face is thinking the transition to a new way of working will be just that, a transition, rather than a disruption.  While individuals within an organization, particularly those in senior management, will seek to smoothly transition from one style of work to another, the bulk of employees will switch quickly.  Interns, new hires, or employees looking for an edge see these changes as the new normal, or the only normal they’ve ever experienced.  Our own experience with PCs is proof of how quickly change can take place.

In Only the Paranoid Survive, Andy Grove discussed breaking the news to employees of a new strategy at Intel, only to find out that employees had long ago recognized the need for change—much to the surprise of management.  The nature of a disruptive change in management is one in which management believes they are planning a smooth transition to new methods or technologies, only to find out employees have already adopted them.

Today’s technology landscape is one undergoing a disruptive change in the enterprise—the shift to cloud-based services, social interaction, and mobility.  There is no smooth transition that will take place.  Businesses that believe people will gradually move from yesterday’s modalities of work to these new ways will be surprised to learn that people are already working in these new ways.  Technologists seeking solutions that “combine the best of both worlds” or “technology bridge” solutions will only find themselves comfortably dipping their toes in the water, further solidifying an old approach while competitors race past them.  The nature of disruptive technologies is the relentless all-or-nothing that they impose as they charge forward.

While some might believe that continuing to focus on “the desktop” will enable a smoother transition to mobile (or consumer) while the rough edges are worked out or capabilities catch up to what we already have, this is precisely the innovator’s dilemma: hunkering down and hoping things will not change as quickly as they appear to be changing.  In fact, to solidify this point of view, many will point to the lack of a precipitous decline or to the mission-critical nature of traditional ways of working.  The tail is very long, but innovation and competitive edge will not come from the tail.  Too much focus on the tail risks being left behind, or at the very least distracts from where things are rapidly heading.  Compatibility with existing systems has significant value, but is unlikely to bring about more competitive offerings, better products, or step-function improvements in execution.

Culture of continuous productivity

The culture of continuous productivity enabled by new tools is literally a rewrite of the past 30 years of management doctrine.  Hierarchy, top-down decision making, strategic plans, static competitors, single-sided markets, and more are almost quaint views in a world flattened by the presence of connectivity, mobility, and data.  The impact of continuous productivity can be viewed through the organization, individuals and teams, and the role of data.

The social and mobile aspects of work finally gain the support of digital tools, and with those tools comes the realization of just how much of nearly all work is intrinsically social.  The existence and paramount importance of “document creation tools” as the definition of work appear, in hindsight, to have been a slight detour of our collective focus.  Tools can now work more the way we like to work, rather than forcing us to structure our work to suit the tools.  Every new generation of tools comes with promises of improvements, but we’ve already seen how the newest styles of work lead to improvements in our lives outside of work.  Where it used to be novel for the person with a PC to use those tools to organize a sports team or school function, now we see the reverse: the tools for the rest of life being used to improve our work.

This existence proof makes this revolution different.  We already experience the dramatic improvements in our social and non-work “processes”.  Just as our non-work lives saw improvements, with the support and adoption of new tools we will see improvements in work.

The cultural changes encouraged or enabled by continuous productivity include:

  • Innovate more and faster.  The bottom line is that by compressing the time between meaningful interactions among members of a team, we will go from problem to solution faster.  Whether solving a problem with an existing product or service or thinking up a new one, the continuous nature of communication speeds up the velocity and quality of work.  We all experience the pace at which changes outside work take place compared to the slow pace of change within our workplaces.
  • Flatten hierarchy.  The difficulty of broad communication, the formality of digital tools, and restrictions on the flow of information all fit perfectly with a strict hierarchical model of teams.  Managers “knew” more than others.  Information flowed down.  Management informed employees.  Equal access to tools and information, a continuous multi-way dialog, and the ease of bringing together relevant parties regardless of place in the organization flatten the hierarchy.  But more than that, they shine a light on the ineffectiveness and irrelevancy of a hierarchy as a command structure.

  • Improve execution.  Execution improves because members of teams have access to the interactions and data in real-time.  Gone are the days of “game of telephone” where information needed to “cascade” through an organization only to be reinterpreted or even filtered by each level of an organization.
  • Respond to changes using telemetry / data.  With the advent of continuous real-world usage telemetry, the debate and dialog move from deciding what the problems to be solved might be to solving the problem.  You don’t spend energy arguing over the problem, but debating the merits of various solutions.

  • Strengthen organization and partnerships.  Organizations that communicate openly and transparently leave much less room for politics and hidden agendas.  The transparency afforded by tools might introduce some rough and tumble in the early days as new “norms” are created but over time the ability to collaborate will only improve given the shared context and information base everyone works from.
  • Focus on the destination, not the journey.  The real-time sharing of information forces organizations to operate in real-time. Problems are in the here and now and demand solutions in the present. The benefit of this “pressure” is that a focus on the internal systems, the steps along the way, or intermediate results is, out of necessity, de-emphasized.

Organization culture change

Continuously productive organizations look and feel different from traditional organizations. As a comparison, consider how different a reunion (college, family, etc.) is in the era of Facebook usage. When everyone gets together there is so much more that is known—the reunion starts from shared context and “intimacy”.  Organizations should be just as effective, no matter how big or how geographically dispersed.

Effective organizations were previously defined by rhythms of weekly, monthly, and quarterly updates.  These “episodic” connection points had high production values (and costs) and, ironically, relatively low retention and usage.  Management liked this approach as it placed a high value on, and required, active management as distinct from the work.  Tools were designed to run these meetings or email blasts, but over time these were far too often over-produced and tended to be used more for backward-looking pseudo-accountability.

Looking ahead, continuously productive organizations will be characterized by the following:

  • Execution-centric focus.  Rather than indexing on the process of getting work done, the focus will shift dramatically to execution.  The management doctrine of the late 20th century was about strategy.  For decades we all knew that strategy took a short time to craft, but in practice it almost took on a life of its own.  This often led to an ever-widening gap between strategy and execution, with execution being left to those of less seniority.  When everyone has the ability to know what can be known (which isn’t everything) and to know what needs to be done, execution reigns supreme.  The opportunity to improve or invent will be everywhere, and even with finite resources available, the biggest failure of an organization will be a failure to act.
  • Management framing context with teams deciding.  Because information required discovery and flowed (deliberately) inefficiently, management tasked itself with deciding “things”.  The entire process of meetings degenerated into a ritualized process to inform management to decide amongst options while outside the meeting “everyone” always seemed to know what to do.  The new role of management is to provide decision-making frameworks, not decisions.  Decisions need to be made where there is the most information.  Framing the problem to be solved out of the myriad of problems and communicating that efficiently is the new role of management.
  • Outside is your friend.  Previously the prevailing view was that inside companies there was more information than there was outside and often the outside was viewed as being poorly informed or incomplete. The debate over just how much wisdom resides in the crowd will continue and certainly what distinguishes companies with competitive products will be just how they navigate the crowd and simultaneously serve both articulated and unarticulated needs.  For certain, the idea that the outside is an asset to the creation of value, not just the destination of value, is enabled by the tools and continuous flow of information.
  • Employees see management participate and learn, everyone has the tools of management.  It took practically 10 years from the introduction of the PC until management embraced it as a tool for everyday use.  The revolution of social tools is totally different because today management already uses the socialplace tools outside of work.  Using Twitter for work is little different from using Facebook for family.  Employees expect management to participate directly and personally, whether the tool is a public cloud service or a private/controlled service.  The idea of having an assistant participate on behalf of a manager with a social tool is as archaic as printing out email and typing in handwritten replies.  Management no longer has separate tools or a different (more complete) set of books for the business; rather, information about projects and teams becomes readily accessible.
  • Individuals own devices, organizations develop and manage IP. PCs were first acquired by individual tech enthusiasts or leading edge managers and then later by organizations.  Over time PCs became physical assets of organizations.  As organizations focused more on locking down and managing those assets and as individuals more broadly had their own PCs, there was a decided shift to being able to just “use a computer” when needed.  The ubiquity of mobile devices almost from the arrival of smartphones and certainly tablets, has placed these devices squarely in the hands of individuals. The tablet is mine. And because it is so convenient for the rest of my life and I value doing a good job at work, I’m more than happy to do work on it “for free”.  In exchange, organizations are rapidly moving to tools and processes that more clearly identify the work products as organization IP not the devices.  Cloud-based services become the repositories of IP and devices access that through managed credentials.

Individuals and teams work differently

The new tools and techniques come together to improve upon the way individuals and teams interact.  Just as the first communication tools transformed business, the tools of mobile and continuous productivity change the way interactions happen between individuals and teams.

  • Sense and respond.  Organizations through the PC era were focused on cycles of planning and reacting.  The long lead time to plan, combined with the time to plan a reaction to events that were themselves often measured with delay, characterized “normal”.  New tools are much more real-time, and the information presented represents the whole of the information at work, not just samples and surveys.  The way people work will focus much more on everyone being sensors for what is going on and responding in real-time.  Think of the difference between calling for a car or hailing a cab and using Uber or Lyft, from either the consumer perspective or the business perspective of load balancing cars and knowing the assets at hand, as representative of sensing and responding rather than planning.
  • Bottom up and network centric.  The idea of management hierarchy or middle management as gatekeepers is being broken down by the presence of information and connectivity.  The modern organization working to be the most productive will foster an environment of bottom up—that is, people closest to the work are empowered with information and tools to respond to changes in the environment.  These “bottoms” of the organization will be highly networked with each other and connected to customers, partners, and even competitors.  The “bandwidth” of this network is seemingly instant, facilitated by information sharing tools.
  • Team and crowd spanning the internal and external.  The barriers of an organization will take on less and less meaning when it comes to the networks created by employees.  Nearly all businesses at scale are highly virtualized across vendors, partners, and customers.  Collaboration on product development, product implementation, and product support take place spanning information networks as well as human networks.  The “crowd” is no longer a mob characterized by comments on a blog post or web site, but can be structured and systematically tapped with rich demographic information to inform decisions and choices.
  • Unstructured work rhythm.  The highly structured approach to work that characterized the 20th century was created out of a necessity for gathering, analyzing, and presenting information for “costly” gatherings of time constrained people and expensive computing.  With the pace of business and product change enabled by software, there is far less structure required in the overall work process.  The rhythm of work is much more like routine social interactions and much less like daily, weekly, monthly staff meetings.  Industries like news gathering have seen these radical transformations, as one example.

Data becomes pervasive (and big)

With software capabilities come ever-increasing data and information.  While the 20th century enabled the collection of data and to a large degree the analysis of data to yield ever improving decisions in business, the prevalence of continuous data again transforms business.

  • Sharing data continuously.  First and foremost, data will now be shared continuously and broadly within organizations.  The days when reports were something for management, and management waited until the end of the week or month to disseminate filtered information, are over.  Even though financial data has been relatively available, we’re now able to see how products are used, troubleshoot problems customers might be having, understand the impact of small changes, and try out alternative approaches.  Modern organizations will provide tools that enable the continuous sharing of data through mobile-first apps that don’t require connectivity to corporate networks or systems chained to desktop resources.
  • Always up to date.  The implication of continuously sharing information means that everyone is always up to date.  When having a discussion or meeting, the real world numbers can be pulled up right then and there in the hallway or meeting room.  Members of teams don’t spend time figuring out if they agree on numbers, where they came from or when they were “pulled”.  Rather the tools define the numbers people are looking at and the data in those tools is the one true set of facts.
  • Yielding best statistical approach informed by telemetry (induction).  The notion that there is a “right” answer is as antiquated as the printed report.  We can now all admit that going to a meeting with a printed-out copy of “the numbers” is not worth the debate over the validity or timeframe of those numbers (“the meeting was rescheduled, now we have to reprint the slides.”)  Meetings now are informed by live data using tools such as Mixpanel or live reporting from Workday, Salesforce, and others.  We all know now that “right” is the enemy of “close enough” given that the datasets we can work with are truly based on census and not surveys.  This telemetry facilitates an inductive approach to decision-making; a minimal sketch of the idea follows this list.
  • Valuing more usage.  Because of the ability to truly understand the usage of products—movies watched, bank accounts used, limousines taken, rooms booked, products browsed and more—the value of having more people using products and services increases dramatically.  Share matters more in this world because with share comes the best understanding of potential growth areas and opportunities to develop for new scenarios and new business approaches.
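
As a minimal sketch of this inductive, telemetry-informed style, consider the running estimate below.  It assumes nothing more than a stream of anonymous usage events; the class name, event stream, and thresholds are hypothetical, chosen only to show that the answer is always available and simply tightens as more real-world usage arrives, rather than waiting for a “final” exact report.

```python
import math

class UsageEstimate:
    """Running estimate of how often a task succeeds, updated from telemetry.

    The estimate is available at any moment and narrows as usage accumulates,
    in the spirit of "close enough now" beating "exact later".
    """
    def __init__(self):
        self.successes = 0
        self.trials = 0

    def record(self, succeeded: bool):
        self.trials += 1
        self.successes += int(succeeded)

    def rate(self) -> float:
        return self.successes / self.trials if self.trials else 0.0

    def margin(self, z: float = 1.96) -> float:
        # Normal-approximation margin of error around the observed rate.
        if self.trials == 0:
            return 1.0
        p = self.rate()
        return z * math.sqrt(p * (1 - p) / self.trials)

# Hypothetical stream of telemetry events: True = task completed, False = abandoned.
estimate = UsageEstimate()
for event in [True, True, False, True, True, True, False, True]:
    estimate.record(event)
    print(f"after {estimate.trials} events: {estimate.rate():.2f} +/- {estimate.margin():.2f}")
```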

New generation of productivity tools, examples and checklist

Bringing together new technologies and new methods for management has implications that go beyond the obvious and immediate.  We will all certainly be bringing our own devices to work, accessing and contributing to work from a variety of platforms, and seeing our work take place across organization boundaries with greater ease.  We can look very specifically at how things will change across the tools we use, the way we communicate, how success is measured, and the structure of teams.

Tools will be quite different from those that grew up through the desktop PC era.  At the highest level the implications about how tools are used are profound.  New tools are being developed today—these are not “ports” of existing tools for mobile platforms, but ideas for new interpretations of tools or new combinations of technologies.  In the classic definition of innovator’s dilemma, these new tools are less functional than the current state-of-the-art desktop tools.  These new tools have features and capabilities that are either unavailable or suboptimal at an architectural level in today’s ubiquitous tools.  It will be some time, if ever, before new tools have all the capabilities of existing tools.  By now, this pattern of disruptive technologies is familiar (for example, digital cameras, online reading, online videos, digital music, etc.).

The user experience of this new generation of productivity tools takes on a number of attributes that contrast with existing tools, including:

  • Continuous v. episodic. Historically work took place in peaks and valleys.  Rough drafts created, then circulated, then distributed after much fanfare (and often watering down).  The inability to stay in contact led to a rhythm that was based on high-cost meetings taking place at infrequent times, often requiring significant devotion of time to catching up. Continuously productive tools keep teams connected through the whole process of creation and sharing.  This is not just the use of adjunct tools like email (and endless attachments) or change tracking used by a small number of specialists, but deep and instant collaboration, real-time editing, and a view that information is never perfect or done being assembled.
  • Online and shared information.  The old world of creating information was based on deliberate sharing at points in time.  Heavyweight sharing of attachments led to a world where each of us became a “merge point” for work.  We worked independently in silos, hoping not to step on each other, never sure where the true document of record might be or even who had permission to see a document.  New tools are online all the time and by default.  By default information can be shared and everyone is up to date all the time.
  • Capture and continue.  The episodic nature of work products along with the general pace of organizations created an environment where the “final” output carried with it significant meaning (to some).  Yet how often do meetings take place where the presenter apologizes for data that is out of date relative to the image of a spreadsheet or org chart embedded in a presentation or memo?  Working continuously means capturing information quickly and in real-time, then moving on.  There are very few end points or final documents.  Working with customers and partners is a continuous process and the information is continuous as well.
  • Low startup costs.  Implementing a new system used to be a time consuming and elaborate process viewed as a multi-year investment and deployment project.  Tools came to define the work process and more critically make it impossibly difficult to change the work process.  New tools are experienced the same way we experience everything on the Internet—we visit a site or download an app and give it a try.  The cost to starting up is a low-cost subscription or even a trial.  Over time more features can be purchased (more controls, more depth), but the key is the very low-cost to begin to try out a new way to work.  Work needs change as market dynamics change and the era of tools preventing change is over.
  • Sharing inside and outside.  We are all familiar with the challenges of sharing information beyond corporate boundaries.  Management and IT are, rightfully, protective of assets.  Individuals struggle with the basics of getting files through firewalls and email guards.  The results are solutions today that few are happy with.  Tools are rapidly evolving to use real identities to enable sharing when needed and cross-organization connections as desired.  Failing to adopt these approaches, IT will be left watching assets leak out and workarounds continue unabated.
  • Measured enterprise integration.  The PC era came to be defined at first by empowerment as leading edge technology adopters brought PCs to the workplace.  The mayhem this created was then controlled by IT, which became responsible for keeping PCs running, keeping information and networks secure, and enforcing consistency in organizations for the sake of sharing and collaboration.  Many might (perhaps wrongly) conclude that the consumerization wave defined here means IT has no role in these tasks.  Rather, the new era is defined by a measured approach to IT control and integration.  Tools for identity and device management will come to define how IT integrates and controls—customization or picking and choosing code is neither likely nor scalable across the plethora of devices and platforms that will be used by people to participate in work processes.  The net is to control enterprise information flow, not enterprise information endpoints.
  • Mobile first.  As an example of a transition between the old and new, many see the ability to view email attachments on mobile devices as a way forward.  However, new tools make clear this is merely a bridge solution, as mobility will come to trump most everything for a broad set of people.  Deep design for architects, spreadsheets for analysts, or computation for engineers are examples that will likely remain stationary or at least require unique computing capabilities for some time.  We will all likely be surprised by the pace at which even these “power” scenarios transition in part to mobile.  The value of being able to make progress while close to the site, the client, or the problem will become a huge asset for those that approach their professions that way.
  • Devices in many sizes. Until there is a radical transformation of user-machine interaction (input, display), it is likely almost all of us will continue to routinely use devices of several sizes and those sizes will tend to gravitate towards different scenarios (see http://blog.flurry.com/bid/99859/The-Who-What-and-When-of-iPhone-and-iPad-Usage), though commonality in the platforms will allow for overlap.  This overlap will continue to be debated as “compromise” by some.  It is certain we will all have a device that we carry and use almost all the time, the “phone”.  A larger screen device will continue to better serve many scenarios or just provide a larger screen area upon which to operate.  Some will find a small tablet size meeting their needs almost all of the time.  Others will prefer a larger tablet, perhaps with a keyboard.  It is likely we will see somewhat larger tablets arise as people look to use modern operating systems as full-time replacements for existing computing devices.  The implications are that tools will be designed for different device sizes and input modalities.

It is worth considering a few examples of these tools.  As an illustration, the following lists tools in a few generalized categories of work processes.  New tools are appearing almost every week as the opportunity for innovation in the productivity space is at a unique inflection point.  These examples are just a few tools that I’ve personally had a chance to experience—I suspect (and hope) that many will want to expand these categories and suggest additional tools (or use this as a springboard for a dialog!)

The architecture and implementation of continuous productivity tools will also be quite different from the architecture of existing tools.  This starts by targeting a new generation of platforms, sealed-case platforms.

The PC era was defined by a level of openness in architecture that created the opportunity for innovation and creativity that led to the amazing revolution we all benefit from today.  An unintended side-effect of that openness was the inherent unreliability over time, security challenges, and general futzing that have come to define the experience many lament.  The new generation of sealed case platforms—that is hardware, software, and services that have different points of openness, relative to previous norms in computing, provide for an experience that is more reliable over time, more secure and predictable, and less time-consuming to own and use.  The tradeoff seems dramatic (or draconian) to those versed in old platforms where tweaking and customizing came to dominate.  In practice the movement up the stack, so to speak, of the platform will free up enormous amounts of IT budget and resources to allow a much broader focus on the business.  In addition, choice, flexibility, simplicity in use, and ease of using multiple devices, along with a relative lack of futzing will come to define this new computing experience for individuals.

The sealed case platforms include iOS, Android, Chromebooks, Windows RT, and others.  These platforms are defined by characteristics such as minimizing APIs that manipulate the OS itself, APIs that enforce lower power utilization (defined background execution), cross-application security (sandboxing), relative assurances that apps do what they say they will do (permissions, App Stores), defined semantics for exchanging data between applications, and enforced access to both user data and app state data.  These platforms are all relatively new and the “rules” for just how sealed a platform might be and how this level of control will evolve are still being written by vendors.  In addition, devices themselves demonstrate the ideals of sealed case by restricting the attachment of peripherals and reducing the reliance on kernel mode software written outside the OS itself.  For many this evolution is as controversial as the transition automobiles made from “user-serviceable” to electronic controlled engines, but the benefits to the humans using the devices are clear.
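
The contrast with open platforms can be made concrete with a toy model.  The sketch below is purely conceptual (the class names, permission strings, and methods are invented for illustration and do not correspond to any real platform’s APIs): apps declare a fixed set of capabilities, the platform enforces them at run time, and data moves between sandboxed apps only through a platform-defined exchange contract.

```python
# Conceptual sketch only: a toy model of how a sealed-case platform mediates
# access; it is not any real platform's API.
class PlatformError(PermissionError):
    pass

class SealedPlatform:
    """Apps declare capabilities up front; the platform enforces them at runtime."""
    def __init__(self):
        self.registry = {}  # app name -> set of granted permissions

    def install(self, app_name, declared_permissions):
        # The "app store" moment: permissions are visible and fixed at install time.
        self.registry[app_name] = set(declared_permissions)

    def request(self, app_name, permission):
        if permission not in self.registry.get(app_name, set()):
            raise PlatformError(f"{app_name} did not declare '{permission}'")
        return True

    def share(self, sender, receiver, payload):
        # Data moves between sandboxed apps only through a platform-defined contract.
        self.request(sender, "share-out")
        self.request(receiver, "share-in")
        return {"from": sender, "to": receiver, "data": payload}

platform = SealedPlatform()
platform.install("notes", {"share-out"})
platform.install("mail", {"share-in", "network"})
print(platform.share("notes", "mail", "meeting summary"))  # allowed by the contract
try:
    platform.request("notes", "filesystem")  # capability never declared
except PlatformError as err:
    print("blocked:", err)
```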

Building on the sealed case platform, a new generation of applications will exhibit a significant number of the following attributes at the architecture and implementation level.  As with all transitions, debates will rage over the relative strength or priority of one or more attributes for an app or scenario (“is something truly cloud” or historically “is this a native GUI”).  Over time, if history is any guide, the preferred tools will exhibit these and other attributes as a first or native priority, and de-prioritize the checklists that characterized the “best of” apps for the previous era.

The following is a checklist of attributes of tools for continuous productivity:

  • Mobile first. Information will be accessed and actions will be performed mobile first for a vast majority of both employees and customers.  Mobile first is about native apps, which is likely to create a set of choices for developers as they balance different platforms and different form factors.
  • Cloud first.  Information we create will be stored first in the cloud, and when needed (or possible) will sync back to devices.  The days of all of us focusing on the tasks of file management and thinking about physical storage have been replaced by essentially unlimited cloud storage.  With cloud storage come multi-device access and instant collaboration that spans networks.  Search becomes an integral part of the user experience along with labels and metadata, rather than a physical hierarchy presenting only a single dimension.  Export to broadly used interchange formats and printing remain critical archival steps, but not the primary way we share and collaborate.
  • User experience is platform native or browser exploitive.  Supporting mobile apps is a decision to fully use and integrate with a mobile platform.  While using a browser can and will be a choice for some, even then it will become increasingly important to exploit the features unique to a browser.  In all cases, the usage within a customer’s chosen environment encourages the full range of support for that platform environment.
  • Service is the product, product is the service.  Whether an internal IT or a consumer facing offering, there is no distinction where a product ends and a continuously operated and improving service begins.  This means that the operational view of a product is of paramount importance to the product itself and it means that almost every physical product can be improved by a software service element.
  • Tools are discrete, loosely coupled, limited surface area.  The tools used will span platforms and form factors.  When used this way, monolithic tools that require complex interactions will fall out of favor relative to tools more focused in their functionality.  Doing a smaller set of things with focus and alacrity will provide more utility, especially when these tools can be easily connected through standard data types or intermediate services such as sharing, storage, and identity.
  • Data contributed is data extractable.  Data that you add to a service as an end-user is easily extracted for further use and sharing.  A corollary to this is that data will be used more if it can also be extracted and shared.  Putting barriers in place to sharing data will drive the usage of the data (and tool) lower.
  • Metadata is as important as data.  In mobile scenarios the need to search and isolate information with a smaller user interface surface area and fewer “keystrokes” means that tools for organization become even more important.  The use of metadata implicit in the data, from location to author to extracted information from a directory of people will become increasingly important to mobile usage scenarios.
  • Files move from something you manage to something you use when needed.  Files (and by corollary mailboxes) will simply become tools and not obsessions.  We’re all seeing the advances in unlimited storage along with accurate search change the way we use mailboxes.  The same will happen with files.  In addition, the isolation and contract-based sharing that defines sealed platforms will alter the semantic level at which we deal with information.  The days of spending countless hours creating and managing hierarchies and physical storage structures are over—unlimited storage, device replication, and search make for far better alternatives.  
  • Identity is a choice.  Use of services, particularly consumer facing services, requires flexibility in identity.  Being able to use company credentials and/or company sign-on should be a choice but not a requirement.  This is especially true when considering use of tools that enable cross-organization collaboration. Inviting people to participate in the process should be as simple as sending them mail today.
  • User experience has a memory and is aware and predictive.  People expect their interactions with services to be smart—to remember choices, learn preferences, and predict what comes next.  As an example, location-based services are not restricted to just maps or specific services, but broadly to all mobile interactions where the value of location can improve the overall experience.
  • Telemetry is essential / privacy redefined.  Usage is what drives incremental product improvements along with the ability to deliver a continuously improving product/service.  This usage will be measured by anonymous, private, opt-in telemetry.  In addition, all of our experiences will improve because the experience will be tailored to our usage.  This implies a new level of trust with regard to the vendors we all use.  Privacy will no doubt undergo (or already has undergone) definitional changes as we become either comfortable or informed with respect to the opportunities for better products.  A minimal sketch of an opt-in, anonymized telemetry event follows this list.
  • Participation is a feature.  Nearly every service benefits from participation by those relevant to the work at hand.  New tools will not just enable, but encourage collaboration and communication in real-time and connected to the work products.  Working in one place (document editor) and participating in another (email inbox) has generally been suboptimal and now we have alternatives.  Participation is a feature of creating a work product and ideally seamless.
  • Business communication becomes indistinguishable from social.  The history of business communication having a distinct protocol from social goes back at least to learning the difference between a business letter and a friendly letter in typing class.  Today we use casual tools like SMS for business communication and while we will certainly be more respectful and clear with customers, clients, and superiors, the reality is the immediacy of tools that enable continuous productivity will also create a new set of norms for business communication.  We will also see the ability to do business communication from any device at any time and social/personal communication on that same device drive a convergence of communication styles.
  • Enterprise usage and control does not make things worse. In order for enterprises to manage and protect the intellectual property that defines the enterprise and the contribution employees make to the enterprise IP, data will need to be managed.  This is distinctly different from managing tools—the days of trying to prevent or manage information leaks by controlling the tools themselves are likely behind us.  People have too many choices and will simply choose tools (often against policy and budgets) that provide for frictionless work with coworkers, partners, customers, and vendors.  The new generation of tools will enable the protection and management of information that does not make using tools worse or cause people to seek available alternatives.  The best tools will seamlessly integrate with enterprise identity while maintaining the consumerization attributes we all love.
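
To illustrate the opt-in, anonymized telemetry attribute above, here is a minimal Python sketch.  The event fields and the hashing scheme are illustrative assumptions rather than any vendor’s schema, and a production system would add salting, rotation, and aggregation before the data could reasonably be called anonymous.

```python
import hashlib
import json
import time
import uuid

def make_event(user_opted_in: bool, device_id: str, action: str):
    """Return a telemetry event only if the user opted in; otherwise collect nothing."""
    if not user_opted_in:
        return None  # no opt-in, no data
    # Hash the device identifier so the event is pseudonymous: stable for aggregation,
    # but not directly reversible to the device (real systems would also salt/rotate).
    anonymous_id = hashlib.sha256(device_id.encode("utf-8")).hexdigest()[:16]
    return {
        "event_id": str(uuid.uuid4()),
        "anonymous_id": anonymous_id,
        "action": action,          # e.g. "document_shared"
        "timestamp": int(time.time()),
    }

event = make_event(user_opted_in=True, device_id="device-1234", action="document_shared")
print(json.dumps(event, indent=2))
print(make_event(user_opted_in=False, device_id="device-1234", action="document_shared"))
```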

What comes next?

Over the coming months and years, debates will continue over whether the new platforms and newly created tools will replace, augment, or see only occasional use relative to the tools with which we are all familiar.  Changes as significant as those we are experiencing right now happen two ways, at first gradually and then quickly, to paraphrase Hemingway.  Some might find little need or incentive to change.  Others have already embraced the changes.  Perhaps those right now on the cusp realize that the benefits of their new devices and new apps are gradually taking over their most important work and information needs.  All of these will happen.  This makes for a healthy dialog.

It also makes for an amazing opportunity to transform how organizations make products, serve customers, and do the work of corporations.  We’re on the verge of seeing an entire rewrite of the management canon of the 20th century.  New ways of organizing, managing, working, and collaborating are being enabled by the tools of the continuous productivity paradigm shift.

Above all, it makes for an incredible opportunity for developers and those creating new products and services.  We will all benefit from the innovations in technology that we will experience much sooner than we think.

–Steven Sinofsky

Written by Steven Sinofsky

August 20, 2013 at 7:00 am