Learning by Shipping

products, development, management…

Management Clichés That Work

Managing product development, and management in general, are rife with clichés. By definition, of course, a cliché is something that is true but unoriginal. I like a good cliché because it reminds you that much of management practice boils down to things you need to do but often forget, or fail to do often enough.

The following 15 clichés might prove helpful in making sure you're really doing the things in product development that need to get done on a daily basis. Some of these are my own wording of thoughts others have expressed differently. There's definitely a personal story behind each of these.

Promise and deliver. People love to play expectations games, and that is always bad for collaboration, whether internal to a team, with your manager, or externally with customers. The cliché "under promise and over deliver" is one people often repeat with pride. If you're working with another group or with customers, the work of "setting expectations" should not be a game. It is a commitment. Tell folks, with the best of intentions, what you believe you will do, and then do everything to deliver on that. Over time it is far more valuable to be known as someone who gets done what they say.

Make sure bad news travels fast. Things will absolutely go wrong. In a healthy team, as soon as things go wrong that information should be surfaced. Trying to hide or obscure bad news creates an environment of distrust or a lack of transparency. This is especially noticeable on a team where the good news is always visible but for some reason less good news lacks visibility. Avoid "crying wolf," of course, by making sure you are broadly transparent in the work you do.

Writing is thinking. We're all faced with complex choices about what to do or how to go about what will get done. While some people are great at spontaneous debate, most are not, and most people are not great at contributing in a structured way on the fly. So when faced with something complex, spend the time to think about some structure, write down sentences, think about it some more, and then share it. Even if you don't send the writing around, almost everyone achieves more clarity by writing. If you can't, then don't blame writer's block; consider that maybe you haven't formulated your point of view yet.

Practice transparency within your team. There’s really no reason to keep something from everyone on the team. If you know something and know others want to know, either you can share what you know or others will just make up their own idea of what is going on. Sharing this broad base of knowledge within a team creates a shared context which is incredibly valuable.

Without a point of view there is no point. In our world of A/B testing, MVPs, and iteration we can sometimes lose sight of why a product and company can/should exist. The reason is that a company brings together people to go after a problem space with a unique point of view. Companies are not built to simply take requests and get those implemented or to throw out a couple of ideas and move forward with the ones that get traction. You can do that as work for hire or consulting, but not if you’re building a new product. It is important to maintain a unique point of view as a “north star” when deciding what to do, when, and why.

Know your dilithium crystals. Closely related to your point of view as a team is knowing what makes your team unique relative to competition or other related efforts. Apple uses the term “magic” a lot and what is fascinating is how with magic you can never quite identify the specifics but there is a general feeling about what is great. In Star Trek the magic was dilithium crystals–if you ever needed to call out the ingredient that made things work, that was it. What is your secret (or as Thiel says, what do you believe that no one else does)? It could be branding, implementation, business model, or more.

Don’t ask for information or reports unless they help those you ask to do their jobs. If you’re a manager you have the authority to ask your team for all sorts of reports, slides, analysis, and more. Strong managers don’t exercise that authority. Instead, lead your team to figure out what information helps them to do their job and use that information. As a manager your job isn’t a superset of your team, but the reflection of your team.

Don’t keep two sets of books. We keep track of lots of things in product development: features, budgets, traffic, revenue, dev schedules, to do lists, and more. Never keep two versions of a tracking list or of some report/analysis. If you’re talking with the team about something and you have a different view of things than they do, then you’ll spend all your time reconciling and debating which data is correct. Keeping a separate set of books is also an exercise in opacity which never helps the broader team collaboration.

Showdowns are boring and nobody wins. People on teams will disagree. The worst thing for a team dynamic is to get to a major confrontation. When that happens and things become a win/lose situation, no one wins and everyone loses. Once it starts to look like battle lines are being drawn, the strongest members of the team will start to find ways to avoid what seems like an inevitable showdown. (Source: This is a line from the film "Wall Street".)

Never vote on anything. On paper, when a team has to make a decision it seems great to have a vote. If you're doing anything at all interesting, then it's almost a certainty that at least one person will have a different view. So the question is: if you're voting, do you expect majority rule, two-thirds, consensus, or are some votes more equal than others? Ultimately, once you have a vote, the choice is one where the people who disagree are now singled out and probably isolated. My own history is that any choice that was ever voted on didn't even stick. Leadership is about anticipating and bringing people along to avoid these binary moments. It is also about taking a stand and having a point of view if you happen to reach such a point.

When presenting the boss with n alternatives he/she will always choose option n+1. If you're asked to come up with a solution to a problem, or you run across a problem you have to solve but need buy-in from others, you're taking a huge risk by presenting alternatives. My view is that you present a solution and everything else is an alternative, whether you put it down on paper or not. A table of pros/cons or a list of options like a menu almost universally gets a response of trying to create an alternative that combines attributes that can't be combined. I love it when choices are framed as cost versus quality, cheap versus profitable, or small versus fast, and then the meeting concludes in search of the alternative that delivers both.

Nothing is ever decided at a meeting so don't try. If you reach a point where you're going to decide a big, controversial thing at a meeting, then there's a good chance you're not really going to decide. Even if you do decide, you're likely to end up with an alternative you didn't think of beforehand, and thus one that is not as thought through or as feasible as you believed it to be by the end of the meeting. At the very least you're not going to enroll everyone in the decision, which means there is more work to be done. The best thing to do is not to avoid a decision-making meeting, but to figure out how you can keep things moving forward every day to avoid these moments of truth.

Work on things that are important not urgent. Mobile tools like email, Twitter, SMS, and notifications of all kinds from all sorts of apps have a way of dominating your attention. In times of stress or uncertainty, we all gravitate to working on what we think we can accomplish. It is easier to work towards inbox zero than to actually dive in and talk to everyone on the team about how they are handling things, or to walk into that customer situation. President Eisenhower and later Stephen Covey developed amazing tools for helping you to isolate work that is important rather than urgent.

Products don’t ship with a list of features you thought you’d do but didn’t. The most stressful list of any product development effort is the list of things you have to cut because you’re running out of time or resources. I don’t like to keep that list and never did, for two reasons. First, it just makes you feel bad. The list of things you’re not doing is infinitely long–it is literally everything else. There’s no reason to remind yourself of that. Second, whatever you think you will do as soon as you can will change dramatically once customers start using the product you do end up delivering to them. When you do deliver a product it is what you made and you’re not obligated to market or communicate all the things you thought of but didn’t get done!

If you're interesting someone won't agree with what you said. Whether you're writing a blog, writing internal email, talking to a group, or speaking to the press, you are under pressure. You have to get across a unique point of view and be heard. The challenge is that if you only say things everyone already believes to be the case, then you're not furthering the dialog. The reality is that if you are trying to change things or move a dialog forward, some will not agree with you. Of course you will learn, and there's a good chance you were wrong, and that gives you a chance to be interesting in new ways. Being interesting is not the same as being offensive, contrarian, cynical, or just negative. It is about articulating a point of view that acknowledges a complex and dynamic environment that does not lend itself to simple truths. Do make sure you have the right mechanisms in place to learn just how wrong you were, and with how many people.

Like for example, if you write a post of 15 management tips, most people won’t agree with all of them :)
–Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

October 23, 2014 at 12:00 pm

Posted in posts

Product Hunt: A Passion for Products, the Makers Behind Them, and the Community Around Them

More products are being created and developed faster today than ever before. Every day new services, sites, and apps are introduced. But with this surge in products, it’s become more difficult to get noticed and connect with users. In late 2013, Ryan Hoover founded Product Hunt to provide a daily view of new products that brings together an engaged community of product users with product makers. Today marks the next step in the growth of the company.

Interconnecting a Community

When you first meet Ryan it becomes immediately clear he has a passion for entrepreneurship and its surrounding ecosystem. Well before starting Product Hunt, he hosted intimate brunches to bring founders together. This came out of another email-based experiment named Startup Edition, where he assembled a weekly newsletter of founder essays on topics of marketing, product development, fundraising, and other challenges company builders face. This enthusiasm is prevalent on Twitter where he shares new products and regularly interacts with fellow enthusiasts in the startup community.

Ryan’s background comes from games, an ecosystem that is regarded as one of the most connected. Gamers love to stay on top of the latest products. Game makers love to connect with gamers. There’s an even larger community of game enthusiasts who value being observers in this dialog. Ryan grew up in the midst of a family-owned video game store so it’s no surprise that he has an incredibly strong sense of community. That’s why after college, he got involved in the gaming industry, first at InstantAction and then at PlayHaven. Each of these roles allowed Ryan to build the skills to foster both the product and community engagement sides of gaming, while also creating successful business opportunities for the whole community.

Spending time in the heart of gaming, between gamers and game makers, Ryan saw how those makers that fostered a strong sense of community around their game had stronger engagement and improved chances of future growth. Along the way he saw a wide variety of ways to build communities — and most importantly to maintain an open and constructive environment where praise, criticism, and wishes could be discussed between makers and enthusiasts.

About a year ago, Ryan launched, in his words, “an experiment” — a daily email of the latest products. After a short time, interest and subscribers to the mail list grew. So with a lot of hustle, the email list turned into a site. Product Hunt was launched.

Product Hunt started with a passion for products and has grown into a community of people passionate to explore and discuss new products with likeminded enthusiasts and makers of those products.

Product Hunt: More Than a Site

Product Hunt has become something of a habit for many since its debut. Today hundreds of thousands of “product hunters” visit the site plus more through the mobile apps, the daily email, and the platform API. Every month, millions of visits to product trial, app stores, and download sites are generated. And nearly half of all product discussions include the product maker, from independent hackers to high-profile veteran founders.

Product Hunt is used by enthusiasts to learn about new products, colored with unfiltered conversation with their makers. It serves the industry as a source for new and trending product areas. For many, Product Hunt is, or will evolve to be, the place you go to discover products in the context of similar products, along with a useful dialog with a community.

Product Hunt is much more than a site. Product Hunt is a community. In fact, Ryan and the team spend most of their energy creating, curating, and crafting a unique approach to building a community. His own experience as a participant and a maker led him to believe deeply in the role of community and engagement not just in building products, but also in launching new products and connecting with customers.

This led the team to create a platform for products, starting with the products they know best — mobile and desktop apps and sites.

The challenge they see is that today’s internet and app stores are overwhelmed with new products, as we all know. The stores limit interaction to one-way communication and reviews. If you want to connect with the product makers, there’s no way to do so. Ironically, makers themselves are anxious to connect but do so in an ad hoc manner that often lacks the context of the product or community. Product Hunt allows this type of community to be a normal part of interaction and not just limited to tech products.

Product Hunt is just getting started, but the enthusiasm is incredible. A quick Twitter search for “addicted to product hunt” shows in just a short time how many folks are making the search for what’s new a part of a routine. The morning email with the latest news is now a must-read and Ryan is seeing the technology industry use this as a source for the most up to date launches.

Product Hunt’s uniqueness comes from the full breadth of activity around new products and those enthusiastic about them:

Launch. Product Hunt is a place where products are announced and discovered for the first time. Most new products today don’t start with marketing or advertising, but simply “show up”. Makers know how hard it is to get noticed. They upload an app to a store or set up a new site and just wait. Gaining awareness or traction is challenging. Since the first people to use most new products are themselves involved in making products, they love to know about and experience the latest creations. New product links come from a variety of sources and already Product Hunt is becoming the go-to place for early adopters.

Learn. Learning about what’s new is just as challenging for enthusiasts. Most new products launched do not yet have full-blown marketing, white papers, or other information. In fact, in today’s world of launching-to-learn more about how to refine products, there are often more questions than answers. Community members submit just a short tagline and link to the product. Then the dialog begins. There are robust discussions around choices in the product, comparisons to other products, and more. Nearly half of the products include the makers in the discussion, sharing their stories and directly interacting with people. And these discussions are also happening in the real world, as members of the community organize meetups across the globe from Tokyo to Canada.

Share. Early adopters love to share their opinions and engage with others. On Product Hunt, the people determine which products surface as enthusiasts upvote their favorite discoveries and share their perspective in the comments. Openness, authenticity, and constructive sharing are all part of the Product Hunt experience, and naturally this enthusiasm spills outside the community itself.

Curate. With the help of the community, the team is constantly curating collections of products into themes that are dynamic and changing. This helps raise awareness of emerging product categories and gives consumers a way to find great products for specific needs. Recent lists have included GIF apps, tools used by product managers, and productivity apps. One favorite that shows the timeliness of Product Hunt was a list of iOS 8 keyboards the day after iOS 8’s launch.

One attribute of all products that serve an enthusiastic community is the availability of a platform to extend and customize the product. Product Hunt recently announced the Product Hunt API and already has apps and services that present useful information gathered from Product Hunt, such as the leaderboard and analytics platform.

Product Hunt + a16z

When I first hung out with Ryan outside of a conference room, he brought me to The Grove coffee shop on Mission St. We sat outside and began to talk about products, enthusiasts, and community. It was immediately clear Ryan sees the world of products in a unique way — he sees a world of innovation, openness to new ideas, and unfiltered communication between makers and consumers. As founder, Ryan embodies the mission-oriented founders a16z loves to work with, and he’s built a team that shares that passion and mission.

Andreessen Horowitz could not be more excited to lead this round of investment, and I am thrilled to serve on the board. Please check out Product Hunt for yourself on the web, download its iOS app, or sign up for the email digest.

–Steven Sinofsky

Note: This post originally appeared on a16z.com.

Written by Steven Sinofsky

October 8, 2014 at 8:30 am

Posted in a16z, posts

Beauty of Testing

In a post last week, @davewiner described The Lost Art of Software Testing. I loved the post and the ideas about testing expressed (Dave focuses more on the specifics of scenario and user experience testing, so this post will broaden the definition to include that and the full range of testing). Testing, in many forms, is an integral part of building products. Too often, if a project is late or if hurry-up-and-learn or agile methods are employed, testing is one of those efforts where corners are cut. Test, to put it simply, is the conscience of a product. Testing systematically determines the state of a product. Testers are those entrusted with keeping everyone within 360º of a product totally honest about the state of the project.

Before you jump to Twitter to correct the above: we all know that scheduling, agile, lean, or other methods in no way preclude or devalue testing. I am definitely not saying that is the case (and could argue the opposite, I am sure). I am saying, however, that when you look at what is emphasized with a specific way of working, you are making inherent tradeoffs. If the goal is to get a product into market to start to learn because you know things will change, then it is almost certainly the case that you also have a different view of fit and finish, edge conditions, or completeness of a product. If you state in advance that you're going to release every time interval and too aggressively pile on feature work, then you will have a different view of how testing fits into a crunched schedule. Testing is as much a part of the product cycle as design and engineering, and like those you can't cut corners and expect the same results.

Too often some view testing as primarily a function of large projects, mature products, or big companies. One of the most critical hires a growing team can make is that first testing leader. That person will assume the role of a bridge between development and customer success, among many other roles. Of course when you have little existing code and a one-pizza sized dev team, testing has a different meaning. It might even be the case that the devs are building out a full test infrastructure while the code is being written, though that is exceedingly rare.

No one would argue against testing and certainly no one wants a product viewed as low quality (or one that has not been thoroughly tested as the above referenced post describes). Yet here we are in the second half century of software development and we still see products and services referred to as buggy. Note: Dave’s post inspired me, not any recent quality issues faced by other vendors.

Are today’s products actually more buggy than those of 10, 15, or 20 years ago? Absolutely not. Most every bit of software used today is on the whole vastly higher quality than anything built years ago. If vendors felt compelled, many could prove statistically (based on telemetry) that customers experience far more robust products than ever before. Products still do, rarely, crash (though the impact of that is mostly just a nuisance rather than a catastrophic data loss) and as a result the visibility seems much higher. It wasn’t too long ago that mainstream products would routinely (weekly if not daily) crash and work would be lost with the trade press anxiously awaiting the next updates to get rid of bugs. Yet products still have issues, some major, and all that should do is emphasize the role of testing. Certainly the more visible, critical, or fatal a quality issue might be the more we might notice it. If a social network has a bug in a feed or fails to upload a photo that might be vastly different from a tool that loses data you typed and created.

Today’s products and services benefit enormously from telemetry which informs the real world behavior of a product. Many thought the presence of this data would in a sense automate testing. As we often see with advances that some believe would reduce human labor, the challenges scale to require a new kind of labor or to understand and act on new kinds of information.
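As a minimal sketch of what acting on telemetry might look like, consider computing a crash-free session rate from raw session events. Everything here is illustrative: the event shape and field names are hypothetical, not any particular vendor's telemetry schema.

```python
# Hypothetical telemetry sketch: aggregate raw session events into a
# crash-free session rate, the kind of statistic a team could use to
# make a claim about real-world robustness. Field names are made up.
sessions = [
    {"id": 1, "crashed": False},
    {"id": 2, "crashed": False},
    {"id": 3, "crashed": True},
    {"id": 4, "crashed": False},
]

crash_free = sum(1 for s in sessions if not s["crashed"]) / len(sessions)
print(f"crash-free session rate: {crash_free:.1%}")  # 75.0%
```

The point is not the arithmetic but the labor it implies: someone has to decide what counts as a "crash," segment the data, and act on it, which is why telemetry creates new testing work rather than automating it away.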

What is Testing?

Testing has many different meanings in a product-making organization, but in this post we want to focus on testing as it relates to the *verification that a product does what it is intended to do and does so elegantly, efficiently, and correctly*.

Some might just distill testing down to something like “find all the bugs”. I love this because it introduces two important concepts to product development:

  1. Bug. A bug is simply any time a product does not behave the way someone thought it should. This goes way beyond crashes, data loss, and security problems. Quite literally, if a customer/user of your product experiences the unexpected then you have a bug and should record it in some database. This means by definition testing is not the only source of bugs, but certainly is the collection and management point for the list of all the bugs.
  2. Specification. In practice, deciding whether or not a bug is something that requires the product to change means you have a definition of how a product should behave in a given context. When you decide the action to take on a bug, that decision is made with a shared understanding across the team of what a product should be doing. While often viewed as “old school” or associated with a classic “waterfall” methodology, specifications are how the product team has some sense of “truth”. As a team scales this becomes increasingly important because many different people will judge whether something is a bug or not.
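The two concepts above can be sketched as data: a bug is a record of expected versus actual behavior, where "expected" only has meaning relative to the team's shared spec, and all bugs flow into one collection point. A minimal, hypothetical sketch (the record shape and names are illustrative, not any real tracking system):

```python
from dataclasses import dataclass

@dataclass
class Bug:
    """A bug: any time the product does not behave the way someone thought it should."""
    title: str
    expected: str   # what the spec (the team's shared understanding) says should happen
    actual: str     # what a customer, tester, or anyone else observed
    reporter: str   # bugs can come from anywhere, not just testing
    status: str = "open"

# Testing owns the collection point: one database, never two sets of books.
bug_db: list[Bug] = []

def report(bug: Bug) -> None:
    bug_db.append(bug)

report(Bug(
    title="Save dialog loses filename on cancel",
    expected="Filename is preserved when the dialog is reopened",
    actual="Filename field is cleared",
    reporter="support",
))

print(f"{len(bug_db)} open bug(s): {bug_db[0].title}")
```

Note that the `expected` field is what forces the team to have a spec: without a shared definition of correct behavior, deciding whether this record warrants a change is just opinion.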

Testing is also relative to the product lifecycle, as great testers understand one of the cardinal rules of software engineering—change is the enemy of quality. Testers know that when you have a bug and you change the code, you are introducing risk into a complex system. Their job is to understand the potential impact a change might have on the overall product and weigh that against the known/reported problem. Good testers do not just report on problems that need to be fixed, but also push back on changing too much at the wrong time because of the potential impact. Historically, for every 10 changes made to a stable product, at least one will backfire and cause things to break somehow.

Taken together these concepts explain why testing is such a sophisticated and nuanced practice. It also explains why it requires a different perspective than that of the product manager or the developer.

Checks and Balances

The art and science of making things at any scale is a careful balance of specialized skills combined with checks and balances across those skills.

Testing serves as part of the checks and balances across specializations. Testers do this by making sure everyone is clear on what the goals are, what success looks like, how to measure that success, and how to repeat those measures as the project progresses. By definition, testing does not make the product. That puts testers in the ideal position to be the conscience of the product. The only agenda testing has is to make sure what everyone signed up to do is actually happening, and happening well. Testing is the source of truth for a product.

Some might say this is the product manager’s role or the dev/engineering manager’s role (or maybe design or ops). The challenge is that each of these roles has other accountabilities to the product and so are asked to be both the creator and judge of their own work. Just as product managers are able to drive the overall design and cohesiveness of a product (among other things) while engineering drives the architecture and performance (among other things), we don’t normally expect those roles to reverse and certainly not to be held by a single person.

One can see how this creates a balanced system of checks:

  • Development writes the code. This is the ultimate truth of what a product does, but not necessarily what the team might want it to do. Development is protective of code and has one view of what to change, what are the difficult parts of code or what parts are easy. Development must balance adding and changing code across individual engineers who own different parts of the code and so on.
  • Operations runs the live product/service. Working side by side with development (in a DevOps manner) there are the folks that scale a product up and out. This is also about writing the code and tools required to manage the service.
  • Product management “designs” the product. I say design to be broader than Design (interaction, graphical, etc.) and to include the choice of features, target customers, and functional requirements.
  • Product design defines how a product feels. Design determines the look and feel of a product, the interaction flows, and the techniques used to express features.
  • And so on across many disciplines…

That also makes testing a big pain in the neck for some people. Testers want precision when it might not exist. Testers by their nature want to know things before they can be known. Testers by their nature prefer stability over change. Testers by their nature want things to be measurable even when they can’t be measured. Testers tend towards process or procedural thinking when others might tend towards non-linear thinking. We all know that engineers tilt towards wanting to distill things to 1’s and 0’s. To the uninitiated (or the less than amazing tester) testers can come across as even more binary than binary.

That said, all you need is testing to save you from yourself one time and you have a new best friend.

Why Do We (Still) Need Testing?

Software engineering is a unique engineering discipline. In fact, for the whole history of the field, different people have argued either that computer software is mostly a science of computing or that computing is a craft or artistic practice. We won't settle this here. On the other hand, it is fair to say that at least two things are true. First, even art can have a technology component that requires an engineering-like approach, for example making films or photography. Second, software is a critical part of society's infrastructure, and from electrical to mechanical to civil we require those disciplines to be engineers.

Software has a unique characteristic: a single person can have an idea, write the code, and distribute it for use. Take that, civil engineers! Good luck designing and building a bridge on your own. Because of this characteristic of software, there is a desire to scale large projects this same way.

People who know about software bugs/defects know that there are two ways to reduce the appearance and cost of shipping bugs. First, don’t introduce them at all. Methodologies like extreme or buddy programming or code reviews are all about creating a coding environment that prevents bugs from ever being typed.

Yet those methods still yield bugs. So the other technique employed is to attempt to get engineering to test all the code they write and to move the bug-finding efforts “upstream”. That is, write some new code for the product and then write code that tests your code. This is what makes software creation seem most like other forms of engineering or product creation. The beauty of software is just how soft it is—complete redesigns are keystrokes away and only have a cost in brain power and time. This contrasts sharply with building roads, jets, bridges, or buildings. In those cases, mistakes are enormously costly and potentially very dangerous. Make a mistake on the load calculations of a building and you have to tear it down and start over (or just leave the building mostly empty, like the Stata Center at MIT).

Therefore moving detection of mistakes earlier in the process is something all engineering works to do (though not always successfully). In all but software engineering, the standard of practice employs engineers dedicated to the oversight of other engineers. You can even see this in practice in the basics of building a home where you must enlist inspectors to oversee electrical or steel or drainage, even though the engineers presumably do all they can to avoid mistakes. On top of that there are basic codes that define minimal standards. Software lacks all of these as a formality.

Thus the importance of specialized testing in software projects is a pressing need that is often viewed as counter-cultural. Lacking the physical constraints as well, engineers tend to feel “gummed” up and constrained by what would be routine quality practices in other engineering. For example, no one builds as much as a kitchen cabinet without detailed drawings with measurements. Yet routinely we in software build products or features without specifications.

Because of this tension between acting like traditional engineers and working to maintain the velocity of a single inspired engineer, there’s a desire to coalesce testing into the role of the engineer which can potentially allow for more agility or moving bug finding more upstream. One of the biggest changes in the field of software has been the availability of data about product quality (telemetry) which can be used to inform a project team about the state of things, perhaps before the product is in broad use.

There’s some recent history in the desire to move testing and development together, and that is the devops movement. Devops is about rolling operational efforts closer to engineering, to prevent the “toss it over the wall” approach used earlier in the evolution of web services. I think this is both similar and different. Most of the devops movement focuses on the communication and collaboration between development and operations, rather than the coalescing of disciplines. It is hard to argue against more communication, and certainly within my own experience, when it came time to begin planning, building, and operating services, our view was that Operations added a seat at the table alongside PM, dev, test, design, and more.

The real challenge is that testing is far more sophisticated than anything an engineer can do solo. The reason is that engineers are focused on adding new code and making sure the new code works the way they wrote it. That’s very different from focusing on all that new code in the context of all other new code, all the new hardware, and, if relevant, all the old code as well (compatibility). In other words, as a developer is writing new code, the question is really whether it is even possible for the developer to make progress on that code while thinking about all those other things. Progress will quickly grind to a halt if one really tries to do all of that work well.

As an aside, the role of developers writing unit tests is well-established and quite successful. Historically the challenge is maintaining these over time at the same level of efficacy. In addition, going beyond unit testing to automation, configuration, API testing, and other areas where the individual developer lacks expertise proves out the challenge of trying to operate without dedicated testing.
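To make the aside concrete, here is a minimal sketch of the kind of unit test a developer writes alongside new code; the function and its tests are invented here purely for illustration:

```python
import unittest

def parse_version(text):
    """Parse a dotted version string like "3.1.4" into a tuple of ints."""
    return tuple(int(part) for part in text.split("."))

class ParseVersionTest(unittest.TestCase):
    # The developer's tests mirror how the developer expects the code to
    # be called. Note what they do not cover: odd input from old files,
    # other features that consume the result, configuration, and
    # compatibility, which is exactly the gap described above.
    def test_three_components(self):
        self.assertEqual(parse_version("3.1.4"), (3, 1, 4))

    def test_single_component(self):
        self.assertEqual(parse_version("8"), (8,))
```

Tests like this are cheap to write on day one; the hard part, as noted above, is keeping a growing suite this meaningful over years of change.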

An analogy I’ve often used is to compare software projects to movies (they share a lot of similarities). With movies you immediately think of actors, directors, and screenwriters, and tools like cameras, lights, and sound. Those are the engineer and product manager equivalents. Put a glass of iced tea in the hand of an actor and a sunset in the background, and all of a sudden someone has to worry about the level of the tea, condensation, and ice cube volume, along with the level of the sun and the number of birds on the horizon. Now of course an actor knows how that looks, and so does the director. Movies are complex—they are shot out of order, reshot, and from many angles. So movie sets employ people to keep an eye on all those things—property masters, continuity, and so on. While the idea of the actor or director or camera operator trying to remember the size of the ice cubes is not difficult to understand intellectually, in practice those people have a ton of other things to worry about. In fact they have so much to worry about that there’s no way they can routinely remember all those details and keep the big issues of the film front and center. Those ice cubes are device compatibility. The count of birds represents compatibility with other features. The level of the sun represents something like alternative scripts or accessibility, for example. All of these need to be considered across the whole production in a consistent and well-understood manner. There’s simply no way for each “actor” to do an adequate job on all of them.

Therefore, like other forms of engineering, testing is not optional just because one can imagine software being made by pure coding alone. Testing is a natural outcome of a project of any sophistication, complexity, or evolution over time. When I do something like run Excel 3 from 1990 on Windows 8, I see an engineering accomplishment, but I know it is really the work of testers validating whole subsystems across a product.

When to Test

You can bring on test too early, whether at a startup or on an existing/large project. When you bring on testing before you have a firm grasp from product management of what an end state might look like, there’s no role testing can play. Testing is a relative science. Testers validate a product relative to what it is supposed to do. If what it is supposed to do is either unknown or to be determined, then the last thing you want is someone saying it isn’t doing something right. That’s a recipe for frustrating everyone. Development is told they are doing the wrong thing. Product will just claim the truth to be different. And thus the tension across the team described by Dave in his post will surface.

In fact a classic era in Microsoft’s history with testing and engineering came from wanting to find bugs upstream so badly that the leaders at the time drove folks to test far too early and too eagerly. What resulted was nothing less than a tsunami of bugs that overwhelmed development, and the project ground to a halt. Valuable lessons were passed on about starting too early—when nothing yet works, there’s no need to start testing.

While there is a desire to move testing more upstream, one must also balance this with having enough of the product done and enough knowledge of what the product should be before testing starts. Once you know that then you can’t cut corners and you have to give the testing discipline time to do their job with a product that is relatively stable.

That condition—having the product in a stable state before testing starts—is a source of tension. To many it feels like a serialization that should not be done. The way teams I’ve worked on have always talked about this is that the final stages of any project are the least efficient times for the team. Essentially the whole team is working to validate code rather than change code. Velocity of product development seems to stand still. Yet that is when progress is being made, because testing is gaining assurance that the product does what it is supposed to do, well.

The tools of testing (unit tests, API tests, security tests, ad hoc testing, code coverage, UX automation, compatibility testing, and automation across all of those) are how testers do their job. Much of the early stage of a project can be spent creating and managing that infrastructure, since it does not depend on the specifics of how the product will work. Grant George, the most amazing test leader I ever had the opportunity to work with on both Windows and Office, used to call this the “factory floor”. He likened this phase to building the machinery required for a manufacturing line, which would allow the team to rapidly iterate on daily builds while covering the full scope of testing the product.

While you can test too early you can also test too late. Modern engineering is not a serial process. Testers are communicating with design and product management (just like a devops process would describe) all along, for example. If you really do wait to test until the product is done, you will definitely run out of time and/or patience. One way to think of this is that testers will find things to fix—a lot of things—and you just need time to fix them.

In today’s modern era, testing doesn’t end when the product releases. The inbound telemetry from the real world is always there informing the whole team of the quality of the product.

Telemetry

One of the most magical times I ever experienced was the introduction of telemetry to the product development process. It was recently the anniversary of that very innovation (called “Watson”), and Kirk Glerum, one of the original inventors back in the late 1990s, noted it on Facebook. I just wanted to share this story a little bit because of how it showed a counter-intuitive notion of how testing evolved. (See this Facebook post from Kirk.) This is not meant to be a complete history.

While working on what became Office 2000 in 1998 or so, Kirk had the brilliant insight that when a program crashed, one could use the *internet* to get a snapshot of some key diagnostics and upload those to Microsoft for debugging. Previously we literally had either no data, or someone would call telephone support and fax in some random hex numbers displayed on a screen. Threading the needle with our legal department, folks like Eric LeVine worked hard to provide all the right anonymization, opt-in, and disclosure required. So rather than have a sample of crashes run on specific or known machines, Kirk’s insight allowed Microsoft to learn about literally all the crashes happening. Very quickly Windows and Office began working together, and Windows XP and Office 2000 released as the first products with this enabled.
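As a sketch of the core idea only (not the actual Watson implementation, whose details are not in this post), a crash reporter reduces each crash to a small, stable “bucket” so that reports from millions of machines aggregate by crash site; all names here are invented:

```python
import hashlib
import traceback

def crash_bucket(exc):
    """Reduce an exception to a stable bucket id: crashes at the same
    place in the code map to the same bucket, no matter whose machine
    they happen on."""
    tb = exc.__traceback__
    # Walk to the innermost frame, where the fault actually occurred.
    while tb is not None and tb.tb_next is not None:
        tb = tb.tb_next
    code = tb.tb_frame.f_code if tb else None
    site = (
        type(exc).__name__,
        code.co_filename if code else "<unknown>",
        code.co_name if code else "<unknown>",
    )
    return hashlib.sha256("|".join(site).encode("utf-8")).hexdigest()[:16]

def crash_report(exc):
    """The minimal payload an opt-in uploader might send home."""
    return {
        "bucket": crash_bucket(exc),
        "summary": traceback.format_exception_only(type(exc), exc)[0].strip(),
    }
```

The real system captured native minidumps rather than Python exceptions, but the principle is the same: a tiny, anonymized snapshot that is cheap to upload and easy to aggregate.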

A defining moment came when a well-known app from a third party released a patch. A lot of people were notified by some automated method, downloaded the patch, and installed it. Except the patch caused a crash in Word. We immediately saw a huge spike in crashes all happening in the same place, quickly figured out what was going on, and got in touch with the ISV. The ISV was totally unaware of the potential problem, and thus began an industry-wide push on this kind of telemetry and on using this aspect of the Windows platform. More importantly, a fix was quickly released.
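The spike the team saw can be sketched as simple aggregation over incoming crash buckets; the thresholds and bucket names below are invented for illustration:

```python
from collections import Counter

def find_spikes(incoming, baseline, factor=5.0, min_count=100):
    """Flag crash buckets whose report count jumped well above their
    historical baseline: the signal that a freshly released patch or
    driver just broke something in the field.

    incoming: iterable of bucket ids, one per crash report received
    baseline: dict mapping bucket id -> typical count for the period
    """
    counts = Counter(incoming)
    spikes = {}
    for bucket, count in counts.items():
        expected = baseline.get(bucket, 0)
        if count >= min_count and count > factor * max(expected, 1):
            spikes[bucket] = count
    return spikes

# A bucket that normally sees ~20 reports suddenly sees 500, just as
# in the third-party patch story above.
baseline = {"word_save_crash": 20, "excel_calc_crash": 15}
reports = ["word_save_crash"] * 500 + ["excel_calc_crash"] * 18
print(find_spikes(reports, baseline))  # {'word_save_crash': 500}
```

The real pipelines are of course far more sophisticated, but even this toy version shows why the signal is so immediate: one bad patch concentrates all its crashes into a single bucket.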

An early reaction was that this type of telemetry would obsolete much of testing: we could simply have enough people running the product to find the parts that crashed or were slow (later advances in telemetry). Of course most bugs aren’t that severe, but even assuming they were, this automation of testing was a real thought.

But instead what happened was that testing quickly became the best user of this telemetry data, using it while analyzing the code base, understanding where the code was most fragile, and thinking of ways to gather more information. The same could be said for development. Believe it or not, some were concerned that development would get sloppy and introduce bugs more often, knowing that if a bug was bad enough it would pop up in the telemetry reports. Instead, of course, development became obsessed with the telemetry, and it became a routine part of their process as well.

The result was just better and higher quality software. As our industry proves time and time again, the improvements in tools allow the humans to focus on higher level work and to gain an even better understanding of the complexity that exists. Thus telemetry has become an integral part of testing much the same way that improvements in languages help developers or better UX toolkits help design.

It Takes a Village

Dave’s post on testing motivated me to write this. I’ve written posts about the role of design, product management, general management and more over the years as well. As “software eats the world” and as software continues to define the critical infrastructure of society, we’re going to need more and more specialized skills. This is a natural course of engineering.

When you think of all the specialties needed to build a house, it should not be surprising that software projects will need increasing specialization. We will need not just front-end or back-end developers, project managers, designers, and so on. We will continue to add specialties such as security, operations, linguistics, accessibility, and more. As software matures these will not be ephemeral specializations but disciplines all by themselves.

Tools will continue to evolve, and that will enable individuals to do more and more. Ten years ago, to build a web service your startup required people with the skills to acquire and deploy servers, storage, networks, and routers. Today, you can use AWS from a laptop. But now your product has a service API and integrations with a dozen other services, and one person can’t continuously integrate, test, and validate all of those while still moving the product forward.

Our profession keeps moving up the stack, but the complexity only increases, and the demands from customers for an always-improving experience continue unabated.

–Steven

PS: My all-time favorite book on engineering, and one that shaped a lot of my own views, is To Engineer Is Human by Henry Petroski. It talks about famous engineering “failures” and how engineering is all about iteration and learning. To anyone who has ever released a bug, this should make sense (hint: that’s every one of us).

Written by Steven Sinofsky

September 25, 2014 at 7:45 pm

Posted in posts


Mobile OS Paradigm

Cycle of nature of work, capabilities of tools, architecture of platform.

Are tablets the next big thing, a saturated market (already), dead (!), or just in a lull? The debate continues while the sales of tablets continue to outpace laptops and will soon overtake all PCs (of all form factors and OS). What is really going on is an architectural transformation—the architecture that defined the PC is being eclipsed by the mobile OS architecture.

The controversy of this dynamic rests with the disruptive nature—the things that were easy to do with a PC architecture that are hard or impossible to do with a mobile OS, as well as the things in a mobile OS that make traditional PCs seem much easier. Legacy app compatibility, software required for whole professions, input preferences, peripherals, and more are all part of this. All of these are also rapidly changing as software evolves, scenarios adapt, and with that what is really important changes.

Previous posts have discussed the changing nature of work and the new capabilities of tools. This post details the architecture of the platform. Together these three form an innovation cycle—each feeding into and from each other, driving the overall change in the computing landscape we see today.

The fundamental shift in the OS is really key to all of this. For all the discussed negatives the mobile OS architecture brings to the near term, it is also an essential and inescapable transition. Often during these transitions we focus in the near term on the visible differences and miss the underlying nature of the change.

During the transition from mini to PC, the PC’s low price and low performance created a price/performance gap that the minis thought they would exploit. Yet the scale of volume, the architectural openness, and the rapid improvement in multi-vendor tools (and more) contributed to a rapid acceleration that the minis could not match.

During the transition from character-based to GUI-based PCs many focused on the expense of extra peripherals such as graphics cards and mice, requirement for more memory and MIPs, not to mention the performance implications of the new interface in terms of training and productivity. Yet, Moore’s law, far more robust peripheral support (printers and drivers), and ability to draw on multi-app scenarios (clipboard and more) transformed computing in ways character-based could not.

The same could be said about the transition to internetworking with browsers. The point is that the ancillary benefits of these architectural transitions are often overlooked while the dialog primarily focuses on the immediate and visible changes in the platform and experience. Sometimes the changes are mitigated over time (i.e. adding keyboard shortcuts to GUI or the evolution of the PC to long file names and real multi-tasking and virtual memory). Other times the changes become the new paradigm as new customers and new scenarios dominate (i.e. mouse, color, networking).

The transition to the mobile OS platforms is following this same pattern. For all the debates about touch versus keyboard, screen-size, vertical integration, or full-screen apps, there are fundamental shifts in the underlying implementation of the operating system that are here to stay and have transformed computing.

We are fortunate during this transition because we first experienced this with phones that we all love and use (more than any other device) so the changes are less of a disconnect with existing behavior, but that doesn’t reduce the challenge for some or even the debate.

Mobile OS paradigm

The mobile OS as defined by Android, iOS, Windows RT, Chrome OS, Windows Phone, and others is a very different architecture from the PC as envisioned by Windows 7/8, Mac OS X, or desktop Linux. The paradigm includes a number of key innovations that, taken together, define the new paradigm.

  1. ARM. ARM architecture for mobile provides a different view of the “processor”: SoC, multi-vendor, simpler, lower power consumption, fanless, rich graphics, connectivity, sensors, and more. All of these are packaged in a much lower cost way. I am decidedly not singling out Intel/AMD about this change, but the product is fundamentally different than even Intel’s SoCs and business approach. ARM is also incompatible with x86 instructions which means, even virtualized, the existing base of software does not run, which turns out to be an asset during this change (the way OS/360 and VMS didn’t run on PCs).
  2. Security. At the heart of mobile is a more secure platform. It is not more secure because there are fewer pointers in the implementation or fewer APIs, but because apps run with a different notion of what they can and cannot do, and there is simply no way to get apps on the device that violate those rules (other than for developers, of course). There’s a full kernel there, but you cannot just write your own kernel-mode drivers to do whatever you want. Security is a race, of course, and so more socially engineered, password-stealing, packet-sniffing, phone-home evil apps will no doubt make their way to mobile, but you won’t see drive-by buffer-overrun attacks take over your device, keystroke loggers, or apps that steal other apps’ data.
  3. Quality over time and telemetry. We are all familiar with the way PCs (and to a lesser but non-zero degree, Macs) decay over time or get into states where only a reformat or re-imaging will do. Fragility of the PC architecture in this regard is directly correlated with its openness, and so very hard to defend against, even among the most diligent enthusiasts (myself included). The mobile OS is designed from the ground up with a level of isolation between the OS and apps, and between apps, that all but guarantees the device will continue to run and perform the way it did on the first day. When performance does take a turn for the worse, there’s ongoing telemetry that can easily point to the errant/causal app, and removing it returns things to that baseline level of excellence.
  4. App store model. The app store model provides for both a full catalog of apps easily searched and a known/reviewed source of apps that adhere to some (vendor-specified) level of standards. While vendors are taking different approaches to the level of consistency and enforcement, it is fair to say this approach offers so many advantages. Even in the event of a failure of the review/approval process, apps can be revoked if they prove to be malicious in intent or fixed if there was an engineering mistake. In addition, the centralized reviews provide a level of app telemetry that has previously not existed. For developers and consumers, the uniform terms and licensing of apps and business models are significant improvements (though they come with changes in how things operate).
  5. All day battery life. All day battery life has been a goal of devices since the first portable/battery-powered PCs. The power draw of x86 chipsets (including controllers and memory), the reliability challenges of standby power cycles, and more have made this incredibly difficult to reliably “add on” to the open PC architecture. Because of the need for device drivers, security software, and more, it is commonplace for a single install or peripheral to dramatically change the power profile of a traditional device. The “closed” nature of a mobile OS, along with the process/app model, makes it possible to have all day battery life regardless of what is thrown at it.
  6. Always connected. A modern mobile OS is designed to be always connected to a variety of networks, most importantly the WWAN. This is a capability that runs from the chipset through the OS. This connectivity is not just an alternative for networking; it is built into the assumptions of the networking stack, the process model, the app model, and the user model. It is ironic that the PC architecture, for which connectivity was optional, is still less good at dealing with intermittent connectivity than mobile, which has always been less consistent than LAN or Wi-Fi. The need to handle constant change in connectivity drove a different architecture. In addition, the ability to run with essentially no power draw and the screen off while “waking up” instantly for inbound traffic is a core capability.
  7. Always up-to-date apps/OS. Today’s PC OSes all have updaters and connectivity to repositories from their vendors, but from the start the modern mobile OS is designed to be constantly updated at both the app and OS level from one central location (even if the two updates are handled differently). We are in a bit of an intermediate state, because on PCs there are some apps (like Chrome and Firefox, and security patches on Windows) that update without prompts by default, yet on mobile we still see some notifications for action. I suspect in short order we will see uniform, seamless, and transparent updates.
  8. Cloud-centric/stateless. For decades people have had all sorts of tricks to try to maintain a stateless PC: the “M” drive, data drives or partitions, roaming profiles, boot from server, VM or VDI, even routine re-imaging, etc. None of these worked reliably, and all had the same core problem: whatever could go wrong if you weren’t running them could still go wrong, and then your one good copy was broken everywhere. The mobile OS is designed from the start to have state and data in the cloud, and given the isolation, separation, and kernel architecture, you can reliably restore your device, often in minutes.
  9. Touch. Touch is clearly the most visible and most challenging transition. Designing the core of the OS and app model for touch first, but with support for keyboards, has fundamentally altered the nature of how we expect to interact with devices. No one can dispute that for existing workloads on existing software, mouse and keyboard are superior and will remain so (just as we saw in the transitions from mainframe to mini, CUI to GUI, client/server to web, etc.). However, as the base of software and users grows, the reality is that things will change—work will change, apps will change, and thus work products will change—such that touch-first will continue to rise. My vote is that the modern “laptop” for business will continue to be large-screen tablets with keyboards (just as the original iPad indicated). The above value propositions matter even more to today’s mobile information worker, as evidenced by the typical airport waiting area or hotel lobby lounge. I remain certain that innovation will continue to fill in the holes that currently exist in the mobile OS and tablets when it comes to keyboards. Software will continue to evolve and change the nature of precision pointing, making it something you need only for PC-only scenarios.
  10. Enterprise management. Even in the most tightly managed environment, the business PC demonstrates the challenges of the architecture. Enterprise control on a mobile OS is designed to be a state management system, not a compute-based approach. When you use a managed mobile device, enterprise management is about controlling access to the device and some set of capabilities (policies), not about running arbitrary code and consuming arbitrary system resources. The notion that you might type your PIN or password into your mobile device and initiate a full scan of your storage and the installation of an arbitrary amount of software before you can answer a call is not something we will see on a modern mobile OS. So many of the previous items in the list have been seen as challenges by enterprise IT, and somewhat ironically the tools developed to diagnose and mitigate them have only deepened the challenges for the PC. With mobile storage deeply encrypted, VPN access to enterprise resources, and cloud data that never lands on your device, there are new ways to think of “device management”.

Each of these are fundamental to the shift to the mobile OS. Many other platform features are also significantly improved such as accessibility, global language support, even the clipboard and printing.

What is important about these is how much of a break from the traditional PC model they are. It isn’t any one of these as much as the sum total that one must look at in terms of the transition.

Once one internalizes all these moving parts, it becomes clear why the emphasis on the newly architected OS and the break from past software and hardware is essential to deliver the benefits. These benefits are now what has come to be expected from a computing device.

While a person new to computing this year might totally understand a large-screen device with a keyboard for some tasks, it is not likely to make much sense to them that one must reboot, re-image, or edit the registry to remove malware, or that a device goes from x hours of battery life to x/2 hours just because some new app was installed. At some point the base expectations of a device change.

The mobile OS platforms we see today represent a new paradigm. This new paradigm is why you can have a super computer in your pocket or access to millions of apps that together “just work”.

–Steven Sinofsky (@stevesi)

 

Written by Steven Sinofsky

August 12, 2014 at 1:00 pm

Going Where the Money Isn’t: Wi-Fi for South African Townships

Spending time in Africa, one is always awestruck. The continent has so much to offer, from sands to rain forests, from apes to zebras, from Afrikaans to Zulu. More than 1.1 billion people, 53 countries and at least 2,000 different spoken languages make for amazing diversity and energy.

Yet even while spending just a little time, you quickly see the economic challenges faced by many — slums, townships, settlements and the poverty they represent are seen all too frequently. The contrast with the developed world is immense. As a visitor, you’re not particularly surprised to find difficulty in staying connected to the wireless services you’ve become reliant upon.

South African Alan Knott-Craig is an experienced entrepreneur who is setting out to bring connectivity via Wi-Fi to slums and townships across South Africa.

We hear about the mobile revolution in Africa all the time. Today, this is a revolution in voice and text on feature phones and increasingly on smartphones, phablets and small tablets. Smartphones are making a rapid rise in use, if for no other reason than they have become inexpensive and ubiquitous on the world stage, and also thanks, in part, to reselling of used phones from developed markets.

But keeping smartphones connected to the Internet is straining the spectrum in most countries, and is certainly straining the connectivity infrastructure. Africa, for the most part, will “skip over” PCs, as hundreds of millions of people connect to the Internet exclusively by phones and tablets. But there’s an acute need for improved connectivity.

The problem is that, even in the most developed areas of Africa, the deployment of strong and fast 3G and 4G coverage is lagging, and the capital that is available will flow to build out areas where there are paying customers. That means that the outlying areas, where a lot of people live, will continue to be underserved for quite some time.

Alan Knott-Craig, an experienced South African entrepreneur who is setting out to bring connectivity via Wi-Fi across his homeland, knows that Internet access is transformative to those in slums and townships. His previous company, Mxit, where he was CEO, developed a wildly popular social network for feature phones. It delivered a vast array of services, from education to community to commerce, and is in use by tens of millions.

Given the challenges of connectivity in Africa, you often find yourself searching for a Wi-Fi connection for any substantial browsing or app usage. The best case — except for a couple of markets and capital cities — is that you will get strong 3G and occasional 4G, highly dependent on carrier and location. It is not uncommon for folks to have smartphones that are used for voice and text when on the network, and apps that are used only when there is Wi-Fi. It’s not just a way to save money or avoid your data cap — Wi-Fi is a necessity.

“Going where the money isn’t”

One can imagine there’s a big business to be had building out the Wi-Fi hotspot infrastructure in the country. Knott-Craig recognized this as he began to explore how to bring connectivity to more people.

Having grown up in South Africa and deeply committed to both the social and business needs of the country, Knott-Craig has also dedicated his businesses to those who are least well served and would benefit the most. Over the past 20 years, the improvements in service delivery to the slums and townships of South Africa have improved immensely, reducing what once seemed like an insurmountable gap. While there is clearly a long way to go, progress is being made.

One of many settlements or townships you can see in South Africa. This one is outside of Cape Town, adjacent to a highway. The cement buildings at the edge of town are public toilets.

The transformation that mobile is bringing to townships is almost beyond words to those who are deeply familiar with the challenges. Talking and texting with family and friends are great and valuable. A mobile phone brings empowerment and identity (a phone number is the most reliable form of identity for many) in ways that no other service has been able to. Access to information, education and community all come from mobile phones. Mobile is a massive accelerator when it comes to closing economic divides.

All too often in business, the path is to build a business around where the money is. Knott-Craig’s deep experience in mobile communications told him that the major carriers will address connectivity in the cities and where there is already money. So, in his words, he set out to improve mobile connectivity by “going where the money isn’t.”

It was obvious to Alan that setting up Wi-Fi access would be transformative. The question was really how to go about it.

Building bridges

Time and again, one lesson from philanthropy is that the solutions that work and endure are the ones that enroll the local community. Services that are created by partnerships between the residents of townships, the government and business are the only way to build sustainable programs. The implication is that rolling into town with a bunch of access points and Internet access sounds like a good idea — who wouldn’t want connectivity? — but in practice would be met with resistance from all sides.

Thinking about the parties involved, Knott-Craig created Project Isizwe — helping to deliver Wi-Fi to townships on behalf of municipalities. “Isizwe” is Xhosa for “nation,” “tribe” or “people.”

Project Isizwe is located in Stellenbosch, which has an uncanny resemblance to Silicon Valley in weather, tempo and proximity to a premier technical talent pool from a leading university.

In the townships, people pay for Internet access by the minute, by the text and by the megabyte. Rolling out Wi-Fi needed to fit within this model, and not create yet another service to buy. So the first hurdle to address would be to find a way to piggyback on that existing payment infrastructure.

To do this, Knott-Craig worked with carriers in a very smart way. Carriers want their customers on the Internet, and in fact would love to offload customers to Wi-Fi when available. While they can do this in densely populated urban areas where access points can be set up, townships pose a very different environmental challenge, discussed below.

Given the carriers’ openness to offloading customers to Wi-Fi, the project devised a solution based on the latest IEEE standards for automatically signing on to available hotspots (something that we wish we would experience in practice in the U.S.). A customer of one of the major carriers, MTN for example, would initiate a connection to the Isizwe network, and from then on would automatically authenticate and connect using the mobile number and prepaid megabytes, just as though the Wi-Fi were a WWAN connection.

This “Hotspot 2.0” implementation is amazingly friendly and easy to use. It removes the huge barrier to using Wi-Fi that most experience (the dreaded sign-on page), and that in turn makes the carriers very happy. Because of the value to the carriers, Knott-Craig is working to establish this same billing relationship across carriers, so this works no matter who provides your service.
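The sign-on-and-billing handoff described above can be sketched roughly as follows. This is an illustrative model only; `Subscriber`, `authenticate` and `debit` are hypothetical names, not Isizwe's or any carrier's actual API.

```python
# Hypothetical sketch of Hotspot 2.0-style sign-on billed against prepaid data.
# All names and structures here are illustrative assumptions.

class Subscriber:
    def __init__(self, msisdn: str, prepaid_mb: int):
        self.msisdn = msisdn          # mobile number, used as the identity
        self.prepaid_mb = prepaid_mb  # prepaid balance held by the carrier

def authenticate(sub: Subscriber, carrier_db: dict) -> bool:
    """The device presents its SIM identity automatically;
    no sign-on page is ever shown to the user."""
    return sub.msisdn in carrier_db

def debit(sub: Subscriber, used_mb: int) -> bool:
    """Bill Wi-Fi usage against the same prepaid megabytes a WWAN
    session would draw from; refuse once the balance runs out."""
    if used_mb > sub.prepaid_mb:
        return False
    sub.prepaid_mb -= used_mb
    return True

carrier_db = {"+27821234567": True}   # e.g. a record for an MTN subscriber
user = Subscriber("+27821234567", prepaid_mb=100)
assert authenticate(user, carrier_db)   # connects with no captive portal
assert debit(user, 30) and user.prepaid_mb == 70
```

The point of the sketch is the shape of the flow: identity and balance live with the carrier, so the Wi-Fi network only has to verify and meter, exactly as a cellular session would.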

Of course this doesn’t solve the problem of where the bandwidth comes from in the first place. Since Knott-Craig is all about building bridges and enrolling support across the community, he created unique opportunities for those that already have unused bandwidth to be part of the solution.

Whether it is large corporations or the carriers themselves, Project Isizwe created a wholesale pool of bandwidth by either purchasing outright or using donated bandwidth to create capacity. The donated bandwidth provides a tax deduction benefit at the same time. Everyone wins. Interestingly, the donated bandwidth makes use of off-peak capacity, which is exactly when people in the townships want to spend time on the Internet anyway.

Government

With demand and supply established, the next step is to enroll the government. Here again, the team’s experience in working with local officials comes into play.

As with any market around the world, you can’t just put up public-use infrastructure on public land and start to use it. The same thing is true in the townships of South Africa. In fact, one could imagine an outright rejection of providing this sort of service from a private organization, simply because it competes with the service delivery the government provides.

In addition, the cost factor is always an issue. Too many programs for townships start out free, but end up costing the government money (money they don’t have) over time. It isn’t enough to provide the capital equipment and ask the government to provide operational costs, or vice versa. Project Isizwe is set up to ensure that public free Wi-Fi networks are a sustainable model, but needed government support to do so.

With the enrollment of the carriers and community support, bringing along the government required catering to their needs, as well. One of the biggest challenges in the townships is the rough-and-tumble politics — not unlike local politics in American cities. The challenge that elected officials have is getting their voice heard. Without regular television coverage, and with sporadic or limited print coverage, the Internet has the potential to be a way for the government to reach citizens.

As part of the offering, Knott-Craig and his team devised a platform for elected officials to air their point of view through “over the top” means. Essentially, part of the Wi-Fi service provides access to a public-service “station” filled with information directly from governmental service providers. Because of the nature of the technology, these streams can be cached and provided at an ultra-low cost.

The bottom line for government is that they are in the business of providing basic services for the community. Providing Internet access only adds to the menu of services, including water, electrical, sanitation, police, fire and more. Doing so without a massive new public program of infrastructure is a huge part of what Isizwe did to win over those officials.

Access points

With all the parties enrolled, there still needs to be some technology. It should come as no surprise that setting up access points in townships poses some unique challenges: Physical security, long-haul connectivity and power need to be solved.

One of the neat things about the tech startup ecosystem in South Africa is the ability to draw on resources unique to the country. The buildup of military and security technology, particularly in Pretoria, created an ecosystem of companies and talent well-suited to the task. Given the decline of these industries, it turns out that these resources are now readily available to support new private-sector work.

First up was building out the access points themselves. Unlike a coffee shop, where you would just connect an access point to a cable modem and hide it above a ceiling tile, townships have other challenges. Most of the access points are located high up in secured infrastructure, such as water towers. These locations also have reliable power and are already monitored for security.

The access points are secured in custom-designed enclosures, and use networking equipment sourced from Silicon Valley companies Ruckus Wireless and Ubiquiti Networks, which implement hotspots around the world. This enclosure design and build was done by experienced steel-manufacturing plants in Pretoria. In addition, these enclosures provide two-way security cameras with night vision to monitor things.

This provided for a fun moment the first time someone signed on. A resident had been waiting for the Wi-Fi and was hanging out right below the tower. As soon as they signed on for the first time, back at the operations center they could see this on the dashboard, as well as the camera, and used the two-way loudspeaker to ask, “So how do you like the Wi-Fi?” which was quite a surprise to a guy just checking football scores on his mobile phone.

Along with using engineers from Pretoria to design the enclosure, Isizwe also employed former military engineers to go on-site to install the access points. This work involved two high-risk activities. First, these men needed to climb up some pretty tall structures and install equipment those structures were never designed to hold. Their skills as linemen and soldiers helped here.

More importantly, these were mostly Afrikaner white men venturing into the heart of black townships to do this work. Even though South Africa is years into an integrated and equality-based society, the old emotions are still there, just as has been seen in many other societies.

This would be potentially emotionally charged for these Afrikaners in particular. Not only were there no incidents, but the technicians were welcomed with open arms; the work they were doing — “We are here to bring you Wi-Fi” — turned out to make it easy to put aside any (wrongly) preconceived notions. In fact, after the job, the installers were quite emotional about how life-changing the experience was for them to go into the townships for the first time and to do good work there.

The absence of underground cabling presents the challenge of getting these access points on the Internet in the first place. To accomplish this, each access point uses a microwave relay to connect back up to a central location, which is then connected over a landline. This is a huge advantage over most Wi-Fi on the African continent, which is generally a high-gain 3G WWAN connection that gets shared over local Wi-Fi.

Bytes flowing

The service is up and running today as a 1.0 version, in which Wi-Fi is free but limited to 250 megabytes; the billing infrastructure is just a few months away, which will enable pay-as-you-go usage of megabytes. The service will be free when there is capacity going unused.

The cost efficiency of the system is incredible, and that is passed along to individual users. Wi-Fi is provided at about 15 cents (ZAR cents) per gigabyte, which compares to more than 80 cents per megabyte for spotty 3G. That is highly affordable for the target customers.

Because of the limits of physics of Wi-Fi, the system is not set up to allow mass streaming of football, which is in high demand. Mechanisms are in place to create what amounts to over-the-top broadcast by using fixed locations within the community.

The most popular services being accessed are short videos on YouTube, music, news, employment information and educational services like Khan Academy and Wikipedia. The generation growing up in the townships is even more committed to education, so it is no surprise to see such a focus. Another important set of services being accessed are those for faith and religion, particularly Christian gospel content.

The numbers are incredible and growing rapidly, as Isizwe scales to even more townships. In the middle of the afternoon (when people are at school and working), we pulled up the dashboard and saw some stats:

  • 609 people were online right at that moment.
  • 4,455 people had already used the service that day.
  • 304 people had already reached their daily limit that day.
  • More than 70,000 unique users since the system went online with 1.0 in November 2013.
  • 208GB transferred since going online.
  • Almost all of the mobile traffic is Android, along with the newest Asha phones from Nokia. Recycled iPhones from the developed market also make a showing.

In terms of physical infrastructure required, it takes about 200 access points to cover a densely populated area of one million residents. This allows about 200,000 simultaneous users overall, with about 50-500 users per access point, depending on usage and congestion.

Growing

We talk all the time about the transformational nature of mobile connectivity, and many in the U.S. are deeply committed to getting people connected all around the world. Project Isizwe is an incredible example of the local innovation required to build products and services to deliver on those desires.

The public/private/community partnerships that are the hallmark of Isizwe will scale to many townships across South Africa. Building on this base, there are many exciting information-based services that can be provided. Things are just getting started.

 

–Steven Sinofsky (@stevesi)

This post originally appeared on Re/code.

Written by Steven Sinofsky

August 1, 2014 at 12:00 pm

Posted in recode


Disrupting Payments, Africa Style

Note from the author: For the past 10 years or so, I’ve been spending time informally in Africa, where I have a chance to visit with government officials, non-government organizations, and residents of towns, settlements and cities. In the next post, I’ll talk about free Wi-Fi in South African slums. This post originally appeared on Re/code.

Spending time in the developing world, one can always marvel at the resourcefulness of people living in often extraordinarily difficult conditions. The challenges of living in many parts of the world certainly cause one to reflect on what we see from day to day. Here in the U.S., we’re all familiar with the transformative nature of mobile phones in our lives. And for those in extreme poverty, the mobile phone has been equally, if not more, transformative.

One particular challenge faced by many in Africa, especially those living in fairly extreme poverty (less than $500 a year in purchase power), is dealing with money and buying things, and how the mobile phone is transforming those needs.

One could fill many posts with what it is like to live at such low levels of income, but suffice it to say that even when you are fortunate enough to ground your perspective in firsthand experience, it is still not possible to really internalize the challenges.

Slum life

Imagine living in a place where your small structure, like the one pictured below, is under constant threat of being demolished, and you run the risk of being relocated even farther away from work and family. Imagine a place where you don’t have the means of contacting the police, even if they might show up. Imagine a place where it takes a brick-sized amount of cash to buy a new cooking pot.

Representative home, or “struct,” in an informal settlement in the suburbs of Harare, Zimbabwe. Steven Sinofsky

These and untold more challenges define day-to-day life in slums, settlements and townships in developing countries in Africa, where the introduction of mobile phones has transformed a vast array of daily living tasks. Take the structure seen above, for example. It is a settlement in a vacant lot next to an office park in Harare, Zimbabwe. About 120 of these “structs” are occupied by about 600 people. For the most part, residents sell what they can make or cook; a small number possess some set of trade skills. Below, you can see a stand run out of one struct that sells eggs farmed on-site.

Shop window in front of home where fresh eggs are sold in an informal settlement in the suburbs of Harare, Zimbabwe. Steven Sinofsky.

Mobile phones and extreme poverty

Through a Shona-speaking interpreter, I had a chance to be part of a group (representing the government) hearing about life in the settlement. One question I got to ask was how many had mobile phones. Keeping in mind that the per capita spending power of these folks would be formally labeled “extreme poverty,” the answer blew me away. Nearly every adult had a mobile phone. When I asked for a show of hands, some proudly explained that they owned one but hadn’t brought it to the meeting.

Group of representatives of an informal settlement in the suburbs of Harare, Zimbabwe, showing off their mobile phones (all pictured owned a phone). Steven Sinofsky

Right away, you see the importance of a mobile phone when you consider the cost of the phone as a percentage of income. It is hard for us to imagine the trade-offs phone owners here are making, but in earning-power equivalence, a phone in this village is roughly what a car and its operation costs us — and we already have food, shelter and clothing in ample supply.

Communicating with family is a key function, because families are often separated by distance, as members go looking for work or to find a better place to live.

Phones are also used to call the police. Before mobile phones, there was simply no way to get the police to your home or settlement, since there are no landlines or nearby telephones. Keep in mind that most residents in these areas have no formal identification or address, and the settlements are often unofficial and unrecognized by authorities.

Phones are also used as an early warning system for authorities that might be on the way to evict folks, or perhaps perform some other type of inspection. The legalities of settlements and how they operate are a separate topic altogether, so I won’t go into that here.

Phones are used to keep track of what goods are selling where, or what goods might be needed. A network of people helps each other maximize income from goods based on where and when they are needed and can be sold. Think of this as extremely local information that was previously unavailable. This is crucial, because many goods have a limited shelf life and, frankly, many people produce the same goods.

A specific example for some people was the use of phones to monitor the supply chain for beer and alcohol. One set of people specialized in redistribution of beverages, and needed to keep tabs on events and unique needs in the community.

A favorite example of mine is “queue efficiency.” One of the many challenging aspects of life in extreme poverty is waiting — waiting in line for water, for transportation, for public services of all kinds. Phones play an important role in bringing some level of optimization to this process by sharing information on the size of queues and the quality of service available. We might think of this as Waze for lines, implemented over SMS friends-and-family networks.

Some of these uses seem straightforward, or simply cultural adaptations of what anyone with a phone would do. The fact that Africa skipped landlines is a fascinating statement about technological evolution — just as, for the most part, the continent will skip PCs in favor of smartphones, and will likely skip private ownership of transportation for shared-economy solutions (the history of Lyft is one that begins with shared rides in Zimbabwe).

Skipping over traditional banking

An old-economy service that Africa is likely to skip is personal banking. In the U.S., our tech focus tends to be on China and the role that mobile payments play there with WeChat or AliPay, or more broadly on the payments innovation coming from PayPal, Square and, of course, bitcoin. In Africa, almost no one has a bank account, and definitely no credit cards. But as we saw, everyone has a mobile phone.

The most famous mobile banking solution in Africa is M-Pesa (M for mobile, pesa is Swahili for money), which started in Kenya. People there use their phones to store cash and pay for goods. Similar solutions exist in many countries. Even in a place as remote and difficult as Somaliland, you can see these at work, as I did recently.

Madagascar is an island country with incredible beauty and an abundance of things not seen across Africa, including natural resources, farmable land and water, not to mention lemurs. Yet the country is incredibly poor, with a countrywide per capita GDP of $400, which puts it in the bottom 10 countries of the world. On average, people live at the extreme poverty level of $1.25 per day in purchase power. One city I visited in Madagascar is home to a UN Millennium Development Goals site, a program working to improve these extremely impoverished areas.

Sign signifying the entrance to a town in rural Madagascar designated a Millennium Development Goals location, one of about a dozen worldwide. Steven Sinofsky

Yet technology is making a huge difference in lives there. Madagascar has three main mobile phone carriers. These are all prepay, and penetration is extremely high, even in the most remote areas. The country is wired with mostly 2G connectivity; there is some coverage at 3G, but it is highly variable. The only common use for 3G is for Internet access using external USB modems connected to PCs (usually netbooks) and shared.

Most of the phones in use are feature phones, often hand-me-downs from the developed market. I’ve even seen a few iPhone 3s. One person complained about being unable to update iOS because he has no high-speed connection for such a download (showing that people are connected to the world, just not at a high download speed). A developed-market smartphone is pretty much a feature phone here, and the cost of another network upgrade means that one is far off. People are anxious for more connectivity, but along with cost, the current state of government will make progress a bit slower than citizens would like.

A huge problem in this type of environment is safely dealing with money. Madagascar’s currency trades at $1 U.S. to 2,500 Madagascar ariary. When you live off of 3,000 or so a day, you’re not going to carry around three bills, so very quickly you end up with a brick of 100 Ar notes. What to do with all those? Where can you put them? How do you keep them safe? How can you even keep them dry in a rain forest?

Well, along comes mobile “banking.” As easy as you can recharge your phone, you can add money to your stored money account. You walk up to a kiosk — there are thousands and thousands of them — and in a series of text messages with the shopkeeper, you give her money and your phone gains stored value.

Home and storefront selling recharge minutes for pay-per-use mobile phones; also a station for mobile-phone banking in rural Madagascar. Steven Sinofsky

With iOS and Android fragmentation, how would these apps work, given what must be finite dev resources? The implementation of this is all through an old-school standard called SIM Apps, or the SIM Application Toolkit.

This set of APIs and capabilities allows the installation of apps that reside on your SIM. These apps are simple menu-driven apps that look like WAP sites. They are secure and controlled by the carriers. Using this framework, mobile banking has reached unprecedented usage and importance in developing markets, particularly in Africa.

The scenario for usage is quite simple. You charge your phone with money, just as you would with minutes. When you want to buy something, you bring up the SMS app (pictured below, on an iPhone 3 in Malagasy) and initiate a transaction. The merchant gives you a code, which you enter along with the merchant’s identifying code. You then type in an amount, which is verified against your current balance. The merchant then receives a notification, and the transaction is complete. The whole system is safe from theft because of the connection to your mobile number, two-factor authentication and so on. There is no carrier dependency, so you can easily send/receive to any carrier, though the carrier has your balance. This isn’t an interest-earning savings account, but rather a transaction or debit account (of course, in the U.S., few of us earn interest on demand deposits these days, anyway).

Screen showing “My Account” in Malagasy, displayed on a recycled iPhone 3 (note the absence of a cellular connection). Steven Sinofsky
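A minimal sketch of that purchase flow, assuming hypothetical message formats and names (`pay`, the merchant and transaction codes); real SIM-toolkit apps vary per carrier:

```python
# Illustrative model of one mobile-money purchase: verify the amount
# against the stored balance, then produce the merchant's notification.
# Codes, names, and amounts here are assumptions for illustration.

def pay(balance_ar: int, merchant_code: str, tx_code: str, amount_ar: int):
    """Returns (new_balance, receipt). On a failed verification the
    balance is unchanged and the receipt is None (no transfer happens)."""
    if amount_ar <= 0 or amount_ar > balance_ar:
        return balance_ar, None
    receipt = f"PAID {amount_ar} Ar to {merchant_code} (tx {tx_code})"
    return balance_ar - amount_ar, receipt

balance = 10_000  # stored value in ariary
balance, receipt = pay(balance, merchant_code="M123", tx_code="84XQ",
                       amount_ar=2_500)
assert balance == 7_500 and receipt is not None
```

The design point worth noting is that the balance check happens before any money moves, which is why a merchant never has to make change and a buyer can safely deal in small, exact amounts.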

You can also give and receive money from individuals. This is extraordinarily important, given the distances that often separate family members, or even a family’s main wage-earner, from home. The idea of sending money around to family members is an incredibly important part of the cash economy of low-income people. This market, called “remittance,” is estimated to be over $400 billion in developing markets alone.

Life is easier and safer for those using mobile banking this way. You can count on your money being safe. You don’t need to carry around cash and worry about loss, theft, or water and weather destroying physical currency. You can easily deal with small and exact amounts. As a merchant, you don’t have to make change. It is just better in every dimension.

The carriers profit by taking a percentage of the transaction, which is high in the same way that check-cashing in the U.S. is high (and credit cards, for that matter). The fee is about two percent, which I am not sure will be sustainable, given the competition between carriers. I also think it will be fascinating to see how developed-market companies like Western Union evolve to support mobile payments, as they provide integration points to the developed-market financial systems. It is not uncommon to see a Western Union representative also offering phone recharge and mobile banking services.
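As a back-of-the-envelope illustration of that roughly two percent fee (the rate and the amounts below are assumptions chosen only for the arithmetic):

```python
# Sketch of the carrier's take on a transaction at an assumed ~2% rate.

FEE_RATE = 0.02  # illustrative; actual rates vary by carrier and country

def carrier_fee(amount_ar: int) -> float:
    """Fee the carrier keeps on a transaction of amount_ar ariary."""
    return amount_ar * FEE_RATE

# On a 25,000 Ar remittance the carrier would keep about 500 Ar.
assert carrier_fee(25_000) == 500.0
```

Even at that modest rate, fees compound across millions of small daily transactions, which is why competition between carriers could well push the rate down.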

In our environment, we would see this as a convenience, like a debit card. But in Africa, it is far more secure and convenient, because you only need your phone, which you will carry with you almost all the time, just as we do in the U.S.

I think the most interesting point of note in this solution is how it essentially skips over banking. If we think about our own lives, and especially those of the generation entering the workforce now, banking is most decidedly archaic. The whole idea of opening an account and dealing with a level of indirection that offers very little by way of useful services — it just feels ripe for disruption. Our installed base of infrastructure makes this very difficult, but in the developing world that challenge doesn’t exist. It isn’t likely that most people will graduate to full-fledged banking, just as we don’t expect people to graduate from a mobile phone to a full-fledged PC.

It also isn’t hard to imagine this type of mobile banking taking off first in the cash-based part of the developed world, where today people pay fees to cash checks and buy money orders, absent a bank account. The large numbers of check-cashing storefronts located near lower-income areas have much in common with these mobile-money kiosks. One example is remittance. Many immigrants in the U.S. are the source of remittance funds going to developing markets. Seattle, for example, has one of the largest populations of Somalis outside of East Africa, and they routinely send funds back to their families. Today, this is a difficult process, and it could be made a lot easier with a global and mobile solution.

Looking forward


Merchant using a credit-card reader attempting to get a stronger signal to complete the transaction in Anosibe, Madagascar. Steven Sinofsky

I look forward to solutions like this for our own lives here in the U.S. We see some of this in service-by-service cases. For example, using Lyft is completely cashless. I can use PayPal at merchants like Home Depot. Obviously, we all see Square and other payment mechanisms. Each of these shares a common connection to established banking and plastic cards. That’s where I think disruption awaits. Will this be bitcoin alone? Will someone, even a carrier, develop and scale a simple stored-value mechanism like that being used by billions of people already?

For myself, and no doubt for many reading this, this transformation is old hat. I’ve seen these changes over the past decade across many countries in Africa and elsewhere. Africa isn’t a single marketplace by any stretch. What is working in Madagascar, Kenya, Somaliland and other countries might not work elsewhere, or might not work for all segments of a given economy. Stay tuned for more observations from this trip.

It is always worth a reminder how some changes can bring about a massive difference in quality of life.

–Steven Sinofsky @stevesi

P.S.: What happens when you’re forced to use high-tech 3G connectivity to do a Visa card transaction? The merchant (pictured above) goes outside in a rain forest and aims for a stronger connection for the card reader. Yikes!

Written by Steven Sinofsky

July 25, 2014 at 9:00 am

Posted in posts, recode


Apps: Shrapnel v. Bloatware

Much is being said lately about the trend to unbundle capabilities on the web and in apps. Is this a new trend, a pendulum, or another stage in the evolution of providing software solutions for work and life? Will we learn what some would say are lessons from a past generation of software and avoid bloatware? Or perhaps we will relive some of the experiences of that era, and our phones and tablets will be littered with app shrapnel, as our PCs once were?

My own personal experience in product choices is marked by a near constant tension over not just bundle v. unbundle from a product perspective, but also from a business perspective. Whether on development tools, Office, Windows, or internet services I’ve experienced the unbundle <> bundle dynamic. I’ve bundled, unbundled, and had the “internal” debates about what to do when, what went well or not. If you’re interested in an early debate about bundling Office you can see the Harvard case study on the choice of “best of breed v. suite” in Finding the Suite Spot ($).

The Pendulum

This HBR article does a good job of bringing forth some of the history and describing the challenges of positioning unbundle/bundle as both a binary choice and a pendulum or Krebs-like cycle of resource conservation. Marc Andreessen does a great job in these two tweetstorms of detailing the bundle/unbundle cycle on the internet and the computer history we both grew up with (http://tweetstorm.io/user/pmarca/481554165454209027 and http://tweetstorm.io/user/pmarca/481739410895941632).

There’s one maxim in business that drives so much of the back-and-forth or pendulum behavior we tend to see, which is that most strategies have a complementary approach (vertical v. horizontal, direct v. indirect, integrate v. distinct, first v. third-party, product org v. discipline org, quantitative v. qualitative performance evaluation, hack v. plan, etc.). So in business, depending on your roots or your history, and most importantly the context you find yourself in, you are going down a path of one or more of these attributes.

Over time your competition tends to pick you apart the other way or ways. Equally likely, your ecosystem builds up around you innovating in parts where you are weaker, gaining strength, and showing off new approaches to product or market. Certainly, if you’re a new company entering an established market you will not just copy the approach of the incumbent which is why new products seem to be at the other end of one of these spectrums.

Then as you get in trouble you look around and try to figure out what to do. There’s a good chance the organization will double down on the approach that has always worked — after all, as Christensen says, that is the natural energy force in an organization. That happens until a big moment of change (a major competitive success, leadership change, etc.), and then you change approaches. More often than not, your choice is to do the thing you weren’t doing before. If you’re around in the workforce long enough, you start to see things as a series of these evolutionary steps.

This is business; context is everything. There’s never a right answer in the absolute, only a right answer given the context.

The moments of change, of breaking the cycle or swinging back the other way, are the moments that unleash significant improvements in the work, the product, or the workplace.

History and Customers

As consumers we adopt new technologies without realizing or thinking about whether they are bundled or unbundled, and our choices and selections for one or other are highly dependent on the context at the time. There are times when bundling is essential to the distribution of technology, just as there are times when unbundling brings with it more choice, flexibility, and opportunity. Obviously the same holds for businesses buying products, only businesses have purchasing power that can make bundled things appear unbundled or vice versa.
It is worth considering a few tech examples:

  • Autos began with minimal electronics, followed by optional electronics, then increasingly elaborate integrated electronics and many now think that smartphones will be the best device for in-car electronics/apps (for example the BMW i series).
  • LinkedIn began as a network for professionals to list their credentials and connect to others professionally. Recently it has bundled more and more content-based functionality.
  • Mobile telephony used to have distinct local, long distance, text and then data plans, which have now been bundled into all-you-can-consume multi-device plans.
  • Word processing used to have optional spell-checking and mail merge, which were then bundled into single products, which were subsequently bundled into suites that now also bundle cloud services. Similarly, financial spreadsheets, data analysis, and charting were previously distinct efforts that are now bundled. Today we are seeing new tools that have different feature sets and approaches, representing some unbundling and some bundling.
  • Operating systems were once highly hardware dependent, then abstracted from hardware but with optional graphical interfaces, followed by a period of bundling of OS+Graphics, followed by a bundling of OS, graphical interface, and hardware in a single package. Today with services we’re seeing different combinations of bundling and unbundling innovations.
  • Microprocessors have been on a fairly continuous bundling effort relative to peripherals, graphics, and even storage.
  • Modern smartphones are a wonder of bundling, first at the hardware level (SoC packaging) followed by hardware+software, then through all the devices that were previously distinct (GPS, still camera, video camera, pedometer, game controller, USB storage, and more).

There are countless examples depending on what level in the full consumer offering is being considered (i.e. product, price, place, promotion). Considering just these examples, one can easily see the positives and potential pitfalls of any of these.

Yet in looking at these examples and others, one can make a few observations about how customers and teams approach bundling choices for products and services:

  • People like distinct products when exploring new capabilities and product teams like building single purpose tools early in product lifecycle, out of both focus and necessity/resources.
  • People like it when their favorite product adds features that previously required a separate product, especially when their favorite product is growing in usage. Product teams love to add more features to existing products when those features map to obvious needs.
  • People have some threshold for when an integrated product turns into an overwhelming product, but that “line in the sand” is impossible to define a priori and depends a great deal on how products are evolving around your product. Mobile phone plans today are great, but many are very unhappy with Cable TV bundles.
  • Competition can come from a bundle that you were previously not considering, or competition can come from unbundling the product you make.
  • Product managers often reach a point where they can no longer add new features and also see those features get used and earn credit for innovating.
  • Macro factors can radically alter your own views of what could/should be bundled. If your business does not have a software component and your competitors add one, attempting to bundle that functionality could be quite challenging (technically, organizationally). If the platform you target (autos, spectrum, screens) undergoes a major change in capability then so too does your view of bundling or unbundling.

These examples and observations make one thing perfectly clear: whether to bundle or unbundle features depends a great deal on context and customer scenarios and so the choices require a great deal of product management thought. The path to bundle or unbundle is not linear, predictable, or reactionary but a genuine opportunity and need for solid product thought.

Strategic questions

On the one hand, considering whether to bundle or unbundle innovations might just be “do what we can that is differentiated”. In practice there are some key strategy questions that come up time and time again when talking to product folks.

  • Discoverability. The most critical strategic question to bundle or unbundle is whether the new work will be discoverable by intended customers. In a new product, the early waves of innovative features often make sense bundled. Over time, just responding to customers means you’ll be bundling in new capabilities (whether organic or competitive).
  • Usability. When faced with a new feature or business approach, the usability of that approach is a key factor in your choice. If you’re unable to develop a user experience that permits successful execution of the desired outcome, then it doesn’t really matter whether you’re bundled or unbundled.
  • Depth. When making the choice to bundle or unbundle you have to think through how much you plan on innovating in the spaces. If you’re setting yourself up for a long-term head to head on depth versus believing you are “checking a box” you have different choices. Incumbents often view the best path to fending off a disruptive unbundled feature as adding a checkbox to compete (to avoid the trauma of a major change in approach). Marketing often has an urgency that drives a need for market response and that can be represented as an unbundled “add-on that no one cares about” or “a checkbox that can be communicated” — that might sound cynical until you’ve been through a sales cycle losing out to a “feature as a product”.
  • Business economics. If you charge directly for your product or service (or freemium), then there will be a strong incentive to bundle more and more into your existing offering. Sales will generally prefer to add more features at the current price. Marketing will potentially advocate for a new pricing level to increase revenue. If you choose to unbundle and develop a new product, side-by-side or companion, then you’ll need to consider what your attach rate might be. A bundled solution essentially sees a 100% attach rate to your existing product whereas a whole new product brings with it the need to generate demand and subsequent purchase or usage. An advertising-based service will see increased surface area for an unbundled solution but will also dilute usage. A web-based service allows for cross-linking and easy connection between two different properties, but apps will require separate downloads and minimal cross-app connections.
  • Usage economics. It might sound strange to separate business from usage, since especially in a SaaS world they are the same thing. In practice, if your revenue is tied directly to usage (page views, transactions, etc.) then your design needs to factor in how you measure and drive usage of the features, bundled or unbundled. If your economics are not tied directly to usage you will have more strategic latitude to consider how your offering plays out bundled v. unbundled (assuming your boss lets you keep working on something no one uses).
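The attach-rate tradeoff in the business economics point above can be made concrete with some back-of-the-envelope arithmetic. This sketch is purely illustrative — the installed base and the 15% cross-adoption rate are hypothetical assumptions, not figures from any real product:

```python
# Back-of-the-envelope comparison of bundled vs. unbundled reach.
# All figures below are illustrative assumptions, not real data.

existing_users = 1_000_000   # hypothetical installed base of the current product

# Bundled: the feature ships inside the existing product, so its
# "attach rate" to the installed base is effectively 100%.
bundled_reach = existing_users * 1.00

# Unbundled: a separate app or site must generate its own demand.
# Assume only a fraction of existing users discover and adopt it.
unbundled_attach_rate = 0.15  # hypothetical 15% cross-adoption
unbundled_reach = existing_users * unbundled_attach_rate

print(f"bundled reach:   {bundled_reach:,.0f}")
print(f"unbundled reach: {unbundled_reach:,.0f}")
```

Even with generous assumptions, the unbundled path starts an order of magnitude behind on reach, which is why a direct-revenue business tends to pull new capability into the existing offering.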

Product management approach

Should you add that new feature or capability to your existing product or should you create a new destination (app, site)? Should you break out a feature because unbundling is the new normal or will that just break everything? Those are the core questions any PM faces as a product grows.

One tip: do not claim that one approach (bundle v. unbundle) is good for users and the other approach is only good for business. In other words, bundle v. unbundle cannot be distilled down to pro-user or anti-user, or more importantly marketing v. product. The best product people know that context is everything and that positioning a choice as A against B is counterproductive—everyone is on the same team and has the same broad goals. As difficult as it is, working through these questions with as much dialog as possible and as much “walk in the other’s shoes” as possible is absolutely critical.

There are many natural forces at play that will drive one way or another.

For example, most organic product development will tend to expand the existing product as it builds on the infrastructure and momentum already present.

Most new acquisitions will tend towards acquiring unbundled solutions, aka competition, though in the enterprise space one can expect significant calls to integrate even disparate technologies.

Part of being a good PM is to step back and go through a thoughtful process about whether to bundle or unbundle new capabilities. The following are some design choices.

  • Advertising new features in proportion to expected usage. There’s a general tendency to advertise a new feature in the UX in an excessively prominent manner. You want people to know you fixed or added a feature. At the unbundle extreme this means a whole new app and a trend to shrapnel. At the bundle extreme this means a big UX to drive you to a new thing. The most critical choice is really making sure that you are designing the access to the feature to be in relative proportion to how much you expect your customer base to use something.
  • Plan for “n+1” in all experience choices. As you make the choice to bundle or unbundle, know ahead of time that this will not be the first place you make this choice. If you’re adding a new app today then chances are that will become the way you solve things down the road. If you’re adding new UX access to a feature then plan on more depth in that feature or more peer features. Is the choice you are making scalable for the growth in creativity and innovation you expect?
  • Integrate or connect in one direction, not both. If you bundle or unbundle there will be a relentless push to promote the connection between elements of the product or service. Demo flows, top-level UX, even deep linking between apps. At some extreme, if you bundle n items it might not be unrealistic to go down a path where each of the n items is connected to every one of the other n−1, and vice versa. This is incredibly common in line of business apps/modules.
  • Bundle and innovate, don’t bundle and deprecate. If you make a choice to bundle a capability into your mainline effort, do not bundle it to make it go away. Bundle it and think of it as just as important as other things you do. This dynamic appears when your competition does something you don’t like so you hope to have a checkbox and make the competitor go away. This never happens.
  • Designing for good enough leaves you open to disruption. Closely related to deprecating while bundling is the idea that a “tie is a win”. Once you’re established you often think that you can continue to win against a competitor with an integrated implementation that is “good enough”. That might work in short-term marketing but over time, if the area is important you’ll lose.
  • Expect hardware to be relentlessly bundled. If you connect to hardware in any way, then you’ll be faced with a relentless march towards bundling. Hardware naturally bundles because of the economics of manufacturing, the surplus of transistors, and the need to reduce power and surface area. Never bet on hardware or peripherals staying unbundled for long.
  • Expanding software depth is easy, but breadth often adds more value. Engineers and product managers love to round out features, add more depth, more customization, and more incremental improvements. This is where the customer feedback loop is really clear. In terms of growing the business and attracting new customers, expansion in breadth is almost always a better approach so long as you “bundle” features that seem natural. Over-indexing on depth, particularly early in a product life-cycle, leaves you open to a competitor that does what you do plus other valuable things, no matter how much you think your unbundled approach is cleaner and simpler.
  • Defined categories do not remain defined for long. In enterprise products the “category” or “magic quadrant” is everything. In practice, these very definitions are always in transition. Be on the lookout for being redefined by an action of bundling or unbundling.
  • Assume sales and marketing will prefer new capability to be bundled, or maybe not. Finally, to highlight how contextual this is, there is no default as to how outbound efforts will prefer you approach the problem. It is not necessarily the opposite of what you are doing or the same as a competitor. For example, if your sales force economics are such that they are strongly connected to a single product and sales motion, it will be clear that bundling will be preferred no matter what a competitor is up to. At an extreme, even an unbundled feature will be used as a closer or a discount, particularly in the enterprise. Conversely, even if your competition is highly bundled, your own outbound efforts might be structured such that unbundling is a competitive and sales win. You just never know. Most importantly, the first reaction isn’t the way to base your approach—spend the time to engage and debate.
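The combinatorics behind the “integrate or connect in one direction, not both” advice above can be sketched quickly. This is a hypothetical illustration (the function names and module counts are my own, not from the post): connecting every pair of n bundled modules in both directions means the integration work grows roughly with n squared, while connecting each pair in only one direction halves it.

```python
# Integration-link counts for n bundled modules (illustrative only).

def bidirectional_links(n: int) -> int:
    """Full mesh: each of n modules linked to the other n-1, both directions."""
    return n * (n - 1)

def one_direction_links(n: int) -> int:
    """Same mesh, but each pair of modules is connected in only one direction."""
    return n * (n - 1) // 2

for n in (3, 5, 10):
    print(f"{n} modules: {bidirectional_links(n)} bidirectional links "
          f"vs {one_direction_links(n)} one-direction links")
```

Either way the link count grows quadratically as the bundle grows, which is why constraining the direction (and number) of connections early is a scalability decision, not just a design nicety.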

To bundle or unbundle is a complex question that goes beyond the simplistic view that minimal design makes for good products. Take the time to engage broadly across the team, organization, and to project forward where you want to be as these are some of the most critical design choices you will make.

–Steven Sinofsky @stevesi


Written by Steven Sinofsky

June 28, 2014 at 3:00 pm

Posted in posts
