Companies often pay very close attention to new products from startups as they launch and ponder their impact on their own at-scale, mainstream work. Almost all of the time the competitive risk is deemed minimal. Then one day the impact is significant.
In fact, up until such a point most pundits and observers likely said that the startup would get overrun or crushed by a big company in the adjacent space. By then it is often too late for the incumbent, and what was a product challenge now looks like an opportunity to take on the challenges of venture integration.
Why is this dynamic so often repeated? Why does the advantage tilt to startups when it comes to innovation, particularly innovation that disrupts the traditional category definition or go to market of a product?
Much of the challenge described here is rooted in how we discuss technology disruption. Incumbents are faced with “disruption” on a daily basis and from all constituencies. To a great degree, as an incumbent the sky is always falling. For every product that truly disrupts there are likely hundreds of products, technologies, marketing campaigns, pricing strategies, and more that some were certain would be the last straw for an incumbent.
Because statistically new ideas are not likely to disrupt and new companies are likely to fail, incumbents become experts at defining away the challenges and risks posed by a new entrant into the market. Incumbents view the risk of wild swings in strategy or execution as much higher than the 1-in-100 odds of a new technology upending the near-term business. Factoring in any reasonable timeline, the incumbent has every incentive to side with the statistics.
To answer “why startups aren’t features” this post looks at the three elements of a startup that competes with an incumbent: incumbent’s reaction, challenges faced by the incumbent, and the advantages of the startup.
When a startup enters a space thought (by the incumbent or conventional wisdom) to be occupied by an incumbent, there is a series of reasonably predictable reactions that takes place. The more entrenched the incumbent, the more reasoned and bulletproof the logic appears to be. Remember, most technologies fail to take hold and most startups don’t grow into significant competitors. I’ve personally reacted to this situation as both a startup and as the incumbent.
Doesn’t solve a problem customers have. The first reaction is to just declare a product as not solving a customer problem. This is sort of the ultimate “in the bubble” reaction because the reality is that the incumbent’s existing customers almost certainly don’t have the specific problem being solved because they too live in the very same context. In a world where enterprises were comfortable sending PPT/PDFs over dedicated lines to replicated file servers, web technologies didn’t solve a problem anyone had (this is a real example I experienced in evangelizing web technology).
Just a feature. The first reaction to most startups is that whatever is being done is a feature of an existing product. Perhaps the most famous of all of these was Steve Jobs declaring Dropbox to be “a feature not a product”. Across the spectrum from enterprise to consumer this reaction is routine. Every major communication service, for example, enabled the exchange of photos (AIM, Messenger, MMS, Facebook, and more). Yet, from Instagram to Snapchat some incredibly innovative and valuable startups have been created that to some do nothing more than slight variations in sharing photos. In collaboration, email, app development, storage and more enterprise startups continue to innovate in ways that solve problems in uniquely valuable ways all while incumbents feel like they “already do that”. So while something might be a feature of an existing product, it is almost certainly not a feature exactly like one in an existing product or likely to become one.
Only a month’s work. One asset incumbents have is an existing engineering infrastructure and user experience. So when a new “feature” becomes interesting in the marketplace and discussions turn to “getting something done”, the conclusion is usually that the work is about a month. Often this is based on an estimate of how much effort the startup put into the work. However, the incumbent has all sorts of constraints that turn that month into many months: globalization, code reviews, security audits, training customer support, developing marketing plans, enterprise customer roadmaps, not to mention all the coordination and scheduling adjustments. On top of all of that, we all know that it is far easier to add a new feature to a new code base than to add something to a large and complex code base. So rarely is something a month’s work in reality.
One thing worth doing as a startup (or as a customer of an incumbent) is considering why the challenges continue even if the incumbent spins up an effort to compete.
Just one feature. If you take at face value that the startup is doing just a feature then it is almost certainly the case that it will be packaged and communicated as such. The feature will get implemented as an add-on, an extra click or checkbox, and communicated to customers as part of the existing materials. In other words, the feature is an objection handler.
Takes a long time to integrate. At the enterprise level, the most critical part of any new feature or innovation is how it integrates with existing efforts. In that regard, the early feedback about the execution will always push for more integration with existing solutions. This will slow down the release of the efforts and tend to pile on more and more engineering work that is outside the domain of what the competitor is doing.
Doesn’t fit with broad value proposition. The other side of “just one feature” is that the go to market execution sees the new feature as somehow conflicting with the existing value proposition. This means that while people seem to be seeing great value in a solution, the very existence of the solution runs counter to the core value proposition of the existing products. If you think about all those photo sharing applications, the whole idea was to collect all your photos and enable you to later share them or order prints or mugs. Along comes disappearing photos, and that doesn’t fit at all with what you do. At the enterprise level, consider how the enterprise world was all about compliance and containing information while faced with file sharing that is all about going beyond the firewall. Faced with reconciling these positioning elements, the incumbent will choose to sell against the startup’s scenario rather than embrace it.
Startups also have some advantages in this dynamic that are readily exploitable. Most of the time when a new idea is taking hold one can see how the startup is maximizing the value they bring along one of these dimensions.
Depth versus breadth. Because the incumbent often views something new as a feature of an existing product, the startup has an opportunity to innovate much more deeply in the space. If any scenario becomes interesting, the flywheel of innovation that comes from usage creates many opportunities to improve the scenario. So while the early days might look like a feature, a startup is committed to the full depth of a scenario and only that scenario. They don’t have any pressure to maintain something that already exists or spend energy elsewhere. In a world where customers want the app to offer a full-stack solution or expect a tool to complete the scenario without integrating something else, this turns out to be a huge advantage.
Single release effort. The startup is focused on one line of development. There’s no coordination, no schedules to align, no longer term marketing plans to reconcile and so on. Incumbents will often try to change plans but more often than not the reactions are in whitepapers (for enterprise) or beta releases (for consumer). While it might seem obvious, this is where the clarity, focus, and scale of the startup can be most advantageous.
Clear and recognizable value proposition/identity. The biggest challenge incumbents face when adding a new capability to their product/product line is where to put it so it will get noticed. There’s already enormous surface area in the product, the marketing, and also in the business/pricing. Even the basics of telling customers that you’ve done something new are difficult, and calling attention to a specific feature often ends up as a supporting point on the third pillar. Ironically, those arguing to compete more directly are often faced with internal pressures that amount to “don’t validate the competitor that much”. This means even if the feature exists in the incumbent’s product, it is probably really difficult to know that and equally difficult to find. The startup perspective is that the company comes to stand for the entire end-to-end scenario, and over time when customers’ needs turn to that feature or scenario, there is total clarity in where to get the app or service.
Even with all of these challenges, the dynamic continues: incumbents initially dismiss startup products, later attempt to build what they do, and in general have difficulty reacting to the inherent advantages of a startup. One needs to look long and hard for a story where an incumbent organically competed and won against a startup in a category or feature area.
More often than not the new categories of products come about because there is a change in the computing landscape at a fundamental level. This change can be the business model, for example the change to software as a service. It could also be the architecture, such as a move to cloud. There could also be a discontinuity in the core computing platform, such as the switch to graphical interface, the web, or mobile.
There’s a more subtle change, which is when an underlying technology change is simply too difficult for incumbents to do in an additive fashion. The best way to think about this is if an incumbent has products in many spaces but a new product arises that contains a little bit of two of the incumbent’s products. In order to effectively compete, the incumbent first must go through a process of deciding which team takes the lead in competing. Then they must address innovator’s dilemma challenges and allocate resources in this new area. Then they must execute both the technology plans and go to market plans. While all of this is happening, the startup, unburdened by any of these steps, races ahead creating a more robust and full-featured solution.
At first this might seem a bit crazy. As you think about it though, modern software is almost always a combination of widely reused elements: messaging, communicating, editing, rendering, photos, identity, storage, API / customization, payments, markets, and so on. Most new products represent bundles or mash-ups of these ingredients. The secret sauce is the precise choice of elements and of course the execution. Few startups choose to compete head-on with existing products. As we know, the next big thing is not a reimplementation of the current big thing.
The secret weapon in startups competing with large scale incumbents is to create a product that spans the engineering organization, takes a counter-intuitive architectural approach, or lands in the middle of the different elements of a go to market strategy. While it might sound like a master plan to do this on purpose, it is amazing how often entrepreneurs simply see the need for new products as a blending of existing solutions, a revisiting of legacy architectural assumptions, and/or emphasis on different parts of the solution.
—Steven Sinofsky (@stevesi)
Managing product development and management in general are rife with clichés. By definition, of course, a cliché is something that is true, but unoriginal. I like a good cliché because it reminds you that much of management practice boils down to things you need to do but often forget or fail to do often enough.
The following 15 clichés might prove helpful and worth making sure you’re really doing the things in product development that need to get done on a daily basis. Some of these are my own wording of others’ thoughts expressed differently. There’s definitely a personal story behind each of these.
Promise and deliver. People love to play expectations games, and that is always bad for collaboration internal to a team, with your manager, or externally with customers. The cliché “under promise and over deliver” is one that people often use with pride. If you’re working with another group or with customers, the work of “setting expectations” should not be a game. It is a commitment. Tell folks, with the best of intentions, what you believe you will do, and do everything to deliver that. Over time it is far more valuable to be known as someone who gets done what they say they will.
Make sure bad news travels fast. Things will absolutely go wrong. In a healthy team, as soon as things go wrong that information should be surfaced. Trying to hide or obscure bad news creates an environment of distrust or lack of transparency. This is especially noticeable on a team when the good news is always visible but for some reason less good news lacks visibility. Avoid “crying wolf”, of course, by making sure you are broadly transparent in the work you do.
Writing is thinking. We’re all faced with complex choices in what to do or how to go about what will get done. While some people are great at spontaneously debating, most people are not, and most people are not great at contributing in a structured way on the fly. So when faced with something complex, spend the time to think about some structure, write down sentences, think about it some more, and then share it. Even if you don’t send around the writing, almost everyone achieves more clarity by writing. If you don’t, then don’t blame writer’s block, but consider that maybe you haven’t formulated your point of view yet.
Practice transparency within your team. There’s really no reason to keep something from everyone on the team. If you know something and know others want to know, either you can share what you know or others will just make up their own idea of what is going on. Sharing this broad base of knowledge within a team creates a shared context which is incredibly valuable.
Without a point of view there is no point. In our world of A/B testing, MVPs, and iteration we can sometimes lose sight of why a product and company can/should exist. The reason is that a company brings together people to go after a problem space with a unique point of view. Companies are not built to simply take requests and get those implemented or to throw out a couple of ideas and move forward with the ones that get traction. You can do that as work for hire or consulting, but not if you’re building a new product. It is important to maintain a unique point of view as a “north star” when deciding what to do, when, and why.
Know your dilithium crystals. Closely related to your point of view as a team is knowing what makes your team unique relative to competition or other related efforts. Apple uses the term “magic” a lot and what is fascinating is how with magic you can never quite identify the specifics but there is a general feeling about what is great. In Star Trek the magic was dilithium crystals–if you ever needed to call out the ingredient that made things work, that was it. What is your secret (or as Thiel says, what do you believe that no one else does)? It could be branding, implementation, business model, or more.
Don’t ask for information or reports unless they help those you ask to do their jobs. If you’re a manager you have the authority to ask your team for all sorts of reports, slides, analysis, and more. Strong managers don’t exercise that authority. Instead, lead your team to figure out what information helps them to do their job and use that information. As a manager your job isn’t a superset of your team, but the reflection of your team.
Don’t keep two sets of books. We keep track of lots of things in product development: features, budgets, traffic, revenue, dev schedules, to do lists, and more. Never keep two versions of a tracking list or of some report/analysis. If you’re talking with the team about something and you have a different view of things than they do, then you’ll spend all your time reconciling and debating which data is correct. Keeping a separate set of books is also an exercise in opacity which never helps the broader team collaboration.
Showdowns are boring and nobody wins. People on teams will disagree. The worst thing for a team dynamic is to get to a major confrontation. When that happens and things become a win/lose situation, no one wins and everyone loses. Once it starts to look like battle lines are being drawn, the strongest members of the team will start to find ways to avoid what seems like an inevitable showdown. (Source: This is a line from the film “Wall Street”.)
Never vote on anything. On paper, when a team has to make a decision it seems great to have a vote. If you’re doing anything at all interesting, then it is almost certain that at least one person will have a different view. So the question is, if you’re voting, do you expect majority rule, 2/3rds, consensus, or are some votes more equal? Ultimately, once you have a vote, the people that disagree are now singled out and probably isolated. My own history is that any choice that was ever voted on didn’t even stick. Leadership is about anticipating and bringing people along to avoid these binary moments. It is also about taking a stand and having a point of view if you happen to reach such a point.
When presenting the boss with n alternatives he/she will always choose option n+1. If you’re asked to come up with a solution to a problem, or you run across a problem you have to solve but need buy-in from others, you’re taking a huge risk by presenting alternatives. My view is that you present a solution and everything else is an alternative, whether you put it down on paper or not. A table of pros/cons or a list of options like a menu almost universally gets a response of trying to create an alternative that combines attributes that can’t be combined. I’ve seen choices framed as cost/quality, cheap/profitable, or small/fast, and then the meeting concludes in search of the alternative that delivers both.
Nothing is ever decided at a meeting so don’t try. If you reach a point where you’re going to decide a big controversial thing at a meeting, then there’s a good chance you’re not really going to decide. Even if you do decide, you’re likely to end up with an alternative you didn’t think of beforehand, and thus one not as thought through or as feasible as you believed it to be by the end of the meeting. At the very least you’re not going to enroll everyone in the decision, which means there is more work to be done. The best thing to do is not to avoid a decision-making meeting but to figure out how you can keep things moving forward every day to avoid these moments of truth.
Work on things that are important not urgent. Mobile tools like email, Twitter, SMS, and notifications of all kinds from all sorts of apps have a way of dominating your attention. In times of stress or uncertainty, we all gravitate to working on what we think we can accomplish. It is easier to work towards inbox zero than to actually dive in and talk to everyone on the team about how they are handling things, or to walk into that customer situation. President Eisenhower and later Stephen Covey developed amazing tools for helping you to isolate work that is important rather than urgent.
Products don’t ship with a list of features you thought you’d do but didn’t. The most stressful list of any product development effort is the list of things you have to cut because you’re running out of time or resources. I don’t like to keep that list and never did, for two reasons. First, it just makes you feel bad. The list of things you’re not doing is infinitely long–it is literally everything else. There’s no reason to remind yourself of that. Second, whatever you think you will do as soon as you can will change dramatically once customers start using the product you do end up delivering to them. When you do deliver a product it is what you made and you’re not obligated to market or communicate all the things you thought of but didn’t get done!
If you’re interesting someone won’t agree with what you said. Whether you’re writing a blog, writing internal email, talking to a group, or speaking to the press, you are under pressure. You have to get across a unique point of view and be heard. The challenge is that if you only say things everyone believes to already be the case, then you’re not furthering the dialog. The reality is that if you are trying to change things or move a dialog forward, some will not agree with you. Of course you will learn, and there’s a good chance you were wrong, and that gives you a chance to be interesting in new ways. Being interesting is not the same as being offensive, contrarian, cynical, or just negative. It is about articulating a point of view that acknowledges a complex and dynamic environment that does not lend itself to simple truths. Do make sure you have the right mechanisms in place to learn just how wrong you were and with how many people.
For example, if you write a post of 15 management tips, most people won’t agree with all of them :)
–Steven Sinofsky (@stevesi)
More products are being created and developed faster today than ever before. Every day new services, sites, and apps are introduced. But with this surge in products, it’s become more difficult to get noticed and connect with users. In late 2013, Ryan Hoover founded Product Hunt to provide a daily view of new products that brings together an engaged community of product users with product makers. Today marks the next step in the growth of the company.
Interconnecting a Community
When you first meet Ryan it becomes immediately clear he has a passion for entrepreneurship and its surrounding ecosystem. Well before starting Product Hunt, he hosted intimate brunches to bring founders together. This came out of another email-based experiment named Startup Edition, where he assembled a weekly newsletter of founder essays on topics of marketing, product development, fundraising, and other challenges company builders face. This enthusiasm is prevalent on Twitter where he shares new products and regularly interacts with fellow enthusiasts in the startup community.
Ryan’s background comes from games, an ecosystem that is regarded as one of the most connected. Gamers love to stay on top of the latest products. Game makers love to connect with gamers. There’s an even larger community of game enthusiasts who value being observers in this dialog. Ryan grew up in the midst of a family-owned video game store so it’s no surprise that he has an incredibly strong sense of community. That’s why after college, he got involved in the gaming industry, first at InstantAction and then at PlayHaven. Each of these roles allowed Ryan to build the skills to foster both the product and community engagement sides of gaming, while also creating successful business opportunities for the whole community.
Spending time in the heart of gaming, between gamers and game makers, Ryan saw how those makers that fostered a strong sense of community around their game had stronger engagement and improved chances of future growth. Along the way he saw a wide variety of ways to build communities — and most importantly to maintain an open and constructive environment where praise, criticism, and wishes could be discussed between makers and enthusiasts.
About a year ago, Ryan launched, in his words, “an experiment” — a daily email of the latest products. After a short time, interest and subscribers to the mail list grew. So with a lot of hustle, the email list turned into a site. Product Hunt was launched.
Product Hunt started with a passion for products and has grown into a community of people passionate to explore and discuss new products with likeminded enthusiasts and makers of those products.
Product Hunt: More Than a Site
Product Hunt has become something of a habit for many since its debut. Today hundreds of thousands of “product hunters” visit the site, plus more through the mobile apps, the daily email, and the platform API. Every month, millions of visits to product trials, app stores, and download sites are generated. And nearly half of all product discussions include the product maker, from independent hackers to high-profile veteran founders.
Product Hunt is used by enthusiasts to learn about new products, colored with an unfiltered conversation with their makers. It serves the industry as a source for new and trending product areas. For many, Product Hunt is or will evolve to be the place you go to discover products in the context of similar products, along with a useful dialog with a community.
Product Hunt is much more than a site. Product Hunt is a community. In fact, Ryan and the team spend most of their energy creating, curating, and crafting a unique approach to building a community. His own experience as a participant and a maker led him to believe deeply in the role of community and engagement not just in building products, but also in launching new products and connecting with customers.
This led the team to create a platform for products, starting with the products they know best — mobile and desktop apps and sites.
The challenge they see is that today’s internet and app stores are overwhelmed with new products, as we all know. The stores limit interaction to one-way communication and reviews. If you want to connect with the product makers, there’s no way to do so. Ironically, makers themselves are anxious to connect but do so in an ad hoc manner that often lacks the context of the product or community. Product Hunt allows this type of community to be a normal part of interaction and not just limited to tech products.
Product Hunt is just getting started, but the enthusiasm is incredible. A quick Twitter search for “addicted to product hunt” shows in just a short time how many folks are making the search for what’s new a part of a routine. The morning email with the latest news is now a must-read and Ryan is seeing the technology industry use this as a source for the most up to date launches.
Product Hunt’s uniqueness comes from the full breadth of activity around new products and those enthusiastic about them:
Launch. Product Hunt is a place where products are announced and discovered for the first time. Most new products today don’t start with marketing or advertising, but simply “show up”. Makers know how hard it is to get noticed. They upload an app to a store or set up a new site and just wait. Gaining awareness or traction is challenging. Since the first people to use most new products are themselves involved in making products, they love to know about and experience the latest creations. New product links come from a variety of sources and already Product Hunt is becoming the go-to place for early adopters.
Learn. Learning about what’s new is just as challenging for enthusiasts. Most new products launched do not yet have full-blown marketing, white papers, or other information. In fact, in today’s world of launching-to-learn more about how to refine products, there are often more questions than answers. Community members submit just a short tagline and link to the product. Then the dialog begins. There are robust discussions around choices in the product, comparisons to other products, and more. Nearly half of the products include the makers in the discussion, sharing their stories and directly interacting with people. And these discussions are also happening in the real world, as members of the community organize meetups across the globe from Tokyo to Canada.
Share. Early adopters love to share their opinions and engage with others. On Product Hunt, the people determine which products surface as enthusiasts upvote their favorite discoveries and share their perspective in the comments. Openness, authenticity, and constructive sharing are all part of the Product Hunt experience, and naturally this enthusiasm spills outside the community itself.
Curate. With the help of the community, the team is constantly curating collections of products into themes that are dynamic and changing. This helps raise awareness of emerging product categories and gives consumers a way to find great products for specific needs. Recent lists have included GIF apps, tools used by product managers, and productivity apps. One favorite that shows the timeliness of Product Hunt was a list of iOS 8 keyboards the day after iOS 8’s launch.
One attribute of all products that serve an enthusiastic community is the availability of a platform to extend and customize the product. Product Hunt recently announced the Product Hunt API and already has apps and services that present useful information gathered from Product Hunt, such as the leaderboard and analytics platform.
Product Hunt + a16z
When I first hung out with Ryan outside of a conference room, he brought me to The Grove coffee shop on Mission St. We sat outside and began to talk about products, enthusiasts, and community. It was immediately clear Ryan sees the world of products in a unique way — he sees a world of innovation, openness to new ideas, and unfiltered communication between makers and consumers. As founder, Ryan embodies the mission-oriented founders a16z loves to work with, and he’s built a team that shares that passion and mission.
Andreessen Horowitz could not be more excited to lead this next round of investing, and I am thrilled to serve on the board. Please check out Product Hunt for yourself on the web, download its iOS app, or sign up for the email digest.
Note: This post originally appeared on a16z.com.
In a post last week, @davewiner described The Lost Art of Software Testing. I loved the post and the ideas about testing expressed (Dave focuses more on the specifics of scenario and user experience testing, so this post will broaden the definition to include that and the full range of testing). Testing, in many forms, is an integral part of building products. Too often, if the project is late or if hurry-up-and-learn or agile methods are employed, testing is one of those efforts where corners are cut. Test, to put it simply, is the conscience of a product. Testing systematically determines the state of a product. Testers are those entrusted with keeping everyone within 360º of a product totally honest about the state of the project.
Before you jump to Twitter to correct the above, we all know that scheduling, agile, lean, or other methods in no way at all preclude or devalue testing. I am definitely not saying that is the case (and could argue the opposite, I am sure). I am saying, however, that when you look at what is emphasized with a specific way of working, you are making inherent tradeoffs. If the goal is to get a product into market to start to learn because you know things will change, then it is almost certainly the case that you also have a different view of fit and finish, edge conditions, or completeness of a product. If you state in advance that you’re going to release every time interval and too aggressively pile on feature work, then you will have a different view of how testing fits into a crunched schedule. Testing is as much a part of the product cycle as design and engineering, and like those you can’t cut corners and expect the same results.
Too often some view testing as primarily a function of large projects, mature products, or big companies. One of the most critical hires a growing team can make is that first testing leader. That person will assume the role of a bridge between development and customer success, among many other roles. Of course when you have little existing code and a one-pizza sized dev team, testing has a different meaning. It might even be the case that the devs are building out a full test infrastructure while the code is being written, though that is exceedingly rare.
No one would argue against testing and certainly no one wants a product viewed as low quality (or one that has not been thoroughly tested as the above referenced post describes). Yet here we are in the second half century of software development and we still see products and services referred to as buggy. Note: Dave’s post inspired me, not any recent quality issues faced by other vendors.
Are today’s products actually more buggy than those of 10, 15, or 20 years ago? Absolutely not. Most every bit of software used today is on the whole vastly higher quality than anything built years ago. If vendors felt compelled, many could prove statistically (based on telemetry) that customers experience far more robust products than ever before. Products still do, rarely, crash (though the impact of that is mostly just a nuisance rather than a catastrophic data loss) and as a result the visibility seems much higher. It wasn’t too long ago that mainstream products would routinely (weekly if not daily) crash and work would be lost with the trade press anxiously awaiting the next updates to get rid of bugs. Yet products still have issues, some major, and all that should do is emphasize the role of testing. Certainly the more visible, critical, or fatal a quality issue might be the more we might notice it. If a social network has a bug in a feed or fails to upload a photo that might be vastly different from a tool that loses data you typed and created.
Today’s products and services benefit enormously from telemetry which informs the real world behavior of a product. Many thought the presence of this data would in a sense automate testing. As we often see with advances that some believe would reduce human labor, the challenges scale to require a new kind of labor or to understand and act on new kinds of information.
What is Testing?
Testing has many different meanings in a product-making organization, but in this post we want to focus on testing as it relates to the *verification that a product does what it is intended to do and does so elegantly, efficiently, and correctly*.
Some might just distill testing down to something like “find all the bugs”. I love this because it introduces two important concepts to product development:
- Bug. A bug is simply any time a product does not behave the way someone thought it should. This goes way beyond crashes, data loss, and security problems. Quite literally, if a customer/user of your product experiences the unexpected then you have a bug and should record it in some database. This means by definition testing is not the only source of bugs, but certainly is the collection and management point for the list of all the bugs.
- Specification. In practice, deciding whether or not a bug is something that requires the product to change means you have a definition of how a product should behave in a given context. When you decide the action to take on a bug, that is done with a shared understanding across the team of what a product should be doing. While often viewed as “old school” or associated with a classic “waterfall” methodology, specifications are how the product team has some sense of “truth”. As a team scales this becomes increasingly important because many different people will judge whether something is a bug or not.
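Taken together, a bug record is just a structured disagreement between expected and actual behavior, triaged against the specification. A minimal sketch in Python (the fields and statuses are illustrative, not from any particular bug tracker):

```python
from dataclasses import dataclass, field
from enum import Enum

class Status(Enum):
    ACTIVE = "active"        # confirmed, awaiting a fix decision
    BY_DESIGN = "by design"  # matches the specification; no code change
    FIXED = "fixed"
    WONT_FIX = "won't fix"   # real, but the risk of change outweighs the benefit

@dataclass
class Bug:
    """One record in the team's bug database."""
    title: str
    expected: str              # what the specification says should happen
    actual: str                # what the customer/tester observed
    reported_by: str           # testing is the collection point, not the only source
    status: Status = Status.ACTIVE
    notes: list = field(default_factory=list)

# Anything unexpected is a bug -- triage decides what, if anything, to do about it.
bug = Bug(
    title="Photo upload silently fails on retry",
    expected="Second upload attempt succeeds or shows an error",
    actual="Spinner runs forever; no error shown",
    reported_by="support",
)
```

The point of the `status` values is exactly the judgment call described above: “by design” and “won’t fix” are only defensible when there is a shared definition of what the product should do.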
Testing is also relative to the product lifecycle, as great testers understand one of the cardinal rules of software engineering—change is the enemy of quality. Testers know that when you have a bug and you change the code you are introducing risk into a complex system. Their job is to understand the potential impact a change might have on the overall product and weigh that against the known/reported problem. Good testers do not just report on problems that need to be fixed, but also push back on changing too much at the wrong time because of potential impact. Historically, for every 10 changes made to a stable product, at least one will backfire and cause things to break somehow.
Taken together these concepts explain why testing is such a sophisticated and nuanced practice. It also explains why it requires a different perspective than that of the product manager or the developer.
Checks and Balances
The art and science of making things at any scale is a careful balance of specialized skills combined with checks and balances across those skills.
Testing serves as part of the checks and balances across specializations. Testers do this by making sure everyone is clear on what the goals are, what success looks like, how to measure that success, and how to repeat those measures as the project progresses. By definition, testing does not make the product. That puts testers in the ideal position to be the conscience of the product. The only agenda testing has is to make sure what everyone signed up to do is actually happening and happening well. Testing is the source of truth for a product.
Some might say this is the product manager’s role or the dev/engineering manager’s role (or maybe design or ops). The challenge is that each of these roles has other accountabilities to the product and so are asked to be both the creator and judge of their own work. Just as product managers are able to drive the overall design and cohesiveness of a product (among other things) while engineering drives the architecture and performance (among other things), we don’t normally expect those roles to reverse and certainly not to be held by a single person.
One can see how this creates a balanced system of checks:
- Development writes the code. This is the ultimate truth of what a product does, but not necessarily what the team might want it to do. Development is protective of code and has one view of what to change, what are the difficult parts of code or what parts are easy. Development must balance adding and changing code across individual engineers who own different parts of the code and so on.
- Operations runs the live product/service. Working side by side with development (in a DevOps manner) there are the folks that scale a product up and out. This is also about writing the code and tools required to manage the service.
- Product management “designs” the product. I say design to be broader than Design (interaction, graphical, etc.) and to include the choice of features, target customers, and functional requirements.
- Product design defines how a product feels. Design determines the look and feel of a product, the interaction flows, and the techniques used to express features.
- And so on across many disciplines…
That also makes testing a big pain in the neck for some people. Testers want precision when it might not exist. Testers by their nature want to know things before they can be known. Testers by their nature prefer stability over change. Testers by their nature want things to be measurable even when they can’t be measured. Testers tend towards process or procedural thinking when others might tend towards non-linear thinking. We all know that engineers tilt towards wanting to distill things to 1’s and 0’s. To the uninitiated (or the less than amazing tester) testers can come across as even more binary than binary.
That said, all you need is testing to save you from yourself one time and you have a new best friend.
Why Do We (Still) Need Testing?
Software engineering is a unique engineering discipline. In fact, for the whole history of the field, different people have argued either that computer software is mostly a science of computing or that computing is a craft or artistic practice. We won’t settle this here. On the other hand, it is fair to say that at least two things are true. First, even art can have a technology component that requires an engineering-like approach, for example making films or photography. Second, software is a critical part of society’s infrastructure, and from electrical to mechanical to civil we require those disciplines to be engineers.
Software has a unique characteristic, which is that a single person can have an idea, write the code, and distribute it for use. Take that, civil engineers! Good luck designing and building a bridge on your own. Because of this characteristic of software, there is a desire to scale to large projects in this same way.
People who know about software bugs/defects know that there are two ways to reduce the appearance and cost of shipping bugs. First, don’t introduce them at all. Methodologies like extreme programming, buddy programming, and code reviews are all about creating a coding environment that prevents bugs from ever being typed.
Yet those methods still yield bugs. So the other technique employed is to get engineering to test all the code they write and to move the bug-finding efforts “upstream”. That is, write some new code for the product and then write code that tests your code. This is what makes software creation seem most like other forms of engineering or product creation. The beauty of software is just how soft it is—complete redesigns are keystrokes away and only have a cost in brain power and time. This contrasts sharply with building roads, jets, bridges, or buildings. In those cases, mistakes are enormously costly and potentially very dangerous. Make a mistake on the load calculations of a building and you have to tear it down and start over (or just leave the building mostly empty, like the Stata Center at MIT).
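“Write code that tests your code” looks like this in practice. A toy illustration in Python’s `unittest` (the function and its tests are invented for this sketch):

```python
import unittest

def parse_version(s: str) -> tuple:
    """Parse 'major.minor.patch' into a tuple of ints."""
    parts = s.strip().split(".")
    if len(parts) != 3 or not all(p.isdigit() for p in parts):
        raise ValueError(f"not a version string: {s!r}")
    return tuple(int(p) for p in parts)

class TestParseVersion(unittest.TestCase):
    def test_basic(self):
        # The "happy path" the developer had in mind while writing the code.
        self.assertEqual(parse_version("3.1.4"), (3, 1, 4))

    def test_whitespace(self):
        # Edge conditions: exactly the corners that get cut when schedules crunch.
        self.assertEqual(parse_version(" 1.0.0\n"), (1, 0, 0))

    def test_rejects_garbage(self):
        for bad in ("", "1.2", "a.b.c", "1.2.3.4"):
            with self.assertRaises(ValueError):
                parse_version(bad)

# Run with: python -m unittest this_module
```

Shipping the function and its tests together is the “upstream” move: the bug is caught at the keyboard rather than by a customer.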
Therefore moving detection of mistakes earlier in the process is something all engineering works to do (though not always successfully). In all but software engineering, the standard of practice employs engineers dedicated to the oversight of other engineers. You can even see this in practice in the basics of building a home where you must enlist inspectors to oversee electrical or steel or drainage, even though the engineers presumably do all they can to avoid mistakes. On top of that there are basic codes that define minimal standards. Software lacks all of these as a formality.
Thus the importance of specialized testing in software projects is a pressing need that is often viewed as counter-cultural. Lacking the physical constraints as well, engineers tend to feel “gummed up” and constrained by what would be routine quality practices in other engineering disciplines. For example, no one builds as much as a kitchen cabinet without detailed drawings with measurements. Yet routinely we in software build products or features without specifications.
Because of this tension between acting like traditional engineers and working to maintain the velocity of a single inspired engineer, there’s a desire to coalesce testing into the role of the engineer which can potentially allow for more agility or moving bug finding more upstream. One of the biggest changes in the field of software has been the availability of data about product quality (telemetry) which can be used to inform a project team about the state of things, perhaps before the product is in broad use.
There’s some recent history in the desire to move testing and development together, and that is the devops movement. Devops is about rolling operational efforts closer to engineering to prevent the “toss it over the wall” approach used earlier in the evolution of web services. I think this is both similar and different. Most of the devops movement focuses on the communication and collaboration between development and operations, rather than the coalescing of disciplines. It is hard to argue against more communication, and certainly within my own experience, when it came time to begin planning, building, and operating services, our view was that Operations added a seat at the table alongside PM, dev, test, design, and more.
The real challenge is that testing is far more sophisticated than anything an engineer can do solo. The reason is that engineers are focused on adding new code and making sure the new code works the way they wrote it. That’s very different than focusing on all that new code in the context of all other new code, all the new hardware, and, if relevant, all the old code as well (compatibility). In other words, as a developer is writing new code, the question is really whether it is even possible for the developer to make progress on that code while thinking about all those other things. Progress will quickly grind to a halt if one really tries to do all of that work well.
As an aside, the role of developers writing unit tests is well-established and quite successful. Historically the challenge is maintaining these over time at the same level of efficacy. In addition, going beyond unit testing to include automation, configuration, API, and more to areas that the individual developer lacks expertise proves out the challenge of trying to operate without dedicated testing.
An analogy I’ve often used is to compare software projects to movies (they share a lot of similarities). With movies you immediately think of actors, directors, screenwriters and tools like cameras, lights, sound. Those are the engineer and product manager equivalents. Put a glass of iced tea in the hand of an actor and the sunset in the background, and all of a sudden someone has to worry about the level of the tea, condensation, and ice cube volume along with the level of the sun and the number of birds on the horizon. Now of course an actor knows how that looks and so does the director. Movies are complex—they are shot out of order, reshot, and from many angles. So movie sets employ people to keep an eye on all those things—property masters, continuity, and so on. While the idea of the actor or director or camera operator trying to remember the size of ice cubes is not difficult to understand intellectually, in practice those people have a ton of other things to worry about. In fact they have so much to worry about that there’s no way they can routinely remember all those details and keep the big issues of the film front and center. Those ice cubes are device compatibility. The count of birds represents compatibility with other features. The level of the sun represents something like alternative scripts or accessibility, for example. All of these need to be considered across the whole production in a consistent and well-understood manner. There’s simply no way for each “actor” to do an adequate job on all of them.
Therefore, like other forms of engineering, testing is not an optional thing just because one can imagine software being made by pure coding alone. Testing is a natural outcome of a project of any sophistication, complexity, or evolution over time. When I do something like run Excel 3 from 1990 on Windows 8, it looks like an engineering accomplishment, but I know it is really the work of testers validating whole subsystems across a product.
When to Test
You can bring on testing too early, whether a startup or an existing/large project. When you bring on testing before you have a firm grasp from product management of what an end state might look like, then there’s no role testing can play. Testing is a relative science. Testers validate a product relative to what it is supposed to do. If what it is supposed to do is either unknown or to be determined, then the last thing you want is someone saying it isn’t doing something right. That’s a recipe for frustrating everyone. Development is told they are doing the wrong thing. Product will just claim the truth to be different. And thus the tension across the team described by Dave in his post will surface.
In fact a classic era in Microsoft’s history with testing and engineering is based on wanting to find bugs upstream so badly that the leaders at the time drove folks to test far too early and eagerly. What resulted was no less than a tsunami of bugs that overwhelmed development and the project ground to a halt. Valuable lessons were passed on about starting too early—when nothing yet works there’s no need to start testing.
While there is a desire to move testing more upstream, one must also balance this with having enough of the product done and enough knowledge of what the product should be before testing starts. Once you know that then you can’t cut corners and you have to give the testing discipline time to do their job with a product that is relatively stable.
That condition—having the product in a stable state—before starting testing is a source of tension. To many it feels like a serialization that should not be done. The way teams I’ve worked on have always talked about this is that final stages of any project are the least efficient times for the team. Essentially the whole team is working to validate code rather than change code. Velocity of product development seems to stand still. Yet that is when progress is being made because testing is gaining assurance that the product does what it is supposed to do, well.
The tools of testing, spanning unit tests, API tests, security tests, ad hoc testing, code coverage, UX automation, compatibility testing, and the automation across all of those, are how testers do their job. So much of the early stages of a project can be spent creating and managing that infrastructure, since it does not depend on the specifics of how the product will work. Grant George, the most amazing test leader I ever had the opportunity to work with on both Windows and Office, used to call this the “factory floor”. He likened this phase to building the machinery required for a manufacturing line, which would allow the team to rapidly iterate on daily builds while covering the full scope of testing the product.
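The “factory floor” can be sketched as a registry of suites that runs against every daily build. The suite names and runner below are invented for illustration; real infrastructure dispatches to lab machines, manages configurations, and aggregates results over time:

```python
from typing import Callable, Dict

# Registry of test suites: one turn of the line runs all of them.
SUITES: Dict[str, Callable[[str], bool]] = {}

def suite(name: str):
    """Register a test suite with the factory floor."""
    def register(fn: Callable[[str], bool]):
        SUITES[name] = fn
        return fn
    return register

@suite("unit")
def unit_tests(build: str) -> bool:
    return True  # placeholder: would run the unit-test binaries against `build`

@suite("api")
def api_tests(build: str) -> bool:
    return True  # placeholder: would exercise public APIs for contract changes

@suite("compat")
def compatibility_tests(build: str) -> bool:
    return True  # placeholder: old documents, old add-ins, old hardware configs

def run_daily(build: str) -> Dict[str, bool]:
    """Full coverage on every daily build, not just the code that changed."""
    return {name: fn(build) for name, fn in SUITES.items()}

results = run_daily("2014-01-15.1")
```

The machinery is the investment: once it exists, adding a build is cheap and the team gets the same full-scope verdict every day.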
While you can test too early you can also test too late. Modern engineering is not a serial process. Testers are communicating with design and product management (just like a devops process would describe) all along, for example. If you really do wait to test until the product is done, you will definitely run out of time and/or patience. One way to think of this is that testers will find things to fix—a lot of things—and you just need time to fix them.
In today’s modern era, testing doesn’t end when the product releases. The inbound telemetry from the real world is always there informing the whole team of the quality of the product.
One of the most magical times I ever experienced was the introduction of telemetry to the product development process. It was recently the anniversary of that very innovation (called “Watson”) and Kirk Glerum, one of the original inventors back in the late 1990’s, noted so on Facebook. I just wanted to share this story a little bit because of how it showed a counter-intuitive notion of how testing evolved. (See this Facebook post from Kirk). This is not meant to be a complete history.
While working on what became Office 2000 in 1998 or so, Kirk had the brilliant insight that when a program crashed, one could use the *internet* to get a snapshot of some key diagnostics and upload those to Microsoft for debugging. Previously we literally had either no data, or someone would call telephone support and fax in some random hex numbers displayed on a screen. Threading the needle with our legal department, folks like Eric LeVine worked hard to provide all the right anonymization, opt-in, and disclosure required. So rather than have a sample of crashes run on specific or known machines, Kirk’s insight allowed Microsoft to learn about literally all the crashes happening. Very quickly Windows and Office began working together, and Windows XP and Office 2000 released as the first products with this enabled.
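The mechanics were roughly: trap the crash, snapshot a small set of diagnostics, bucket it by where it failed, and upload only with consent. A hand-wavy sketch in Python (Watson itself captured native minidumps; everything here, names included, is illustrative):

```python
import hashlib
import sys
import traceback

upload_queue: list = []  # stand-in for the wire to the collection service

def crash_signature(exc_type, exc, tb) -> str:
    """Bucket crashes by failure location, never by user or machine."""
    last = traceback.extract_tb(tb)[-1]
    key = f"{exc_type.__name__}:{last.filename}:{last.name}:{last.lineno}"
    return hashlib.sha1(key.encode()).hexdigest()[:12]

def report_crash(exc_type, exc, tb, opted_in: bool = False) -> dict:
    """Snapshot key diagnostics; upload only with explicit opt-in."""
    snapshot = {
        "signature": crash_signature(exc_type, exc, tb),
        "exception": exc_type.__name__,
        "stack": traceback.format_tb(tb)[-3:],  # top frames only, no user data
    }
    if opted_in:
        upload_queue.append(snapshot)  # in reality: anonymize, then upload
    return snapshot

# Installed as the process-wide handler for unhandled exceptions:
# sys.excepthook = report_crash
```

Because every crash with the same signature lands in the same bucket, the server side sees frequencies per failure point rather than a pile of anecdotes, which is what made the next story possible.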
A defining moment was when a well-known app from a third party released a patch. A lot of people were notified by some automated method and downloaded the patch and installed it. Except the patch caused a crash in Word. We immediately saw a huge spike in crashes all happening in the same place and quickly figured out what was going on and got in touch with the ISV. The ISV was totally unaware of the potential problem and thus began an industry wide push on this kind of telemetry and using this aspect of the Windows platform. More importantly a fix was quickly released.
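The spike itself is easy to see once crashes are bucketed by signature: a bucket that was nowhere yesterday dominates today. A toy illustration with invented bucket names and counts:

```python
from collections import Counter

# Each count is the number of crash reports landing in a signature bucket.
yesterday = Counter({"word!save": 40, "word!print": 35, "excel!calc": 25})
today = Counter({"word!save": 45, "word!print": 30, "third_party_patch": 900})

def spikes(baseline: Counter, current: Counter, factor: int = 10) -> list:
    """Flag buckets whose volume jumped well past their baseline."""
    flagged = []
    for bucket, count in current.most_common():
        if count > factor * max(baseline.get(bucket, 0), 1):
            flagged.append(bucket)
    return flagged

alerts = spikes(yesterday, today)
```

A brand-new bucket at the top of the chart is precisely the pattern the team saw after the third-party patch shipped: same signature, huge volume, starting at a specific point in time.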
An early reaction was that this type of telemetry would make much of testing obsolete. We could simply have enough people running the product to find the parts that crashed or were slow (later advances in telemetry). Of course most bugs aren’t that bad, but even assuming they were, this automation of testing was a real thought.
But instead what happened was testing quickly became the best users of this telemetry data. They were using it while analyzing the code base, understanding where the code was most fragile, and thinking ways to gather more information. The same could be said for development. Believe it or not, some were concerned that development would get sloppy and introduce bugs more often knowing that if a bug was bad enough it would pop up on the telemetry reports. Instead of course development became obsessed with the telemetry and it became a routine part of their process as well.
The result was just better and higher quality software. As our industry proves time and time again, the improvements in tools allow the humans to focus on higher level work and to gain an even better understanding of the complexity that exists. Thus telemetry has become an integral part of testing much the same way that improvements in languages help developers or better UX toolkits help design.
It Takes a Village
Dave’s post on testing motivated me to write this. I’ve written posts about the role of design, product management, general management and more over the years as well. As “software eats the world” and as software continues to define the critical infrastructure of society, we’re going to need more and more specialized skills. This is a natural course of engineering.
When you think of all the specialties to build a house, it should not be surprising that software projects will need increasing specialization. We will need not just front end or back end developers, project managers, designers, and so on. We will continue to focus on having security, operations, linguistics, accessibility, and more. As software matures these will not be ephemeral specializations but disciplines all by themselves.
Tools will continue to evolve, and that will enable individuals to do more and more. Ten years ago, to build a web service your startup required people with the skills to acquire and deploy servers, storage networks, and routers. Today, you can use AWS from a laptop. But now your product has a service API and integration with a dozen other services, and one person can’t continuously integrate, test, and validate all of those while still moving the product forward.
Our profession keeps moving up the stack, but the complexity only increases, and the demands from customers for an always-improving experience continue unabated.
PS: My all time favorite book on engineering and one that shaped a lot of my own views is To Engineer Is Human by Henry Petroski. It talks about famous engineering “failures” and how engineering is all about iteration and learning. To anyone that ever released a bug, this should make sense (hint, that’s every one of us).
Spending time in Africa, one is always awestruck. The continent has so much to offer, from sands to rain forests, from apes to zebras, from Afrikaans to Zulu. More than 1.1 billion people, 53 countries and at least 2,000 different spoken languages make for amazing diversity and energy.
Yet even while spending just a little time, you quickly see the economic challenges faced by many — slums, townships, settlements and the poverty they represent are seen all too frequently. The contrast with the developed world is immense. As a visitor, you’re not particularly surprised to find difficulties in staying connected to the wireless services you’ve become reliant upon.
We hear about the mobile revolution in Africa all the time. Today, this is a revolution in voice and text on feature phones and increasingly on smartphones, phablets and small tablets. Smartphones are making a rapid rise in use, if for no other reason than they have become inexpensive and ubiquitous on the world stage, and also thanks, in part, to reselling of used phones from developed markets.
But keeping smartphones connected to the Internet is straining the spectrum in most countries, and is certainly straining the connectivity infrastructure. Africa, for the most part, will “skip over” PCs, as hundreds of millions of people connect to the Internet exclusively by phones and tablets. But there’s an acute need for improved connectivity.
The problem is that, even in the most developed areas of Africa, the deployment of strong and fast 3G and 4G coverage is lagging, and the capital that is available will flow to build out areas where there are paying customers. That means that the outlying areas, where a lot of people live, will continue to be underserved for quite some time.
Alan Knott-Craig, an experienced South African entrepreneur who is setting out to bring connectivity via Wi-Fi across his homeland, knows that Internet access is transformative to those in slums and townships. His previous company, Mxit, where he was CEO, developed a wildly popular social network for feature phones. It delivered a vast array of services, from education to community to commerce, and is in use by tens of millions.
Given the challenges of connectivity in Africa, you often find yourself searching for a Wi-Fi connection for any substantial browsing or app usage. The best case — except for a couple of markets and capital cities — is that you will get a strong 3G and occasional 4G that is highly dependent on carrier and location. It is not uncommon for folks to have smartphones that are used for voice and text when on the network, and apps that are used only when there is Wi-Fi. It’s not just a way to save money or avoid your data cap — Wi-Fi is a necessity.
“Going where the money isn’t”
One can imagine there’s a big business to be had building out the Wi-Fi hotspot infrastructure in the country. Knott-Craig recognized this as he began to explore how to bring connectivity to more people.
Having grown up in South Africa and being deeply committed to both the social and business needs of the country, Knott-Craig has also dedicated his businesses to those who are least well served and would benefit the most. Over the past 20 years, service delivery to the slums and townships of South Africa has improved immensely, reducing what once seemed like an insurmountable gap. While there is clearly a long way to go, progress is being made.
The transformation that mobile is bringing to townships is almost beyond words to those who are deeply familiar with the challenges. Talking and texting with family and friends are great and valuable. A mobile phone brings empowerment and identity (a phone number is the most reliable form of identity for many) in ways that no other service has been able to. Access to information, education and community all come from mobile phones. Mobile is a massive accelerator when it comes to closing economic divides.
All too often in business, the path is to build a business around where the money is. Knott-Craig’s deep experience in mobile communications told him that the major carriers will address connectivity in the cities and where there is already money. So, in his words, he set out to improve mobile connectivity by “going where the money isn’t.”
It was obvious to Alan that setting up Wi-Fi access would be transformative. The question was really how to go about it.
Time and again, one lesson from philanthropy is that the solutions that work and endure are the ones that enroll the local community. Services that are created by partnerships between the residents of townships, the government and business are the only way to build sustainable programs. The implication is that rolling into town with a bunch of access points and Internet access sounds like a good idea — who wouldn’t want connectivity? — but in practice would be met with resistance from all sides.
In the townships, people pay for Internet access by the minute, by the text and by the megabyte. Rolling out Wi-Fi needed to fit within this model, and not create yet another service to buy. So the first hurdle to address would be to find a way to piggyback on that existing payment infrastructure.
To do this, Knott-Craig worked with carriers in a very smart way. Carriers want their customers on the Internet, and in fact would love to offload customers to Wi-Fi when available. While they can do this in densely populated urban areas where access points can be set up, townships pose a very different environmental challenge, discussed below.
Given the carriers’ openness to offloading customers to Wi-Fi, the project devised a solution based on the latest IEEE standards for automatically signing on to available hotspots (something that we wish we would experience in practice in the U.S.). A customer of one of the major carriers, MTN for example, would initiate a connection to the Isizwe network, and from then on would automatically authenticate and connect using the mobile number and prepaid megabytes, just as though the Wi-Fi were a WWAN connection.
This “Hotspot 2.0” implementation is amazingly friendly and easy to use. It removes the huge barrier to using Wi-Fi that most experience (the dreaded sign-on page), and that in turn makes the carriers very happy. Because of the value to the carriers, Knott-Craig is working to establish this same billing relationship across carriers, so this works no matter who provides your service.
Of course this doesn’t solve the problem of where the bandwidth comes from in the first place. Since Knott-Craig is all about building bridges and enrolling support across the community, he created unique opportunities for those that already have unused bandwidth to be part of the solution.
Whether it is large corporations or the carriers themselves, Project Isizwe created a wholesale pool of bandwidth by either purchasing outright or using donated bandwidth to create capacity. The donated bandwidth provides a tax deduction benefit at the same time. Everyone wins. Interestingly, the donated bandwidth makes use of off-peak capacity, which is exactly when people in the townships want to spend time on the Internet anyway.
With demand and supply established, the next step is to enroll the government. Here again, the team’s experience in working with local officials comes into play.
As with any market around the world, you can’t just put up public-use infrastructure on public land and start to use it. The same thing is true in the townships of South Africa. In fact, one could imagine an outright rejection of providing this sort of service from a private organization, simply because it competes with the service delivery the government provides.
In addition, the cost factor is always an issue. Too many programs for townships start out free, but end up costing the government money (money they don’t have) over time. It isn’t enough to provide the capital equipment and ask the government to provide operational costs, or vice versa. Project Isizwe is set up to ensure that public free Wi-Fi networks are a sustainable model, but needed government support to do so.
With the enrollment of the carriers and community support, bringing along the government required catering to their needs, as well. One of the biggest challenges in the townships is the rough-and-tumble politics — not unlike local politics in American cities. The challenge that elected officials have is getting their voice heard. Without regular television coverage, and with sporadic or limited print coverage, the Internet has the potential to be a way for the government to reach citizens.
As part of the offering, Knott-Craig and his team devised a platform for elected officials to air their point of view through “over the top” means. Essentially, part of the Wi-Fi service provides access to a public-service “station” filled with information directly from governmental service providers. Because of the nature of the technology, these streams can be cached and provided at an ultra-low cost.
The bottom line for government is that they are in the business of providing basic services for the community. Providing Internet access only adds to the menu of services, including water, electrical, sanitation, police, fire and more. Doing so without a massive new public program of infrastructure is a huge part of what Isizwe did to win over those officials.
With all the parties enrolled, there still needs to be some technology. It should come as no surprise that setting up access points in townships poses some unique challenges: Physical security, long-haul connectivity and power need to be solved.
One of the neat things about the tech startup ecosystem in South Africa is the ability to draw on resources unique to the country. The buildup of military and security technology, particularly in Pretoria, created an ecosystem of companies and talent well-suited to the task. Given the decline of these industries, it turns out that these resources are now readily available to support new private-sector work.
First up was building out the access points themselves. Unlike a coffee shop, where you would just connect an access point to a cable modem and hide it above a ceiling tile, townships have other challenges. Most of the access points are located high up in secured infrastructure, such as water towers. These locations also have reliable power and are already monitored for security.
The access points are secured in custom-designed enclosures, and use networking equipment sourced from Silicon Valley companies Ruckus Wireless and Ubiquiti Networks, which implement hotspots around the world. This enclosure design and build was done by experienced steel-manufacturing plants in Pretoria. In addition, these enclosures provide two-way security cameras with night vision to monitor things.
This provided for a fun moment the first time someone signed on. A resident had been waiting for the Wi-Fi and was hanging out right below the tower. As soon as they signed on for the first time, back at the operations center they could see this on the dashboard, as well as the camera, and used the two-way loudspeaker to ask, “So how do you like the Wi-Fi?” which was quite a surprise to a guy just checking football scores on his mobile phone.
Along with using engineers from Pretoria to design the enclosure, Isizwe also employed former military engineers to go on-site to install the access points. This work involved two high-risk activities. First, these men needed to climb up some pretty tall structures and install equipment those structures were never designed to carry. Their skills as linemen and soldiers helped here.
More importantly, these were mostly Afrikaner white men venturing into the heart of black townships to do this work. Even though South Africa is years into an integrated and equality-based society, the old emotions are still there, just as has been seen in many other societies.
This could have been emotionally charged for these Afrikaners in particular. Not only were there no incidents, but the technicians were welcomed with open arms; the work they were doing — “We are here to bring you Wi-Fi” — turned out to make it easy to put aside any (wrongly) preconceived notions. In fact, after the job, the installers were quite emotional about how life-changing it was for them to go into the townships for the first time and do good work there.
The absence of underground cabling presents the challenge of getting these access points on the Internet in the first place. To accomplish this, each access point uses a microwave relay to connect back up to a central location, which is then connected over a landline. This is a huge advantage over most Wi-Fi on the African continent, which is generally a high-gain 3G WWAN connection that gets shared over local Wi-Fi.
The service is up and running today as a 1.0 version, in which Wi-Fi is free but limited to 250 megabytes; the billing infrastructure is just a few months away, which will enable pay-as-you-go usage of megabytes. The service will be free when there is capacity going unused.
The cost efficacy of the system is incredible, and that is passed along to individual users. Wi-Fi is provided at about 15 cents (ZAR cents) per gigabyte, which compares to more than 80 cents per megabyte for spotty 3G. That is highly affordable for the target customers.
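Taken at face value, the quoted figures imply an enormous gap. A quick back-of-the-envelope sketch, using the article's rough numbers normalized to a common unit (these are approximations, not official tariffs):

```python
# Normalize the quoted prices to a common unit (ZAR cents per gigabyte).
WIFI_CENTS_PER_GB = 15        # Isizwe Wi-Fi: ~15 ZAR cents per gigabyte
MOBILE_CENTS_PER_MB = 80      # spotty 3G: ~80 ZAR cents per megabyte

mobile_cents_per_gb = MOBILE_CENTS_PER_MB * 1024  # scale MB price up to a GB
ratio = mobile_cents_per_gb / WIFI_CENTS_PER_GB

print(f"3G works out to {mobile_cents_per_gb:,} cents/GB")
print(f"Wi-Fi is roughly {ratio:,.0f}x cheaper per gigabyte")
```

Even allowing for rough numbers, the difference is orders of magnitude, which is the whole point for a customer base living on a few dollars a day.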
Because of the physical limits of Wi-Fi, the system is not set up to allow mass streaming of football, which is in high demand. Mechanisms are in place to create what amounts to over-the-top broadcast by using fixed locations within the community.
The most popular services being accessed are short videos on YouTube, music, news, employment information and educational services like Khan Academy and Wikipedia. The generation growing up in the townships is even more committed to education, so it is no surprise to see such a focus. Another important set of services being accessed are those for faith and religion, particularly Christian gospel content.
The numbers are incredible and growing rapidly as Isizwe scales to even more townships. In the middle of the afternoon (when people are at school and working), we pulled up the dashboard and saw some stats:
- 609 people were online right at that moment.
- 4,455 people had already used the service that day.
- 304 people had already reached their daily limit that day.
- More than 70,000 unique users since the system went online with 1.0 in November 2013.
- 208GB transferred since going online.
- Almost all of the mobile traffic is Android, along with the newest Asha phones from Nokia. Recycled iPhones from the developed market also make a showing.
In terms of physical infrastructure required, it takes about 200 access points to cover a densely populated area of one million residents. This allows about 200,000 simultaneous users overall, with about 50-500 users per access point, depending on usage and congestion.
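The coverage figures above can be sanity-checked with simple arithmetic. These are the article's round numbers, not engineering specs; actual per-access-point load varies with usage and congestion, as noted:

```python
# Rough coverage arithmetic from the figures quoted in the article.
residents = 1_000_000        # densely populated area served
access_points = 200          # access points needed to cover it
simultaneous_users = 200_000 # aggregate concurrent users supported

residents_per_ap = residents // access_points      # coverage per AP
peak_concurrency = simultaneous_users / residents  # share of residents online at once

print(f"{residents_per_ap:,} residents covered per access point")
print(f"Implied peak concurrency: {peak_concurrency:.0%} of residents")
```

In other words, each access point covers thousands of residents, on the assumption that only a fraction of them are online at any given moment.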
We talk all the time about the transformational nature of mobile connectivity, and many in the U.S. are deeply committed to getting people connected all around the world. Project Isizwe is an incredible example of the local innovation required to build products and services to deliver on those desires.
The public/private/community partnerships that are the hallmark of Isizwe will scale to many townships across South Africa. Building on this base, there are many exciting information-based services that can be provided. Things are just getting started.
–Steven Sinofsky (@stevesi)
This post originally appeared on Re/code.
Note from the author: For the past 10 years or so, I’ve been spending time informally in Africa, where I have a chance to visit with government officials, non-government organizations, and residents of towns, settlements and cities. In the next post, I’ll talk about free Wi-Fi in South African slums.
Spending time in the developing world, one can always marvel at the resourcefulness of people living in often extraordinarily difficult conditions. The challenges of living in many parts of the world certainly cause one to reflect on what we see from day to day. Here in the U.S., we’re all familiar with the transformative nature of mobile phones in our lives. And for those in extreme poverty, the mobile phone has been equally, if not more, transformative.
One particular challenge faced by many in Africa, especially those living in fairly extreme poverty (less than $500 a year in purchase power), is dealing with money and buying things, and how the mobile phone is transforming those needs.
One could fill many posts with what it is like to live at such low levels of income, but suffice it to say that even when you are fortunate enough to ground your perspective in firsthand experience, it is still not possible to really internalize the challenges.
Imagine living in a place where your small structure, like the one pictured below, is under constant threat of being demolished, and you run the risk of being relocated even farther away from work and family. Imagine a place where you don’t have the means of contacting the police, even if they might show up. Imagine a place where it takes a brick-sized amount of cash to buy a new cooking pot.
Representative home, or “struct,” in an informal settlement in the suburbs of Harare, Zimbabwe. Steven Sinofsky
These and untold more challenges define day-to-day life in slums, settlements and townships in developing countries in Africa, where the introduction of mobile phones has transformed a vast array of daily living tasks. Take the structure seen above, for example. It is a settlement in a vacant lot next to an office park in Harare, Zimbabwe. About 120 of these “structs” are occupied by about 600 people. For the most part, residents sell what they can make or cook; a small number possess some set of trade skills. Below, you can see a stand run out of one struct that sells eggs farmed on-site.
Mobile phones and extreme poverty
Through a Shona-speaking interpreter, I had a chance to be part of a group (representing the government) hearing about life in the settlement. One question I got to ask was how many had mobile phones. Keeping in mind that the per capita spending power of these folks would be formally labeled “extreme poverty,” the answer blew me away: nearly every adult had a mobile phone. When I asked for a show of hands, some proudly explained that they simply hadn’t brought theirs to the meeting.
Right away, you see the importance of a mobile phone when you consider the cost of the phone as a percentage of income. It is hard for us to imagine the trade-offs phone owners here are making, but in earning-power equivalence, a phone in this village is roughly what a car and its operation costs us — and we already have food, shelter and clothing in ample supply.
Communicating with family is a key function, because families are often separated by distance, as members go looking for work or to find a better place to live.
Phones are also used to call the police. Before mobile phones, there was simply no way to get the police to your home or settlement, since there are no landlines or nearby telephones. Keep in mind that most residents in these areas have no formal identification or address, and the settlements are often unofficial and unrecognized by authorities.
Phones are also used as an early warning system when authorities might be on the way to evict people, or perhaps to perform some other type of inspection. The legalities of settlements and how they work are a separate topic altogether, so I won’t go into that here.
Phones are used to keep track of what goods are selling where, or what goods might be needed. A network of people helps each other to maximize income from goods based on where and when they can be sold, because they are needed. Think of this as extremely local information that was previously unavailable. This is crucial, because many goods have limited shelf life and, frankly, many people produce the same goods.
A specific example for some people was the use of phones to monitor the supply chain for beer and alcohol. One set of people specialized in redistribution of beverages, and needed to keep tabs on events and unique needs in the community.
A favorite example of mine is “queue efficiency.” One of the many challenging aspects of life in extreme poverty is waiting — waiting on line for water, for transportation, for public services of all kinds. Phones play an important role in bringing some level of optimization to this process by sharing information on the size of queues and the quality of service available. We might think of this as Waze for lines, implemented over SMS friends and family networks.
Some of these uses seem straightforward, or simply cultural adaptations of what anyone with a phone would do. The fact that Africa skipped landlines is a fascinating statement about technological evolution — just as, for the most part, the continent will skip PCs in favor of smartphones, and will likely skip private ownership of transportation for shared-economy solutions (the history of Lyft is one that begins with shared rides in Zimbabwe).
Skipping over traditional banking
An old-economy service that Africa is likely to skip is personal banking. In the U.S., our tech focus tends to be on China and the role that mobile payments play there with WeChat or AliPay, or more broadly on the payments innovation going on at PayPal, Square and, of course, bitcoin. In Africa, almost no one has a bank account, and definitely no credit cards. But as we saw, everyone has a mobile phone.
The most famous mobile banking solution in Africa is M-Pesa (M for mobile, pesa is Swahili for money), which started in Kenya. People there use their phones to store cash and pay for goods. Similar solutions exist in many countries. Even in a place as remote and difficult as Somaliland, you can see these at work, as I did recently.
Madagascar is an island country with incredible beauty and an abundance of things not seen across Africa, including natural resources, farmable land and water, not to mention lemurs. Yet the country is incredibly poor, with a countrywide per capita GDP of $400, which puts it in the bottom 10 countries of the world. On average, people live at the extreme poverty level of $1.25 per day in purchase power. One city I visited in Madagascar is home to a UN Millennium Development Goals project, which is programmatically working to improve these extremely impoverished areas.
Yet technology is making a huge difference in lives there. Madagascar has three main mobile phone carriers. These are all prepay, and penetration is extremely high, even in the most remote areas. The country is wired with mostly 2G connectivity; there is some coverage at 3G, but it is highly variable. The only common use for 3G is for Internet access using external USB modems connected to PCs (usually netbooks) and shared.
Most of the phones in use are feature phones, often hand-me-downs from the developed market. I’ve even seen a few iPhone 3s. One person complained about being unable to update iOS because he has no high-speed connection for such a download (showing that people are connected to the world, just not at a high download speed). A developed-market smartphone is pretty much a feature phone here, and the cost of another network upgrade means that one is far off. People are anxious for more connectivity, but along with cost, the current state of government will make progress a bit slower than citizens would like.
A huge problem in this type of environment is safely dealing with money. Madagascar’s currency trades at $1 U.S. to 2,500 Madagascar ariary. When you live off of 3,000 or so a day, you’re not going to carry around three bills, so very quickly you end up with a brick of 100 Ar notes. What to do with all those? Where can you put them? How do you keep them safe? How can you even keep them dry in a rain forest?
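To make the “brick of notes” concrete, a quick sketch with the figures just quoted (the exchange rate and daily budget are the article's approximations):

```python
# Why cash becomes a "brick" in Madagascar, per the article's figures.
AR_PER_USD = 2500        # ~2,500 ariary to 1 US dollar
DAILY_BUDGET_AR = 3000   # living on ~3,000 Ar per day
NOTE_AR = 100            # a commonly carried small denomination

notes_per_day = DAILY_BUDGET_AR // NOTE_AR   # notes needed for one day
notes_per_week = notes_per_day * 7           # a week's cash in notes
daily_usd = DAILY_BUDGET_AR / AR_PER_USD     # daily budget at the exchange rate

print(f"A day's budget is {notes_per_day} hundred-ariary notes (~${daily_usd:.2f})")
print(f"A week's cash is {notes_per_week} notes")
```

Carrying a week of living expenses means carrying hundreds of small notes, which is exactly the problem stored value on a phone solves.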
Well, along comes mobile “banking.” As easy as you can recharge your phone, you can add money to your stored money account. You walk up to a kiosk — there are thousands and thousands of them — and in a series of text messages with the shopkeeper, you give her money and your phone gains stored value.
With iOS and Android fragmentation, how could these apps work, given what must be finite dev resources? The implementation is all through an old-school standard called SIM Apps, or the SIM Application Toolkit.
This set of APIs and capabilities allows the installation of apps that reside on your SIM. These apps are simple menu-driven apps that look like WAP sites. They are secure and controlled by carriers. Using this framework, mobile banking has reached unprecedented usage and importance in developing markets, particularly in Africa.
The scenario for usage is quite simple. You charge your phone with money, just as you would with minutes. When you want to buy something, you bring up the SMS app (pictured below, on an iPhone 3 in Malagasy) and initiate a transaction. The merchant gives you a code, which you enter along with the merchant’s identifying code. You then type in an amount, which is verified against your current balance. The merchant then receives a notification, and the transaction is complete. The whole system is safe from theft because of the connection to your mobile number, two-factor authentication and so on. There is no carrier dependency, so you can easily send/receive to any carrier, though the carrier has your balance. This isn’t an interest-earning savings account, but rather a transaction or debit account (of course, in the U.S., few of us earn interest on demand deposits these days, anyway).
Screen showing “My Account” in Malagasy, displayed on a recycled iPhone 3 (note the absence of a cellular connection). Steven Sinofsky
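The flow just described can be sketched as a simple state change on a stored-value account. This is a hypothetical, simplified model only; the real system runs as a SIM Toolkit app on the handset and settles through the carrier, and every name and number here is invented for illustration:

```python
# Hypothetical model of the stored-value debit flow described above.
class StoredValueAccount:
    def __init__(self, msisdn: str, balance_ar: int):
        self.msisdn = msisdn          # subscriber's mobile number
        self.balance_ar = balance_ar  # stored value, in ariary

    def pay(self, merchant_code: str, txn_code: str, amount_ar: int) -> str:
        """Debit the account if funds allow; return a status message."""
        if amount_ar <= 0:
            return "REJECTED: invalid amount"
        if amount_ar > self.balance_ar:
            return "REJECTED: insufficient balance"
        self.balance_ar -= amount_ar
        # In the real flow, the carrier would now notify the merchant.
        return f"OK: {amount_ar} Ar to merchant {merchant_code} (ref {txn_code})"

acct = StoredValueAccount("+261340000000", balance_ar=10_000)
print(acct.pay("M123", "T789", 2_500))   # debit succeeds
print(acct.pay("M123", "T790", 50_000))  # rejected: exceeds balance
print(acct.balance_ar)                   # remaining stored value
```

The key property is that the balance check happens before the merchant is notified, which is why the merchant never needs to make change or handle cash at all.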
You can also give and receive money from individuals. This is extraordinarily important, given the distances that can separate family members, including the main wage-earner in a family. The idea of sending money around to family members is an incredibly important part of the cash economy of low-income people. This market, called “remittance,” is estimated to be over $400 billion in developing markets alone.
Life is easier and safer for those using mobile banking this way. You can count on your money being safe. You don’t need to carry around cash and worry about loss, theft, or water and weather destroying physical currency. You can easily deal with small and exact amounts. As a merchant, you don’t have to make change. It is just better in every dimension.
The carriers profit by taking a percentage of the transaction, which is high in the same way that check-cashing in the U.S. is high (and credit cards, for that matter). The fee is about two percent, which I am not sure will be sustainable, given the competition between carriers. I also think it will be fascinating to see how developed-market companies like Western Union evolve to support mobile payments, as they provide integration points to the developed-market financial systems. It is not uncommon to see a Western Union representative also offering phone recharge and mobile banking services.
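To put the economics in perspective, here is what a roughly two percent fee amounts to on a single small purchase and, hypothetically, on the scale of the remittance market cited earlier (both the fee rate and the market size are the article's approximations):

```python
# Rough economics of a ~2% transaction fee, using the article's figures.
FEE_RATE = 0.02                       # ~2% per transaction

txn_ar = 2_500                        # a small purchase, in ariary
fee_ar = txn_ar * FEE_RATE            # carrier's cut of one purchase

remittance_usd = 400e9                # ~$400B developing-market remittances
fee_pool_usd = remittance_usd * FEE_RATE  # hypothetical fees if it all moved at 2%

print(f"Fee on a {txn_ar:,} Ar purchase: {fee_ar:.0f} Ar")
print(f"2% of a $400B remittance flow: ${fee_pool_usd / 1e9:.0f}B")
```

Even a small percentage of flows this large is a substantial business, which is presumably why carrier competition may eventually compress the rate.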
In our environment, we would see this as a convenience, like a debit card. But in Africa, it is far more secure and convenient, because you only need your phone, which you will carry with you almost all the time, just as we do in the U.S.
I think the most interesting point of note in this solution is how it essentially skips over banking. If we think about our own lives, and especially those of the generation entering the workforce now, banking is most decidedly archaic. The whole idea of opening an account and dealing with a layer of indirection that offers very little by way of useful services just feels ripe for disruption. Our installed base of infrastructure makes this very difficult, but in the developing world that challenge doesn’t exist. It isn’t likely that most people will graduate to full-fledged banking, any more than we expect people to graduate from a mobile phone to a full-fledged PC.
It also isn’t hard to imagine this type of mobile banking taking off first in the cash-based part of the developed world, where today people pay fees to cash checks and buy money orders, absent a bank account. The many check-cashing storefronts located near lower-income areas serve much the same needs. One example is remittance. Many immigrants in the U.S. are the source of remittance funds going to developing markets. Seattle, for example, has one of the largest Somali populations outside of East Africa, and they routinely send funds back to their families. Today, this is a difficult process, and could be made a lot easier with a global and mobile solution.
Merchant using a credit-card reader attempting to get a stronger signal to complete the transaction in Anosibe, Madagascar. Steven Sinofsky
I look forward to solutions like this for our own lives here in the U.S. We see some of this in service-by-service cases. For example, using Lyft is completely cashless. I can use PayPal at merchants like Home Depot. Obviously, we all see Square and other payment mechanisms. Each of these shares a common connection to established banking and plastic cards. That’s where I think disruption awaits. Will this be bitcoin alone? Will someone, even a carrier, develop and scale a simple stored-value mechanism like that being used by billions of people already?
For myself, and no doubt for many reading this, this transformation is old hat. I’ve seen these changes over the past decade across many countries in Africa and elsewhere. Africa isn’t a single marketplace by any stretch. What is working in Madagascar, Kenya, Somaliland and other countries might not work elsewhere, or might not work for all segments of a given economy. Stay tuned for more observations from this trip.
It is always worth a reminder how some changes can bring about a massive difference in quality of life.
–Steven Sinofsky @stevesi
P.S.: What happens when you’re forced to use high-tech 3G connectivity to do a Visa card transaction? The merchant (pictured above) goes outside in a rain forest and aims for a stronger connection for the card reader. Yikes!