Learning by Shipping

products, development, management…


Disrupting Payments, Africa Style

Note from the author: For the past 10 years or so, I’ve been spending time informally in Africa, where I have a chance to visit with government officials, non-government organizations, and residents of towns, settlements and cities. In the next post, I’ll talk about free Wi-Fi in South African slums. This post originally appeared on Re/code.

Spending time in the developing world, one can always marvel at the resourcefulness of people living in often extraordinarily difficult conditions. The challenges of living in many parts of the world certainly cause one to reflect on what we see from day to day. Here in the U.S., we’re all familiar with the transformative nature of mobile phones in our lives. And for those in extreme poverty, the mobile phone has been equally, if not more, transformative.

One particular challenge faced by many in Africa, especially those living in fairly extreme poverty (less than $500 a year in purchasing power), is dealing with money and buying things, and the mobile phone is transforming how those needs are met.

One could fill many posts with what it is like to live at such low levels of income, but suffice it to say that even when you are fortunate enough to ground your perspective in firsthand experience, it is still not possible to really internalize the challenges.

Slum life

Imagine living in a place where your small structure, like the one pictured below, is under constant threat of being demolished, and you run the risk of being relocated even farther away from work and family. Imagine a place where you don’t have the means of contacting the police, even if they might show up. Imagine a place where it takes a brick-sized amount of cash to buy a new cooking pot.

Representative home, or “struct,” in an informal settlement in the suburbs of Harare, Zimbabwe. Steven Sinofsky

These and countless other challenges define day-to-day life in slums, settlements and townships in developing countries in Africa, where the introduction of mobile phones has transformed a vast array of daily living tasks. Take the structure seen above, for example. It is part of a settlement in a vacant lot next to an office park in Harare, Zimbabwe. About 120 of these “structs” are occupied by about 600 people. For the most part, residents sell what they can make or cook; a small number possess some set of trade skills. Below, you can see a stand run out of one struct that sells eggs farmed on-site.

Shop window in front of a home where fresh eggs are sold in an informal settlement in the suburbs of Harare, Zimbabwe. Steven Sinofsky.

Mobile phones and extreme poverty

Through a Shona-speaking interpreter, I had a chance to be part of a group (representing the government) hearing about life in the settlement. One question I got to ask was how many had mobile phones. Keeping in mind that the per capita spending power of these folks would be formally labeled “extreme poverty,” the answer blew me away. Nearly every adult had a mobile phone. When I asked for a show of hands, those without a phone in hand proudly explained that they simply hadn’t brought it to the meeting.

Group of representatives of an informal settlement in the suburbs of Harare, Zimbabwe, showing off their mobile phones (all pictured owned a phone). Steven Sinofsky

Right away, you see the importance of a mobile phone when you consider the cost of the phone as a percentage of income. It is hard for us to imagine the trade-offs phone owners here are making, but in earning-power equivalence, a phone in this village is roughly what a car and its operation costs us — and we already have food, shelter and clothing in ample supply.

Communicating with family is a key function, because families are often separated by distance, as members go looking for work or to find a better place to live.

Phones are also used to call the police. Before mobile phones, there was simply no way to get the police to your home or settlement, since there are no landlines or nearby telephones. Keep in mind that most residents in these areas have no formal identification or address, and the settlements are often unofficial and unrecognized by authorities.

Phones are also used as an early warning system for authorities that might be on the way to evict folks, or perhaps perform some other type of inspection. The legalities of settlements and how that works are a separate topic altogether, but I won’t go into that here.

Phones are used to keep track of what goods are selling where, or what goods might be needed. A network of people help each other maximize income from goods based on where and when they are needed and can be sold. Think of this as extremely local information that was previously unavailable. This is crucial, because many goods have a limited shelf life and, frankly, many people produce the same goods.

A specific example for some people was the use of phones to monitor the supply chain for beer and alcohol. One set of people specialized in redistribution of beverages, and needed to keep tabs on events and unique needs in the community.

A favorite example of mine is “queue efficiency.” One of the many challenging aspects of life in extreme poverty is waiting — waiting in line for water, for transportation, for public services of all kinds. Phones play an important role in bringing some level of optimization to this process by sharing information on the size of queues and the quality of service available. We might think of this as Waze for lines, implemented over SMS across friends-and-family networks.

Some of these uses seem straightforward, or simply cultural adaptations of what anyone with a phone would do. The fact that Africa skipped landlines is a fascinating statement about technological evolution — just as, for the most part, the continent will skip PCs in favor of smartphones, and will likely skip private ownership of transportation for shared-economy solutions (the history of Lyft is one that begins with shared rides in Zimbabwe).

Skipping over traditional banking

An old-economy service that Africa is likely to skip is personal banking. In the U.S., our tech focus tends to be on China and the role that mobile payments play there with WeChat or AliPay, or more broadly on the innovation going on in payments from PayPal, Square and, of course, Bitcoin. In Africa, almost no one has a bank account, and definitely no credit cards. But as we saw, everyone has a mobile phone.

The most famous mobile banking solution in Africa is M-Pesa (M for mobile, pesa is Swahili for money), which started in Kenya. People there use their phones to store cash and pay for goods. Similar solutions exist in many countries. Even in a place as remote and difficult as Somaliland, you can see these at work, as I did recently.

Madagascar is an island country with incredible beauty and an abundance of things not seen across Africa, including natural resources, farmable land and water, not to mention lemurs. Yet the country is incredibly poor, with a countrywide per capita GDP of $400, which puts it in the bottom 10 countries of the world. On average, people live at the extreme poverty level of $1.25 per day in purchasing power. One town I visited in Madagascar is a designated UN Millennium Development Goals location, part of a program working to improve these extremely impoverished areas.

Sign signifying the entrance to a town in rural Madagascar designated a Millennium Development Goals location, one of about a dozen worldwide. Steven Sinofsky

Yet technology is making a huge difference in lives there. Madagascar has three main mobile phone carriers. These are all prepay, and penetration is extremely high, even in the most remote areas. The country is wired with mostly 2G connectivity; there is some coverage at 3G, but it is highly variable. The only common use for 3G is for Internet access using external USB modems connected to PCs (usually netbooks) and shared.

Most of the phones in use are feature phones, often hand-me-downs from the developed market. I’ve even seen a few iPhone 3s. One person complained about being unable to update iOS because he has no high-speed connection for such a download (showing that people are connected to the world, just not at a high download speed). A developed-market smartphone is pretty much a feature phone here, and the cost of another network upgrade means that one is far off. People are anxious for more connectivity, but along with cost, the current state of government will make progress a bit slower than citizens would like.

A huge problem in this type of environment is safely dealing with money. Madagascar’s currency trades at $1 U.S. to 2,500 Madagascar ariary. When you live off of 3,000 or so ariary a day, you’re not going to carry around three bills, so very quickly you end up with a brick of 100 Ar notes. What to do with all those? Where can you put them? How do you keep them safe? How can you even keep them dry in a rain forest?

Well, along comes mobile “banking.” Just as easily as you can recharge your phone, you can add money to your stored-value account. You walk up to a kiosk — there are thousands and thousands of them — and in a series of text messages with the shopkeeper, you give her money and your phone gains stored value.

Home and storefront selling recharge minutes for pay-per-use mobile phones; also a station for mobile-phone banking in rural Madagascar. Steven Sinofsky

Given iOS and Android fragmentation, how would apps like these work with what must be finite dev resources? The implementation is all through an old-school standard called SIM Apps, or the SIM Application Toolkit.

This set of APIs and capabilities allows the installation of apps that reside on your SIM. These are simple menu-driven apps that look like WAP sites. They are secure and controlled by the carriers. Using this framework, mobile banking has reached unprecedented usage and importance in developing markets, particularly in Africa.

The scenario for usage is quite simple. You charge your phone with money, just as you would with minutes. When you want to buy something, you bring up the SIM app (pictured below, on an iPhone 3 in Malagasy) and initiate a transaction. The merchant gives you a code, which you enter along with the merchant’s identifying code. You then type in an amount, which is verified against your current balance. The merchant then receives a notification, and the transaction is complete. The whole system is safe from theft because of the connection to your mobile number, two-factor authentication and so on. There is no carrier dependency, so you can easily send/receive to any carrier, though the carrier has your balance. This isn’t an interest-earning savings account, but rather a transaction or debit account (of course, in the U.S., few of us earn interest on demand deposits these days, anyway).

Screen showing “My Account” in Malagasy, displayed on a recycled iPhone 3 (note the absence of a cellular connection). Steven Sinofsky
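To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of stored-value ledger a carrier might keep behind such a system. This is not the actual M-Pesa or carrier implementation; the account store, PIN check, and merchant codes below are hypothetical stand-ins for what the SIM app and the carrier’s back end do over SMS/USSD.

```python
# Conceptual sketch of a carrier-side stored-value ledger (hypothetical).
# Real systems run on carrier infrastructure, talk to the SIM app over
# SMS/USSD, and add auditing, fees, and much stronger authentication.

class MobileMoneyLedger:
    def __init__(self):
        self.accounts = {}   # phone number -> balance in ariary
        self.pins = {}       # phone number -> PIN (illustrative only)
        self.merchants = {}  # merchant code -> merchant's phone number

    def register(self, msisdn, pin, balance=0):
        self.accounts[msisdn] = balance
        self.pins[msisdn] = pin

    def register_merchant(self, code, msisdn):
        self.merchants[code] = msisdn

    def cash_in(self, msisdn, amount):
        """Kiosk agent takes physical cash and credits stored value."""
        self.accounts[msisdn] += amount

    def pay_merchant(self, msisdn, pin, merchant_code, amount):
        """Customer-initiated payment, as typed into the SIM app menu."""
        if self.pins.get(msisdn) != pin:
            return "PIN rejected"
        if merchant_code not in self.merchants:
            return "Unknown merchant code"
        if self.accounts.get(msisdn, 0) < amount:
            return "Insufficient balance"
        self.accounts[msisdn] -= amount
        merchant = self.merchants[merchant_code]
        self.accounts[merchant] = self.accounts.get(merchant, 0) + amount
        return f"Paid {amount} Ar to merchant {merchant_code}; merchant notified"

    def send_to_person(self, sender, pin, recipient, amount):
        """Person-to-person transfer, e.g. a remittance to family."""
        if self.pins.get(sender) != pin or self.accounts.get(sender, 0) < amount:
            return "Transfer failed"
        self.accounts[sender] -= amount
        self.accounts[recipient] = self.accounts.get(recipient, 0) + amount
        return f"Sent {amount} Ar to {recipient}"


ledger = MobileMoneyLedger()
ledger.register("+261340000001", pin="1234")             # customer
ledger.register("+261340000002", pin="5678")              # merchant's own phone
ledger.register_merchant("88421", "+261340000002")
ledger.cash_in("+261340000001", 50_000)                   # cash handed over at a kiosk
print(ledger.pay_merchant("+261340000001", "1234", "88421", 12_500))
print(ledger.send_to_person("+261340000001", "1234", "+261340000002", 5_000))
```

The point of the sketch is that the money never lives on the handset; the balance sits with the carrier, and the phone only needs SMS-level connectivity to move it.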

You can also send money to and receive money from individuals. This is extraordinarily important, given how often families, or even a family’s main wage earner, are separated by distance. The idea of sending money around to family members is an incredibly important part of the cash economy of low-income people. This market, called “remittance,” is estimated to be over $400 billion in developing markets alone.

Life is easier and safer for those using mobile banking this way. You can count on your money being safe. You don’t need to carry around cash and worry about loss, theft, or water and weather destroying physical currency. You can easily deal with small and exact amounts. As a merchant, you don’t have to make change. It is just better in every dimension.

The carriers profit by taking a percentage of the transaction, which is high in the same way that check-cashing in the U.S. is high (and credit cards, for that matter). The fee is about two percent, which I am not sure will be sustainable, given the competition between carriers. I also think it will be fascinating to see how developed-market companies like Western Union evolve to support mobile payments, as they provide integration points to the developed-market financial systems. It is not uncommon to see a Western Union representative also offering phone recharge and mobile banking services.

In our environment, we would see this as a convenience, like a debit card. But in Africa, it is far more secure and convenient, because you only need your phone, which you will carry with you almost all the time, just as we do in the U.S.

I think the most interesting point of note in this solution is how it essentially skips over banking. If we think about our own lives, and especially those of the generation entering the workforce now, banking is most decidedly archaic. The whole idea of opening an account and dealing with a layer of indirection that offers very little by way of useful services feels ripe for disruption. Our installed base of infrastructure makes this very difficult, but in the developing world that challenge doesn’t exist. It isn’t likely that most people will graduate to full-fledged banking, just as we don’t expect people to graduate from a mobile phone to a full-fledged PC.

It also isn’t hard to imagine this type of mobile banking taking off first in the cash-based part of the developed world, where today people pay fees to cash checks and buy money orders, absent a bank account. The large numbers of check-cashing storefronts located near lower-income areas share much in common in some ways. One example is remittance. Many immigrants in the U.S. are the source for remittance funds going to developing markets. Seattle, for example, has one of the largest populations of Somalians outside of Northern Africa, and they routinely send funds back to their families. Today, this is a difficult process, and could be made a lot easier with a global and mobile solution.

Looking forward

Merchant using a credit-card reader attempting to get a stronger signal to complete the transaction in Anosibe, Madagascar. Steven Sinofsky

I look forward to solutions like this for our own lives here in the U.S. We see some of this in service-by-service cases. For example, using Lyft is completely cashless. I can use PayPal at merchants like Home Depot. Obviously, we all see Square and other payment mechanisms. Each of these shares a common connection to established banking and plastic cards. That’s where I think disruption awaits. Will this be bitcoin alone? Will someone, even a carrier, develop and scale a simple stored-value mechanism like that being used by billions of people already?

For myself, and no doubt for many reading this, this transformation is old hat. I’ve seen these changes over the past decade across many countries in Africa and elsewhere. Africa isn’t a single marketplace by any stretch. What is working in Madagascar, Kenya, Somaliland and others might not work elsewhere, or might not work for all segments of a given economy. Stay tuned for more observations from this trip.

It is always worth a reminder how some changes can bring about a massive difference in quality of life.

–Steven Sinofsky @stevesi

P.S.: What happens when you’re forced to use high-tech 3G connectivity to do a Visa card transaction? The merchant (pictured above) goes outside in a rain forest and aims for a stronger connection for the card reader. Yikes!

Written by Steven Sinofsky

July 25, 2014 at 9:00 am

Posted in posts, recode


Apps: Shrapnel v. Bloatware

Much is being said lately about the trend to unbundle capabilities for the web and apps. Is this a new trend, a pendulum, or another stage in the evolution of providing software solutions for work and life? Are we going to learn what some would say are lessons from a past generation of software and avoid bloatware? Or perhaps we will relive some of the experiences from that era, and our phones and tablets will be littered with app shrapnel as our PCs once were?

My own personal experience in product choices is marked by a near-constant tension over bundle v. unbundle, not just from a product perspective but also from a business perspective. Whether on development tools, Office, Windows, or internet services, I’ve experienced the unbundle/bundle dynamic. I’ve bundled, unbundled, and had the “internal” debates about what to do when, and what went well or not. If you’re interested in an early debate about bundling Office, you can see the Harvard case study on the choice of “best of breed v. suite” in Finding the Suite Spot ($).

The Pendulum

This HBR article does a good job of bringing forth some of the history and describing the challenges of positioning unbundle/bundle as both a binary choice and a pendulum or Krebs-like cycle of resource conservation. Marc Andreessen does a great job in these two tweetstorms of detailing the bundle/unbundle cycle on the internet and the computer history we both grew up with (http://tweetstorm.io/user/pmarca/481554165454209027 and http://tweetstorm.io/user/pmarca/481739410895941632).

There’s one maxim in business that drives so much of the back-and-forth or pendulum behavior we tend to see, which is that most strategies have a complementary approach (vertical v. horizontal, direct v. indirect, integrate v. distinct, first v. third-party, product org v. discipline org, quantitative v. qualitative performance evaluation, hack v. plan, etc.). So in business, depending on your roots or your history, and most importantly the context you find yourself in, you are going down a path of one or more of these attributes.

Over time your competition tends to pick you apart the other way or ways. Equally likely, your ecosystem builds up around you innovating in parts where you are weaker, gaining strength, and showing off new approaches to product or market. Certainly, if you’re a new company entering an established market you will not just copy the approach of the incumbent which is why new products seem to be at the other end of one of these spectrums.

Then as you get in trouble you look around and try to figure out what to do. There’s a good chance the organization will double down on the approach that has always worked—after all as Christensen says, that is the natural energy force in an organization. That happens until a big moment of change (a major competitive success, leadership change, etc.) and then you change approaches. More often than not, your choice is to do the thing you weren’t doing before. If you’re around in the workforce long enough, you start to see things as a series of these evolutionary steps.

This is business, context is everything. There’s never a right answer in absolute, only a right answer given the context.

The moments of change, of breaking the cycle or swinging back the other way, are the moments that unleash significant improvements in the work, the product, or the workplace.

History and Customers

As consumers we adopt new technologies without realizing or thinking about whether they are bundled or unbundled, and our choices and selections for one or other are highly dependent on the context at the time. There are times when bundling is essential to the distribution of technology, just as there are times when unbundling brings with it more choice, flexibility, and opportunity. Obviously the same holds for businesses buying products, only businesses have purchasing power that can make bundled things appear unbundled or vice versa.
It is worth considering a few tech examples:

  • Autos began with minimal electronics, followed by optional electronics, then increasingly elaborate integrated electronics and many now think that smartphones will be the best device for in-car electronics/apps (for example the BMW i series).
  • LinkedIn began as a network for professionals to list their credentials and connect to others professionally. Recently it has bundled more and more content-based functionality.
  • Mobile telephony used to have distinct local, long distance, text and then data plans, which have now been bundled into all-you-can-consume multi-device plans.
  • Word processing used to have optional spell-checking and mail merge which was then bundled into single products which were then subsequently bundled into suites and also now bundle cloud services. Similarly, financial spreadsheets, data analysis, and charting were previously distinct efforts that are now bundled. Today we are seeing new tools that have different feature sets and approaches, representing some unbundling and some bundling.
  • Operating systems were once highly hardware dependent, then abstracted from hardware but with optional graphical interfaces, followed by a period of bundling of OS+Graphics, followed by a bundling of OS, graphical interface, and hardware in a single package. Today with services we’re seeing different combinations of bundling and unbundling innovations.
  • Microprocessors have been on a fairly continuous bundling effort relative to peripherals, graphics, and even storage.
  • Modern smartphones are a wonder of bundling, first at the hardware level (SoC packaging) followed by hardware+software, then through all the devices that were previously distinct (GPS, still camera, video camera, pedometer, game controller, USB storage, and more).

There are countless examples depending on what level in the full consumer offering is being considered (i.e. product, price, place, promotion). Considering just these examples, one can easily see the positives and potential pitfalls of any of these.

Yet in looking at these examples and others, one can make a few observations about how customers and teams approach bundling choices for products and services:

  • People like distinct products when exploring new capabilities and product teams like building single purpose tools early in product lifecycle, out of both focus and necessity/resources.
  • People like it when their favorite product adds features that previously required a separate product, especially when their favorite product is growing in usage. Product teams love to add more features to existing products when those features map to obvious needs.
  • People have some threshold for when an integrated product turns into an overwhelming product, but that “line in the sand” is impossible to define a priori and depends a great deal on how products are evolving around your product. Mobile phone plans today are great, but many are very unhappy with Cable TV bundles.
  • Competition can come from a bundle that you were previously not considering **or** competition can come from unbundling the product you make.
  • Product managers often reach a point where they can no longer solve the problem of adding new features while seeing them get used and also getting credit for innovating.
  • Macro factors can radically alter your own views of what could/should be bundled. If your business does not have a software component and your competitors add one, attempting to bundle that functionality could be quite challenging (technically, organizationally). If the platform you target (autos, spectrum, screens) undergoes a major change in capability then so too does your view of bundling or unbundling.

These examples and observations make one thing perfectly clear: whether to bundle or unbundle features depends a great deal on context and customer scenarios and so the choices require a great deal of product management thought. The path to bundle or unbundle is not linear, predictable, or reactionary but a genuine opportunity and need for solid product thought.

Strategic questions

On the one hand, considering whether to bundle or unbundle innovations might just be “do what we can that is differentiated”. In practice there are some key strategy questions that come up time and time again when talking to product folks.

  • Discoverability. The most critical strategic question to bundle or unbundle is whether the new work will be discoverable by intended customers. In a new product, the early waves of innovative features often make sense bundled. Over time, just responding to customers means you’ll be bundling in new capabilities (whether organic or competitive).
  • Usability. When faced with a new feature or business approach, the usability of this approach is a key factor in your choice. If you’re unable to develop a user experience that permits successful execution of the desired outcome, then it doesn’t really matter whether you’re bundled or unbundled.
  • Depth. When making the choice to bundle or unbundle you have to think through how much you plan on innovating in the spaces. If you’re setting yourself up for a long-term head to head on depth versus believing you are “checking a box” you have different choices. Incumbents often view the best path to fending off a disruptive unbundled feature as adding a checkbox to compete (to avoid the trauma of a major change in approach). Marketing often has an urgency that drives a need for market response and that can be represented as an unbundled “add-on that no one cares about” or “a checkbox that can be communicated” — that might sound cynical until you’ve been through a sales cycle losing out to a “feature as a product”.
  • Business economics. If you charge directly for your product or service (or freemium), then there will be a strong incentive to bundle more and more into your existing offering. Sales will generally prefer to add more features at the current price. Marketing will potentially advocate for a new pricing level to increase revenue. If you choose to unbundle and develop a new product, side-by-side or companion, then you’ll need to consider what your attach rate might be. A bundled solution essentially sees a 100% attach rate to your existing product whereas a whole new product brings with it the need to generate demand and subsequent purchase or usage. An advertising-based service will see increased surface area for an unbundled solution but will also dilute usage. A web-based service allows for cross-linking and easy connection between two different properties, but apps will require separate downloads and minimal cross-app connections.
  • Usage economics. It might sound strange separating out business from usage, since especially in a SaaS world they are the same thing. In practice, if your revenue is tied to usage directly (page views, transactions, etc.) then your design needs to factor in how you measure and drive usage of the features, bundled or unbundled. If your economics are not tied directly to usage, you will have more strategic latitude to consider how your offering plays out bundled v. unbundled (assuming your boss lets you keep working on something no one uses).

Product management approach

Should you add that new feature or capability to your existing product or should you create a new destination (app, site)? Should you break out a feature because unbundling is the new normal or will that just break everything? Those are the core questions any PM faces as a product grows.

One tip: do not claim that one approach (bundle v. unbundle) is good for users and the other approach is only good for business. In other words, bundle v. unbundle cannot be distilled down to pro-user or anti-user, or more importantly marketing v. product. The best product people know that context is everything and that positioning a choice as A against B is counterproductive—everyone is on the same team and has the same broad goals. As difficult as it is, working through these questions with as much dialog as possible and as much “walk in the other’s shoes” as possible is absolutely critical.

There are many natural forces at play that will drive one way or another.

For example, most organic product development will tend to expand the existing product as it builds on the infrastructure and momentum already present.

Most new acquisitions will tend towards acquiring unbundled solutions, aka competition, though in the enterprise space one can expect significant calls to integrate even disparate technologies.

Part of being a good PM is to step back and go through a thoughtful process about whether to bundle or unbundle new capabilities. The following are some design choices.

  • Advertising new features in proportion to expected usage. There’s a general view to advertise a new feature in the UX in an excessively prominent manner. You want people to know you fixed or added a feature. At the unbundle extreme this means a whole new app and a trend to shrapnel. In the bundle extreme this means a big UX to drive you to a new thing. The most critical choice is really making sure that you are designing the access to the feature to be in relative proportion to how much you expect your customer base to use something.
  • Plan for “n+1” in all experience choices. As you make the choice to bundle or unbundle, know ahead of time that this will not be the first place you make this choice. If you’re adding a new app today then chances are that will become the way you solve things down the road. If you’re adding new UX access to a feature then plan on more depth in that feature or more peer features. Is the choice you are making scalable for the growth in creativity and innovation you expect?
  • Integrate or connect in one direction, not both. If you bundle or unbundle there will be a relentless push to promote the connection between elements of the product or service. Demo flows, top-level UX, even deep linking between apps. At the extreme, if you bundle n items, it is not unrealistic to go down a path where each of the n items is connected to every other one, in both directions. This is incredibly common in line-of-business apps/modules.
  • Bundle and innovate, don’t bundle and deprecate. If you make a choice to bundle a capability into your mainline effort, do not bundle it to make it go away. Bundle it and think of it as just as important as other things you do. This dynamic appears when your competition does something you don’t like so you hope to have a checkbox and make the competitor go away. This never happens.
  • Designing for good enough leaves you open to disruption. Closely related to deprecating while bundling is the idea that a “tie is a win”. Once you’re established you often think that you can continue to win against a competitor with an integrated implementation that is “good enough”. That might work in short-term marketing but over time, if the area is important you’ll lose.
  • Expect hardware to be relentlessly bundled. If you connect to hardware in any way, then you’ll be faced with a relentless march towards bundling. Hardware naturally bundles because of the economics of manufacturing, the surplus of transistors, and the need to reduce power and surface area. Never bet on hardware or peripherals staying unbundled for long.
  • Expanding software depth is easy, but breadth often adds more value. Engineers and product managers love to round out features, add more depth, more customization, and more incremental improvements. This is where the customer feedback loop is really clear. In terms of growing the business and attracting new customers, expansion in breadth is almost always a better approach so long as you “bundle” features that seem natural. Over-indexing on depth, particularly early in a product life cycle, leaves you open to a competitor that does what you do plus other valuable things, no matter how much you think your unbundled approach is cleaner and simpler.
  • Defined categories do not remain defined for long. In enterprise products the “category” or “magic quadrant” is everything. In practice, these very definitions are always in transition. Be on the lookout for being redefined by an action of bundling or unbundling.
  • Assume sales and marketing will prefer new capability to be bundled, or maybe not. Finally, to highlight how contextual this is, there is no default as to how outbound efforts will prefer you approach the problem. It is not necessarily the opposite of what you are doing or the same as a competitor. For example, if your sales force economics are such that they are strongly connected to a single product and sales motion, it will be clear that bundling will be preferred no matter what a competitor is up to. At an extreme, even an unbundled feature will be used as a closer or a discount, particularly in the enterprise. Conversely, even if your competition is highly bundled, your own outbound efforts might be structured such that unbundling is a competitive and sales win. You just never know. Most importantly, the first reaction isn’t the way to base your approach—spend the time to engage and debate.

To bundle or unbundle is a complex question that goes beyond the simplistic view that minimal design makes for good products. Take the time to engage broadly across the team and organization, and to project forward where you want to be, as these are some of the most critical design choices you will make.

–Steven Sinofsky @stevesi

 

Written by Steven Sinofsky

June 28, 2014 at 3:00 pm

Posted in posts


Tanium Magic

Lightning doesn’t often strike twice, but in the case of the father and son team of David and Orion Hindawi, founders of Tanium, Inc., that’s exactly what has happened. Tanium is a prime example of a modern enterprise software company—solving the new generation of today’s problems using skills and experience gained from being successful founders in the previous generation.

Forming the company

David Hindawi, a PhD in Operations Research from UC Berkeley, is an entrepreneur who led the creation of several successful companies through the earliest days of the PC era. His early efforts focused on getting PCs connected to the “net” and keeping them running smoothly.

In 1997, David teamed up with his son Orion, then an undergraduate at UC Berkeley, to form BigFix. BigFix solved the problem of communicating with all the end-points (PCs, servers, virtual machines, and more) on enterprise networks to gather configuration data and deploy product updates. BigFix was a remarkable product for the time, routinely scaling to 100,000 end-points. In 2010, IBM acquired BigFix and integrated it into the Tivoli Software portfolio, marking a successful exit.

Some might have been content to rest on their collective laurels having invented the technology, built a company, and scaled a business to the most elite of enterprise success stories. Instead, David, Orion and the key architects of BigFix had an even bigger idea.

Forming Tanium came about as the team reflected on the shortcomings of that generation of products. “We recognized that enterprises needed endpoint control that was much faster than they could get with existing tools, and challenged ourselves to leapfrog the state of the art, including BigFix, where basic management queries could take days,” Orion recounted. “We knew that nothing short of a 10,000 times speed improvement over the state of the art at the time would solve the problem, and we needed to fundamentally change the paradigm of systems management and end-point security to accomplish that. We are lucky to have one of the few engineering teams in enterprise management who are smart and ambitious enough to do that.”

The team, mostly members of the original BigFix engineering group and all experts with years of experience in large enterprise management, worked in their Berkeley, CA offices for almost two years before the first customers saw the early results of their new product. When seeing the product in action, it was clear to early customers that the team had in fact built a better mousetrap. Tanium was born.

Meeting Tanium @ a16z

When Orion first came to Andreessen Horowitz to meet us and introduce Tanium, we had no idea what a surprise we were going to see. Collectively we are old hands at systems management and security: many folks at a16z share the experience of having built Opsware, and that, combined with my own experience at Microsoft, makes for an informed, and perhaps tough, audience.

Orion popped open his laptop, clicked a bookmark and navigated to Tanium’s web-based “console”. At the top of the screen, we saw a single edit control like you’d see for a search engine. He started typing in natural language questions such as “show computers where CPU > 75%” and “show computers with a process named WINWORD.EXE”. Within seconds, just like using search, a list of computers scrolled by as though it was just an existing spreadsheet or report. At this point we reached the only reasonable conclusion—Orion was showing us a simulation of the product they hoped to build.

After all, we were all quite familiar with the state of the art for this type of telemetry (BigFix in particular represented the state of the art) and we knew that what we were seeing was just not possible.

But, the demonstration was not a simulation or edited screen capture. In fact, Tanium was running on a full scale deployment of thousands of end-points. This wasn’t even a demo scenario, but a live, production deployment—the magic of Tanium. As we learned more about Tanium and how it easily scales to 500,000 end-points (not theoretically, but in practice) and the breadth of capabilities, we were more than intrigued. We were determined to do what we could to invest in David, Orion, and team.

Redefining State of the Art

In enterprises, one team is generally responsible for securing end-points, while another is responsible for managing them (systems management). Typically, each team uses its own tools, and each is independently struggling to keep pace with modern network security threats and the scale of modern networks.

Today’s IT Pros on both security and management teams know the types of information they need from their network. With current tools these questions require careful planning, significant infrastructure, and a fine balance between what IT needs to know and the cost to the end user who is working on the computers being queried – if you get it wrong, you can cause slow logons and sluggish performance at inconvenient times. However, to effectively manage and secure networks and provide assurance of compliance with government and industry regulations, IT Pros absolutely require information such as hardware configuration, software inventory, network usage, patch and update status, and more. In addition, today’s socially engineered security risks are often seemingly simple combinations of running programs, files or attachments on the system, and a few other clues. An IT Pro walking up to a PC or Mac could easily obtain all of this information, but for all practical purposes it is impossible to gather that data from the thousands of end-points they are responsible for with any level of ease or timeliness.

Getting that data at scale is typically hard and slow because almost every systems management tool uses a classic hub (servers) and spoke (end-points) architecture. IT Pros deploy multiple servers running on network segments with high-end databases and significant networking hardware, combined with fairly elaborate end-point runtimes. Even when this state-of-the-art deployment is carefully tuned, the best case at very large scales can be 3 days to “compute” the answer to critical operational questions, assuming you knew ahead of time you were going to ask those questions. By that time the information is out of date, and the whole problem you were thinking about has probably changed. As a result most IT Pros know that, best case, the data is approximate, and worst case just worthless. For mission-critical problems, such as compliance with HIPAA (healthcare) or PCI (electronic payment) regulations, this is more than just inconvenient for IT; it can cause a painful failure with board-level visibility.

The state of the art for security is all about building stronger and taller walls between the enterprise network and the internet. We’re familiar with these approaches across the basics of firewalls, more sophisticated security appliances and adaptive architectures, and of course the typical security suites that run on end-points. Unfortunately, the bad guys are wise to that game, and modern threats are created anticipating that these protections are in place—in many cases, the bad guys actually “QA” their attacks against the systems enterprises use before they release them. In addition, today’s malware is targeted at particular organizations, and is often put in place by a series of seemingly benign or undetectable actions. Malware, a bot, or a backdoor makes its way onto the network leaving behind a series of benign clues—a running process, a changed file, a memory signature, or a specific network packet. It is only taken together that a pattern emerges. It is only after the fact, or with an IOC (indicator of compromise) in hand, that IT Pros can potentially track down end-points that have been compromised. Unfortunately, IT is literally swamped by IOCs to investigate, and there are no effective tools that support this wide range of questions; even if there were, the state of the art would give answers in days, long after the damage was done.

Even with these challenges, both of these state of the art approaches have their place in a modern network. It would be irresponsible to run a network without basic asset management or network firewalls and end-point protection such as anti-virus.  Unfortunately, for the vast majority of both threats and systems management, the needs of IT Pros are far more dynamic and complex than existing systems can provide.  This is the opportunity where Tanium adds unique value to the tools of the modern IT and Security professional.

At a16z, we love the opportunity to partner with enterprise companies that are either working to radically improve the way a given IT need is met with software or transforming the IT landscape by re-creating or re-defining the traditional categories with unique software. Tanium is magical because it is transformative across both of those measures.

Innovating Tanium

In practice, the Tanium team accomplished nothing short of a complete rethinking of how IT Pros manage, secure, and maintain the end-points in their network—every node on the network can now be interrogated, managed, updated, and secured, instantly from a browser. Literally, you can ask almost anything of an end-point from basics such as configuration, patch status, software inventory compliance, performance, reliability measures, telemetry, network activity, files, and more (basically anything you can ask of a running system) and get answers back in seconds. Not only can you ask questions, but you can take actions as well—distribute and install updates, shut down processes or executables, remove or quarantine files, and so on. All of this happens in seconds, across your entire network of end-points, across LAN segments and the WAN, from branch offices to headquarters to the data center.

Orion walked us through the magic of Tanium. It became clear very quickly that David, Orion and team have invented a completely new way to think about managing and securing a network of computers. The magic of Tanium is built out of four innovative technology pillars:

  1. Runtime. The Tanium runtime builds on the end-point management lessons of BigFix. The runtime serves as the platform for asking the end-point questions in the scripting language of your choice (VBScript, PowerShell, WMI, Python, Unix shell, and most any other language), packaging up the answers and getting them to a single server/VM that coordinates the activities. The runtime also provides actions, allowing you to make changes across your entire network, instantly. The end-point runtime is a couple of megabytes, takes almost no CPU or RAM, and incurs nearly imperceptible network usage.
  2. LP2P Networking: End-points secured by Tanium do not drive up costly WAN traffic but instead communicate between end-points on the local area network. Expensive WAN load is vastly reduced because rather than all end-points trying to reach a single data center across the WAN, answers and actions are coordinated across an incredibly efficient linear peer-to-peer (LP2P) architecture—an innovative hybrid of mesh and peer-to-peer concepts designed and validated for the enterprise (a toy illustration of the idea follows this list). LP2P is self-healing and architected for fault tolerance, transient end-points, and global WAN segments connected in a typical manner.
  3. Natural Language. The interface to Tanium is through a simple text box where you can use natural language to ask questions of the entire set of end-points. Just like using web search, each question gives you suggestions for follow up questions, refinements, and ways to improve your queries. You use natural language questions to generate tables, charts, time series, and other representations of your near real-time network status—instantly.
  4. Security. The entire Tanium platform was of course architected from the ground up to be secure enough for the largest enterprise and federal networks – Tanium affords IT Pros incredible power and flexibility in managing and securing end-points, and they recognize the need to ensure that power stays in the right hands. As a result, all traffic is FIPS level secured, actions are controlled and validated by signed certificates, and administrators have fine-grained control over the types of queries and actions permitted by different users within IT.
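Of the four pillars, the LP2P idea (item 2 above) is the least familiar, so here is a toy illustration in Python of how a linear chain of peers can answer a question without every end-point crossing the WAN to reach the server. It is only a sketch of the concept as described above, not Tanium’s implementation; fault tolerance, security, and the real wire protocol are all omitted, and the telemetry is faked.

```python
# Toy illustration of a linear peer-to-peer (LP2P) query.
# Each end-point evaluates the question locally, appends its answer to the
# partial result handed to it by its neighbor, and passes the result along.
# Only the last end-point in the chain reports back to the server, so the
# WAN sees roughly one message per LAN segment instead of one per machine.

import random

class Endpoint:
    def __init__(self, name):
        self.name = name
        self.cpu = random.randint(5, 100)   # pretend telemetry

    def answer(self, question):
        """Return this machine's name if it matches the question's predicate."""
        return self.name if question(self) else None

def run_chain(endpoints, question):
    partial = []                            # results accumulated along the chain
    for ep in endpoints:                    # neighbor-to-neighbor hand-off
        result = ep.answer(question)
        if result is not None:
            partial.append(result)
    return partial                          # the last peer forwards this upstream

lan_segment = [Endpoint(f"pc-{i:04d}") for i in range(1000)]
hot = run_chain(lan_segment, lambda ep: ep.cpu > 75)
print(f"{len(hot)} machines over 75% CPU, e.g.:", hot[:5])
```

In the same spirit, actions ride the chain as well as questions, which is what keeps both query and remediation traffic largely off the WAN.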

If you’re running existing state of the art tools for managing and securing your end-points, you have a fixed set of diagnostic questions that you routinely ask and then store the answers in a database for later analysis. Even if it’s a simple question like what version of OS software your computers are running, it will take a few days or more to get answers. If you have a crisis requiring new information, you likely push out an emergency logon script or dreaded background process to add a new question to the list of slowly collected answers, and days later you know the approximate answer.

As a result of the innovations above, Tanium completely upends the thinking about how this should work. By analogy, if you think about the current state of the art as a printed set of classic encyclopedias then Tanium is like having the entire internet at your disposal through a search engine. Rather than a set of fixed questions and answers, you use Tanium to explore your end-points. When new security threats arise you can immediately explore your risk by using any telemetry to diagnose your risk and then using any mechanism to take corrective actions—instantly.

A top-of-mind example for all of us is the outbreak of Heartbleed. As soon as your operations center received notice of this vulnerability, there was one simple question: “What variants and versions of OpenSSL are we running across all servers and VMs?” Almost no management and inventory system would have this readily available. Many would have first relied on what was believed to be the “standard” images, but later would find out that isn’t enough. With Tanium, you just ask a question in natural language, and within seconds you can have any level of detail required on the servers and VMs running OpenSSL. You can then shut those servers down, deploy updates, or monitor actions—instantly.
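As a rough illustration of what an end-point-side answer to that question could look like, here is a small script that reports the locally installed OpenSSL version by shelling out to the binary. It is a generic sketch of the “ask every machine with a local script” idea, not Tanium content; a real sensor would also need to inspect linked library versions, not just the command-line tool.

```python
# Hypothetical end-point "sensor": report the locally installed OpenSSL version.
# A management runtime would run this on every machine and aggregate the
# returned strings so operators can spot vulnerable versions immediately.

import shutil
import subprocess

def openssl_version():
    if shutil.which("openssl") is None:
        return "openssl not installed"
    result = subprocess.run(["openssl", "version"],
                            capture_output=True, text=True, check=False)
    return result.stdout.strip() or result.stderr.strip()

if __name__ == "__main__":
    # e.g. "OpenSSL 1.0.1f 6 Jan 2014" would have been vulnerable to Heartbleed
    print(openssl_version())
```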

Identifying and securing end-points for compliance with regulations, software licensing, or corporate policy is equally simple. When talking to Orion about Tanium, I searched my own experience for what I thought was a trick question. I wanted to know “how many end-points have an attached USB memory stick and have written to it recently” (a potential information leak, compliance issue, or malware vector all in one simple and common operation). Once again Tanium’s magic delivered an answer from a natural language query in just a few seconds for thousands of computers.

In addition to all of this, Tanium is also a true platform. IT Pros can utilize mature REST, SOAP, and syslog APIs to connect the results of Tanium queries to their favorite big data destination and develop time series models of their end-points, and mine the data for patterns. Because the Tanium runtime has such a minimal impact it is possible to collect thousands of independent data points continuously from hundreds of thousands of end-points, feeding the predictive analytics and big data systems that enterprises are building today with extremely valuable data. This type of analysis allows for finding points in time when the network changed, identifying malware, bots, and other exploits that we all know escape traditional firewalls and anti-virus. Using the platform, IT can also create tailored dashboards and custom actions that enable monitoring and guarantee compliance of end-points with standards.
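As a sketch of what such an integration could look like, the snippet below polls a management server’s results API on a schedule and appends each snapshot to a local CSV that a big data pipeline could consume. The URL, token, and JSON shape are invented placeholders for illustration only; they are not Tanium’s actual REST API.

```python
# Hypothetical polling loop: pull query results from a management server's
# REST API and append them to a CSV for downstream time-series analysis.
# The endpoint, token, and response fields below are invented placeholders.

import csv
import time
import requests  # third-party: pip install requests

API_URL = "https://mgmt.example.com/api/questions/42/results"   # placeholder
TOKEN = "REPLACE_ME"                                            # placeholder

def poll_once(writer):
    resp = requests.get(API_URL,
                        headers={"Authorization": f"Bearer {TOKEN}"},
                        timeout=30)
    resp.raise_for_status()
    for row in resp.json().get("rows", []):        # assumed payload shape
        writer.writerow([time.time(), row.get("endpoint"), row.get("value")])

if __name__ == "__main__":
    with open("endpoint_timeseries.csv", "a", newline="") as f:
        out = csv.writer(f)
        while True:
            poll_once(out)
            time.sleep(300)   # sample every five minutes
```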

Tanium and a16z

I could go on and on about the magic of Tanium that David, Orion, and the amazing team created. In fact when we talk about Tanium we describe it as an entrepreneur trifecta. First, David and Orion are experienced and successful entrepreneurs. Second, Tanium is a product that builds on innovative and inventive technology that could only come about from a team with years of experience and a depth of understanding of the enterprise. And third, Tanium is already a successful and profitable company with dozens of customers in massive, mission-critical and global deployments.

With this incredible story, Andreessen Horowitz could not be more excited to be leading an investment in Tanium. I’m personally super excited to be joining the Tanium Board where I will work closely with David, Orion, and the team.

–Steven Sinofsky (@stevesi, steven@a16z.com)

This post is also on a16z.

Written by Steven Sinofsky

June 22, 2014 at 3:30 pm

Posted in a16z, posts


#codecon and reflecting on generational changes

Attending the <code/conference> (#codecon) this past week turned out to be a remarkable experience, even more remarkable than I expected. The generational shift in our computing experience from desktop to mobile, from software to services, and from hundreds of millions to trillions was on display through the interviews with a dozen industry CEOs.

This post will explore this generational change through the speakers at the conference. Before diving into the details of each session, we will explore this change and the implicit context.

Generational Change

Reflecting on the interviews and demonstrations, as well as the “lobby chatter,” is a key part of learning by attending. I’ve always viewed this conference and its predecessor, the D Conference, as the most relevant conferences for learning about the strategic drivers of our industry. You can read my report from last year here. Writing these reports is part of the learning for me, and reading the old reports lets me checkpoint my own learning and journey.

If you move beyond the insights from any single speaker or the announcements at the event (all were widely reported by re/code and others and, new this year, by re/code partner CNBC), one theme just keeps coming back to me—the vast difference in tone and content between the incumbents and the challengers, between legacy and disruptors, between the old guard and the new, or whatever labels you want to use. We talk all the time about the transition of our industry from one era to another (and don’t forget the term “post-PC” was first used in this very forum), and the conference provides a microcosm of these transitions taking place, expressed through the leaders living them.

There is a vast difference in tone and content between the incumbents and the challengers, between legacy and disruptors, between the old guard and the new.

The transition is in full force. This does not mean by definition that all existing companies will lose and only new companies will win. Quite the contrary: the fact that these changes are now visible to all makes the creation, purchase, and use of new products and technologies evidence of the transition, as well as an opportunity to create new plans and adjust. The mobile internet is causing the transition but also making the communication of that very transition much more transparent, which is unlike the progressive unveiling that characterized the mainframe-to-mini-to-PC transition.

Are the new companies doing enough to transition customers as well as their own business to new paradigms? How much should new companies bridge from existing solutions or should they expect a wholesale change from customers? Is there an understanding of the existing complexities of the real world?

Are the incumbents changing enough to build new products and business that reflect the new generation? Are they trying too much to “thread the needle” and incrementally step to a new context by maintaining status quo or “repotting the plants”? Is there an understanding of the complexities of existing solutions?

The conference puts this “generational” change out there for us to experience through the always challenging, yet consistently even-handed, questioning (interrogation) from Walt and Kara (and a great addition this year were the interviews featuring seasoned members of the re/code team).

Context (is everything in business)

The attendees (in the audience) are people who have worked in the industry, oftentimes since the earliest days. The interviewers are professionals who cover the industry and its subjects deeply. It is hard to imagine creating a more informed or tougher environment. That’s the challenge.

Yet, industry leaders both line up and are obliged to appear (for the most part). Because the environment is so challenging and widely covered, leaders gain a great deal of credibility by standing up to the challenge.

Leaders gain a great deal of credibility by standing up to the challenge of appearing.

The conference takes place the same time every year, whether a company has something to announce or not. For example, last year attendees were frustrated because Apple’s Tim Cook did not announce anything. This is an unfair way to look at the “performance” of a participant. This conference has an amazing audience, but it is also an “uncontrolled” environment so announcing a new product is not without risk and not without huge upside (Disclaimer: I’ve been part of several product announcements/interviews at this forum). Apple, along with many companies, has a tried and true approach to announcing new things as we will see next week.

What is most interesting about the forum, however, is that the format and depth of the dialog allows for a strong “how did we get here” or “how are you wrestling with challenges” discussion. This is not a one-way speech or a forum where talking points go unchallenged. That is in a sense what separates the men from the boys so to speak.

When speakers prepare for the interview, especially at larger companies, folks in communications prepare talking points, responses to tough questions, anecdotes, and even jokes. This is a forum where this can take on “Presidential debate” levels of preparation. The challenge is that everyone in the audience and certainly the interviewers are all well-versed in these techniques. For the presenters, all of that over-preparation cycles through your mind during the tough questions and unpredictable questions from the audience. This is a tough environment.

When speakers choose not to say anything of depth or the answer is clearly a prepared message, you can almost feel the energy in the room drain. There is a collective sense of a missed opportunity to learn more among attendees.

When speakers choose not to say anything of depth or the answer is clearly a prepared message, you can almost feel the energy in the room drain.

Too many people focus on CEOs evading questions about the next big deal or the features/availability of the next product. I don’t think that is the way to evaluate speakers, and in almost all cases the interviewers ask a question like this once, often make a joke, and move on.

Reporters have an obligation to ask or they look like they are not doing their job. Speakers have an obligation to acknowledge such a forward-looking, material statement and move on. There’s a big caveat to this, and it is where I wanted to share my own learning, my own journey. I believe when it comes to challenges and strategy, CEOs specifically and companies in general can and should do more to inform the dialog. The way I would say this is that if there is something out there that everyone knows to be a fact, and the speaker knows to be a fact, and everyone knows everyone knows, then talk about it. By not talking about it, the conventional wisdom becomes the reality, and the conventional wisdom is often wrong and always incomplete.

I have personally experienced this in the transition from Windows Vista to Windows 7. “Everyone” knew something was up with Vista, and certainly Microsoft knew, but no one was saying anything. The result was a strong desire to know the next features of Windows, which was the only thing that folks knew to ask about. It served no one to talk about the features of the next product, but it also served no one to pretend everything was going well. I missed a big opportunity and looked foolish in a very early interview I did with a (now) re/code reporter. I followed the tried and true approach of the incumbent, which is to say nothing, redirect, and so on. You can see several thousand words that say nothing here, from six years ago this week.

It turns out that in a world of global instant communication, transparency, open source, platform shifts, and so on, the story about the products, the strategy, and more can come to define efforts more than folks think. This isn’t always the case because business is a social science, but by and large what distinguishes the way the PC era evolved from the way the mobile era is evolving is a vast difference in the flow of information and the pace of change. Corporate communications and the leadership approach need to adapt to this era. Recognizing this, one thing we did during the Windows transition above was to start blogging about the “why” of the product long before the release, which to this day remains a unique level of transparency (and was also a huge challenge).

The generational change taking place now is challenging large companies more than ever before. Technology companies are seeing their investments and assets have faster lifecycles and shorter lifespans. They should address head on the challenges of these timescales and commitments. Business approaches are also being challenged and everyone knows this on all sides, but not talking about the challenges means everyone just assumes how things will evolve, and collectively everyone can’t be right.

These changes are also pushing and pulling customers more than ever before. As individual consumers we invest a little bit in a new phone or tablet and maybe a gadget and services here and there. Some of these pan out and some don’t. But large companies looking to define themselves in a new era of mobility, bring-your-own devices, cross-organizational boundaries, and cloud need much more information and a clearer understanding of what has transpired and why. Discussing the rationale behind choices provides much more context for customers making bets and allows a much more open dialog to compare and contrast choices. This goes way beyond features and gets to strategy, learning from the past, and direction for the future; it is a fine line.

It is too easy to fall back to wanting to know the next products and features. Companies still have secrets; that’s what defines a company relative to competition. As Jeff Bezos commented recently, “sure, I’d like to know Apple’s product roadmap”. To interpret the need for openness as a public roadmap or feature list misses the point. What was missing from the incumbent perspective was a view of what has transpired over the past five years and, with that understanding, a view of how investments are moving forward.

The real question is whether incumbents are going to change enough, fast enough, and in a sense disrupt themselves, doing so with a clear understanding of what has transpired in the past few years. Or will they take on all the characteristics of the “Innovator’s Dilemma” and operate hoping that incremental change will dampen the effect of big transitions, allowing them to weather the storm and return to normal.

To see how significant this transition is, I think it is best to start with Mary Meeker’s always informative “Internet Trends 2014”. The complete report is available and so is the video. There were many interesting data points: the rise of China, the conversion from feature phones to smartphones, the move of OS platforms to Silicon Valley companies, messaging, and more. One slide that sums up the transition along with the challenge showed the growth of tablets relative to PCs with the title “Tablet Units = Growing Faster Than PCs Ever Did…+52%, 2013”.

 

Tablet growth relative to PCs

Because business is a social science and because there are many ways to look at data, no doubt some will challenge this data or conclusions. In fact, IDC just revised their tablet numbers down. Some feel that Tablets are reverting to their role as “media consumption” or lightweight computing devices. That I’m writing this on a tablet (yes one with a keyboard, but one with LTE, 10 hour battery life, weighs nothing, B5 size, etc.) provides my own anecdote about where things are heading.

This growth will change. It might sputter and then increase. There’s no doubt tablets are overtaking notebooks in terms of unit volumes. They are definitely not taking over all notebook workloads, but dismissing them on that basis would be like saying the growth of email was irrelevant to word processing; it ignores the growth of the pie and the shift of total volume to the new technology. As Steve Jobs said on stage at this conference, the software will catch up. This is happening. Despite what people might think, large numbers of attendees had their tablets at the conference and they were being “productive”.

Just as mainframe companies attempted to point out the shortcomings of PCs as servers, pointing out the shortcomings of tablets is not helpful, especially as tablets continue to gain more and more features of laptops while maintaining their unique characteristics (lightweight, fanless, quality over time, connectivity, reliability, security, apps, etc.)

One more slide from Mary sets the context that dominated the divergence between incumbents and disruptors: the market size of each generation of computing, “Each New Computing Cycle = >10x Installed Base Than Previous Cycle.”

Each New Computing Cycle = >10x Installed Base Than Previous Cycle

“More than just phones” might lump too many devices into the last data point for some wishing to make the point that things are not changing so much. Let’s be clear—many mainframes still run the most critical systems of the world (I was in a briefing with an insurance company last week that wanted to hire me because I happened to know PL/1!). Today’s laptops have massive utility that isn’t being replaced overnight and probably won’t ever be “replaced”. That’s the Innovator’s Dilemma argument that does not equip either product developers or customers to innovate and prosper during these cycle changes.

Once you get beyond the specifics of what is coming next, which no one should be obliged to answer at #codecon, the dialog that gets to the heart of what is going on is worth having. What was missed? What was learned? What was tried? What did you think of what was tried? What is being done differently? How are big technology changes being thought of in isolation? Relative to existing investments? What point of view does a company have? What led the new company to be formed? What is different about investments being made? How do customers cope with change?

These questions and how they were answered made for quite a contrast between incumbents and disruptors. If you’re interested in per-speaker reports or the full interviews for any of them, please see the re/code site. My intent is not to summarize the sessions but to reflect on the sessions through this lens of forward leaning versus backward looking.

Incumbents

The incumbents (Microsoft, Intel, Comcast and Wal-Mart) had a common theme: each faces significant challenges to the technology platforms and business models that made them wildly successful. At the same time, each in my view missed an opportunity to say how they intend to change. In a sense, each asked us to leap to a future with them in leadership, but without the detail to support that assertion.

It is incredibly important for an industry to have large and healthy players operating at scale. In many ways, the startups we love serve as disruptive R&D for larger players, and a healthy M&A pipeline is critical for all, as evidenced by some of the recent mega-deals and dozens of smaller ones, all aimed at the long-term evolution of core products.

It is incredibly important for an industry to have large and healthy players operating at scale

Yet, many investments, particularly in hardware and manufacturing, require billions of dollars that can only be made by large companies. Incremental improvements we come to take for granted such as doubling of capacity, improved batteries, thinner devices, more pixels, massive data centers, and so on can only come from huge scale and well-functioning large companies.

At the same time, one look at Meeker’s slide above and one can’t help but notice that these large companies come to define the cycles she represents. Is that a convenient way we recall changes or were strategic changes part of a causal relationship? Don’t be so quick to judge. There’s a significant amount of subtlety and nuance.

Let’s look at some of the specific speakers.

Microsoft’s Satya Nadella and Intel’s Brian Krzanich both sit in the hot seat (the red chairs that define the #codecon set) with the same question, so it is worth considering them together: what happened with respect to mobile and tablets. Satya talked about wishing to have taken the bet to build hardware all the way, sooner. Intel talked about the challenges in manufacturing at 14nm, not having the right product relative to power, and the need to do better at 10nm. Mossberg kicked off Brian’s interview with the observation that he’s using a laptop half as frequently and using ARM-based products a great deal. In a moment of candor, Brian talked about how many at Intel wished that the march toward mobility would have stopped at Ultrabooks, and that Intel lacked the right parts to do tablets, in part because many at Intel did not think tablets would break out beyond consumption. I felt Brian’s comments showed a good acknowledgement of why things didn’t happen. At the same time, collectively, a view of the strategy in the near to medium term didn’t come through. In eerily similar approaches, both Intel and Microsoft looked to a future beyond phones and tablets, to an internet of things or more personal computing, as where they will see greater success. I left both of these sessions feeling there was more to be told about where things are right now and what will happen over the next year or two (again, not the features but the strategy—Microsoft and tablets small and large, Intel and mobile or even Chrome and Android). It isn’t that nothing was said; it is that everyone knows where things are today, the speakers know everyone knows, and the upside to keeping things close to the vest seems minimal and equates to “go with the disruptors” at some level.

One must admit that the challenge faced by Wal-Mart’s Doug McMillon is even greater in this audience which has few Walmart regulars (note, I shop at Walmart). In particular, many in this crowd are on the leading edge of home delivery and uber-for-everything and so visiting stores is already a thing of the past. That said, so much of what was said about online commerce felt too much like an expected incumbent response. For example, the idea that the lines are blurring between ecommerce and retail or that it is really hard to measure ecommerce if a person looked up an item on their mobile device before coming to the store (I wondered if there really was a metric that tried to give credit internally to the ecommerce division if someone did that). Ultimately, Doug said “physical still matters and digital makes it more valuable”. Maybe, except the last morning of the show I ordered a wall mount for the Sonos speaker we received at the show (yes elite gifts are part of the elite show) and it beat me home. Yes that is a luxury good and more, but to put forward the notion that ecommerce is still an add-on to physical stores seemed tricky for me.

Comcast’s Brian Roberts not only faces the challenge of the cord cutters represented in the audience, or the prospect of dealing with questions on net neutrality, but also the fact that a lot of people have a lot of less than positive feelings about the products and services Comcast offers. When you look at Comcast as an incumbent and consider things like Netflix, Hulu, cord cutting, and more as the disruptive force, it is very tough to see the dialog Brian led as satisfying. My feeling was that there is a strong desire to keep everything as it is, while putting forward a notion that things are improving. There was a long demonstration of the X1 cable box. Yet in the same session, when questioned about net neutrality, Brian suggested that Netflix paying is simply a cost of doing business, much as he has to pay for cable boxes. I think that they love the cable box (as evidence: it seems to be an incredible headache to get CableCARDs, it is very costly to switch to TiVo, and the rent for cable boxes is pretty high). The fact that they spent 10 minutes doing a demo on the new platform seemed to indicate that—yet the platform has none of the elements of a modern platform relative to apps or openness, as an attendee pointed out. The responses to questions about net neutrality seemed to show a strong desire to avoid change while not acknowledging a changing world and the changing needs around connectivity. The overall dialog around Netflix seemed harsh to me, and it failed to consider just how much more pleasant (and modern) Netflix is for a consumer than the X1 experience shown. Disclaimer: I have had really significant problems with Comcast in our new place, having never used them before; this is my first time as a customer. As I have no choice for video or broadband, one could say it is challenging for me to be totally objective.

Each also stuck to revealing little, defending the status quo, and offering a view of the future that is the same but better.

Each of these CEOs and companies has an enormously challenging job and situation. Having shareholders demanding consistent quarter-by-quarter results, customers who do not really want change from these service providers but seek change elsewhere, and massive organizations to change all make for the potential of no-win interviews. Yet each also stuck to revealing little, defending the status quo, and offering a view of the future that is the same but better. My own experience and learning would offer that when facing massive disruptive challenges, engaging in the dialog serves all parties better, even though the normal school of thought for the incumbent is to double down, stick to talking points, and only reveal challenges through the lens of opportunity.

Disruptors

Several CEOs represented the leading edge of disruption. It is super easy to be a fan of disruption and to look at all that is going well with these leaders, just as it is easy to look at all the challenges the incumbents face. At the same time, these disruptors also represent a new level of frankness and openness about what they face or have faced.

More than the great work these leaders represent, I think it is important to look at how each is communicating and participating in a dialog. One might suggest that when these leaders are under pressure or face challenges of being disrupted they will start to take on the characteristics demonstrated above. I don’t think that is the case, simply because several of these leaders have already faced (or are facing) these challenges in their business. While clearly disruptors have less to lose, it is important not to lose sight of the fact that some of these represent large public companies (not mega cap, but large) and all represent very large customer bases from consumer to enterprise.

It was exciting to see these leaders head to the future, demonstrate a unique point of view, and engage in a two-way dialog about where things are going

For me, it was exciting to watch these interviews and how these leaders took on their own challenges. It was also exciting to see these leaders head to the future, demonstrate a unique point of view, and engage in a two-way dialog about where things are going.

Let’s look at some of these speakers.

Uber’s Travis Kalanick arguably represents the most used and mission-critical service for the attendees. The love for the service runs deep. Equally deep is the love for how Uber is taking on the government in the regulation of taxis and ride sharing (along with Lyft, an a16z portfolio company). At the same time, Travis faces a lot of questions about his aggressive style and reputation. He didn’t hold back, characterizing the task ahead at Uber as “a political campaign, and the candidate is Uber and the opponent is an asshole named Taxi.” OK, probably a bit colorful. What I loved was how he embraced even the disruption to his own business. After seeing a truly autonomous car from Google the night before, we heard the CEO of Uber telling us that self-driving cars are the future, not drivers. Considering that Uber is a marketplace for drivers, this embrace of your own disruption is great to see.

Most people expected a characteristically polite interview with Softbank’s Masayoshi Son-san, but were treated to candor and aggressiveness, though delivered in a very polite way. This is consistent with the amazing success Softbank and Yahoo BB had in Japan ten-plus years ago, bringing fast broadband and low prices to a market easily dominated by goliaths like NTT (the most visible building from the Shinjuku train station is the DoCoMo tower). Son-san told the story of starting Yahoo BB and “how they had: No experience, No technology, No capital. Just anger.” This was a true disruptor story, much like Uber’s story of realigning city government, only at a national scale. While it was not so challenging to be candid about WiMax, Son-san was super clear about the failed technological approach. He was also clear about his intention to go after broadband in the US with the same zeal he went after it in Japan.

Salesforce and Workday (Marc Benioff / Aneel Bhusri) together offered an incredibly clear view of disruption at the enterprise software level. If there’s one interview to watch, I would suggest this one because it has so much relevance to how software is made and brought to market from two CEOs who made and brought to market software in a previous generation. These are CEOs learning from their experience who have also engaged the marketplace differently as disruptors. There were many statements that are starting to seem less and less “bold” but nevertheless remain monumentally disruptive: “in a few years no one will run business software on premises”, “I run the company from a smartphone”, “if you’re going to build a cloud app you need to start from a clean sheet of paper—there’s no way around it”, “incumbents are holding on to the past and basically trying to monetize it”, “90% of the company can do all of HR on a smartphone” and so on. There were many profound elements of the dialog that revealed the depth of the strategic and technological shift these leaders are both creating and have experienced. For example, there was a description of competing with an incumbent like SAP who would go to a customer, negotiate a $40M deal to “upgrade” and then wait two years to get the latest features or start to use a SaaS model and the new features just show up. Yes there’s a ton of complexity in there and yes it is horribly disruptive to how businesses operate, but so was the introduction of the PC, client/server (upon which that $40M upgrade was based) and more. Finally, the discussion about being in a “post-server” world resonated with me as I just don’t see it as viable for companies to be building out their own data centers and this session provided a lot of evidence as to what these vendors are doing to make that a realistic assertion. From a format perspective I love the adjacency of these two and wish a couple of the incumbents were paired together.

Dropbox’s Drew Houston brought innovation, competition, and regulatory oversight into focus with his interview. This is another service that many people in the room not only use but rely on, which brings with it a degree of comfort and also a challenge, in that the audience knows a lot about the services represented. Not content to simply reiterate what was previously known and said about the company, Drew talked about the genuine frustration he felt as a cloud provider learning about the revelation that the NSA had tapped into cloud-based services. It would have been easy to lay low, but instead he made the quip that the “NSA doesn’t send a muffin basket and say welcome”.

Netflix’s Reed Hastings represents learning from disruption incredibly well, and that learning can be chronicled through his own appearances in the hot seat. Sometimes we forget that Netflix has been a public company for 12 years, to the day of this interview! For many of us it seems like ancient history that we used to get plastic discs in the mail and then return them Monday morning. Netflix is famously known for having disrupted itself, and not with grace, while on a path to streaming and today’s Orange is the New Black. I found the discussion looking backwards at missed opportunities and disruption absolutely fascinating. Reed talked about how the team would discuss “managing to the point of feeling like your skin crawled” and making decisions that were unbelievably difficult. Given the success right now, perhaps it is less difficult to look backwards at the challenges faced and mistakes made. It was amazing to hear this level of candor. Reed was even candid about something he said just a short time ago about the high price of Netflix stock, which he said at the time was too high and represented a euphoria. In contrast to Comcast, Reed was much clearer about how the net neutrality issues are playing out—he used a great example of Comcast trying to charge at both ends (both the consumer and the internet service) by talking about the flow of money through the system. He offered an operational view of “strong net neutrality”. Putting aside the specifics of the issue, the tone of looking forward, candor about the past, expression of a clear point of view, and a view of delivering new products and services along with the inherent risks and challenges come across as modern and consistent with a new style of leadership.

What comes next?

It might be too easy to read this and conclude big companies are legacy and being disrupted and new companies disrupt, but that would ignore two things.

First, this is a moment in time. While some would say disruption is akin to physics and must happen, there are dominant companies that reinvent themselves. Few even recall that IBM was close to bankruptcy when it reinvented itself from one dominant company to another, albeit in a very different way. And that reinvention progressed through nearly 20 years and returned 7X the broad stock market overall during that time.

Second, companies that disrupt are themselves prone to disruption down the road. We haven’t seen this dynamic play out yet for the companies here (though Netflix might be one). There is also a great deal of learning about how to reinvent and avoid the risk of being locked into a strategy and execution. Google doing the unthinkable of shutting down services or Facebook acquiring very large scale indirect competitors or technology complements are examples of a new generation of leaders acting differently relative to the potential disruption of core businesses.

Nothing is quite inevitable in business, but the potential to fall into familiar patterns is high.

Nothing is quite inevitable in business, but the potential to fall into familiar patterns is high. This past week at #codecon demonstrated the challenges and approaches to the core risk of the technology industry. In technology, the only thing you really do is monetize the work of the past and deliver innovation to the future. How leaders approach this reality is an evolving skill and #codecon allows us all to witness this evolution firsthand.

–Steven (@stevesi)

Written by Steven Sinofsky

June 1, 2014 at 11:30 am

Posted in posts, recode


Everyone starts with simplicity, no-one ends there and that’s OK

Designing a user experience for many millions of people is a unique job that a relatively small number of people practice. The responsibility of such an undertaking is immense, stressful, and one that can be all-consuming. Cold sweats, feeling sick to your stomach, and a constant sense of messing up are the norm for those who take on these challenges.

But someone has to do it!

This week brought quite a few big design changes that have folks talking, including Twitter adding mute, Gmail testing out a major revamp, and iOS 8 bringing “Surface split-screen to iPad”.

Everyone starts with simplicity, then what?

At introduction almost every successful product champions simplicity as a design and execution goal. Products are declared simple, minimal, and tailored to specific uses. Almost no one argues against these attributes and when marketing goes to position a tech product, invariably these attributes bubble up to the top of the favorable list. That’s because of the inherent and expected complexity of tech products as a starting point.

At introduction almost every successful product champions simplicity as a design and execution goal.

But where to go next? Tech products that are simple can start off well, but three needs arise immediately after launch.

  1. A customer need to address feedback and “fix” things that might be simple but are not quite there yet.
  2. A product need to remain competitive with the products that follow your introduction touting the same simplicity but also do a few more things (reading the reviews of your product will always demonstrate examples of wish lists)
  3. A business need to develop new products that can enhance revenue, margins, or maintain price points in the face of commoditization.

Tech products, particularly software products, are unique in that there is an almost natural tendency to organically add or to absorb features from competitive or adjacent products. Unlike physical hardware products that have COGS and BOM challenges, the incremental cost of software is simply limited to R&D (and operational costs for SaaS). That means when faced with the above existential properties, tech products will get new features pretty rapidly.

These new features will do constant battle with the simplicity of the initial release. Some argue that this is just bloat and invariably ruins products. Certainly from a design perspective this is a massive challenge. It takes enormous discipline. On the other hand, there are very few examples of software-based products that remain static. To remain static in features is to open yourself up to commoditization or disruption—a static target is an easy target.

Example: Palm Pilot

The introduction of the Palm Pilot (http://en.wikipedia.org/wiki/PalmPilot) is a fascinating historic example of simplicity leading to the isolation and expiration of a product. The designers did an amazing job building a remarkable product: all-day battery life, simplicity, and a specific, purpose-built design that made it the first truly modern and truly mobile productivity tool used by the masses.

I remember that in 1998, when I taught, the Palm Pilot was standard issue for all new MBA students. Shortly after that time, I recall a panel discussion with one of the original designers of the product. At the time the pitch was overarching simplicity and ease of use. Everyone agreed. Then there was an audience question that changed the dynamic.

Most leading edge folks at this time were carrying Motorola flip phones along with the calendar/notes/contacts in their Palm Pilot. The problem was every time they wanted to make a call, it was a multi-step process that involved looking up a number on the Palm Pilot and juggling the two devices while typing into the phone. While this was vastly easier than going back to your desktop or attempting to pull an 8lb laptop out of your bag, it was a usability disaster.

The question was simply: when could I have all the Palm Pilot functionality on my phone? Lots of words followed about how you could sync (with a cable, not the cloud, which didn’t yet exist), but the hardcore answer was that adding a fifth function to the Palm, a phone, would overload the functionality and make the product too complex and unusable. So the phone would never converge with things like your contacts and calendar.

Honest, that was the answer.

The problem was I was sitting there with my pre-production BlackBerry merrily connecting in real time to my calendar, contacts, and email on my Exchange server. It was incredibly clear that a non-converged device holding a static copy of some of the most important mobile data was no longer useful.

A pattern for how things evolve in practice

This challenge in software product design happens time and time again. It is the very nature of disruption. The new product does some things brilliantly well and simply, but is “missing” features people value from an existing product or an adjacent product.

Designers face the choice of adding new capabilities and potentially challenging the beauty of the initial release or facing competition and disruption from new competitors without that same strongly held belief. Marketing, channel, business and pricing can defend against these for a while but ultimately the ease and costs of just adding features in software will win.

The tension between user interface design and the realities and capabilities of software leads to a fairly predictable pattern for how tech/software products evolve. We can think of this pattern as evolution in five stages:

  • Introduce
  • Optimize
  • Deliberate
  • Succumb
  • Mature or Renew

Introduce. First you introduce a new product. In your view it is a thing of beauty. Whether you spent 3 years or 3 months, you are convinced it has exactly the right features done exactly the right way, though you know there are a ton of things on your “to do” list. Even if you are practicing lean methodologies, you are pretty sure you got it right in your heart, even though there is a lot of learning to follow. Your design embodies simplicity in design and messaging. Once your product starts to get used and you have the luxury of people relying on your work, you begin to see the holes and maybe even misfires in your experience design.

Optimize. You have a lot of work to do to reconcile your “to do” list with the feedback from actual people using your product. It turns out that what you thought the product was missing is pretty different from what everyone else thought the product was missing. You shrug this off and take the feedback seriously because you have real-world people using your product. Quite often the innovations introduced at this stage are formalizations of how people were using your product. Add-ins, customizations, or just conventions that enhanced usage become the sorts of things you formalize in the product. You very quickly iterate and get to a much more robust, reliable, stable, and usable version of what you had originally envisioned. This becomes the foundation of your product.

Deliberate. Evolving your product at this stage is very fun. While you believe you have a product that embodies your vision, with usage you begin to see broader usage and scenarios as part of your product. There might be third parties that do similar things as you but with a slightly different or much improved take on a specific mechanism you have in your product. Because you have become a leader with your product in “the way” things are done, when you decide to introduce an innovation it comes as a deliberate and thoughtful extension of your experience. Rarely do you see pushback from a broad base of customers when something new is introduced at this stage in your product’s evolution. In fact you often are seen as taking the product to a new level and providing a broader context in which your whole category or class of products should evolve. You are basking in the glow of innovating in the user experience of your space—you have come to define the category and now you’re defining the category to include new elements of user-experience.

Succumb. The feedback your product is receiving is growing, both positive and negative. As your product is used more and more, the usage scenarios and skill-levels of your customers change dramatically. Your product is used in ways you could never imagine and customers are asking for your product to do things you would never have imagined they would ask. If your product becomes essential for some scenario, then people will ask for your product to take on attributes and features of other familiar products (if you share photos, then you’re likely to be asked for photo editing for example; if you communicate, then before long you will be asked for rules and filters; if you type then you will be asked for more and more formatting, spelling, and entry features). If during the previous stage you really believed you had achieved a level of almost Bauhaus minimalism about your product, this is the stage when you feel a relentless pressure to add more. You’re hearing from customers, pundits, press and more about the must-haves and must-dos. This is by far the most stressful time in product development—you can’t just step back and not change things, but you constantly feel like changes are all part of a slippery slope. You constantly find yourself struggling between the minimalist view of the product you have been perfecting and the need from different types of customers for seemingly contradictory types of features. It is why at this stage as a designer you feel like you are succumbing to feedback and introducing features that you know some people will value and others will see the other way or maybe just not even notice—you feel like you’re bloating your simple product. These are the hardest decisions to make and are the price of success. If you try to hang on to simplicity, you will see competitors pass you by or you’ll see engagement stagnate.

Mature or Renew. The natural evolution of almost every product involves a fairly long period of incorporating features in the previous stage—you add some new things, incrementally change some existing things, and in general work to find a path through the maze of contradictory feedback and complex market needs. Over time your product will develop a different personality and a unique set of assets, but it is going to be far from that original version. While you might have hundreds of millions of customers, at some point the experience of your product is such that the market collectively demands an overhaul. The challenge of course is that the collective market is very different from any one individual or an organization (for enterprise products). The latter two, unlike the aggregate view of the market, do not necessarily embrace change. This point in the evolution of your product is where you face disruption—the telltale signs of reduced engagement, alternative tools and experiences, or just a lack of energy in your ecosystem are signs that your product is overdue for improvements, new features and more. Software affords you the chance to reimagine your product and presents you with the opportunity at this time. Of course with hundreds of millions of customers, a very large number in absolute terms will not want any change at all. That’s why this stage of evolution is a choice—you can incrementally mature your product design or you can choose to renew your product design. These two really are that rare case of an “either-or” choice. As a product designer, you will be faced with a big set of decisions when you have to design what comes next for a mature product. Be careful what you wish for, as your design might be so successful that one day you face the prospect of redesigning it in the context of a significant customer base.

Products reaching a mature stage face a fork in the road.

Products reaching a mature stage face a fork in the road—one where you can renew or watch your product slowly shrink in relevance. This might seem dramatic, but the velocity of change in the technology world combined with the ease of switching shows that one day what might seem like “the way things are done” will risk becoming “the way things used to be” much sooner than expected.

Disruption and technology transitions are part of the context of designing products and experiences.

From search home pages to photo editors, word processors, operating systems, music players, and more, these stages are all part of the evolution of a user experience. The beauty of a software interface is that, unlike the physical world, you are given the chance to move things around, change, and improve the product for little to no manufacturing cost, but at each stage you have to work through the cost of change to customers.

No other product in history has had the ability to be used by so many yet be so flexible in how it is used.

Simplicity in design is what we all strive for and often how we begin a product lifecycle. With success, maintaining simplicity over time while also remaining competitive is where design and product management are really challenged.

The “soft” of software makes this challenge even more acute and the pressures to add or change a product even more difficult to resist.

–Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

May 13, 2014 at 11:00 am

Posted in posts


Tablets v. the World

Every time the topic of tablets versus laptops (and or smartphones) comes up, we end up in another endless debate about scenarios, consumption, productivity, keyboards, mice, screen size, multitasking, and more. In every case the debate centers around the core uses of “PCs” today—and PC is in quotes because the PC itself is a remarkably flexible device that has morphed over the years into many form factors. People study run-rates and trends and try to predict the demise of one over another and so on.

It isn’t so simple.  But it also isn’t so binary.

For more on this dialog, you can also catch a couple of podcasts from Benedict Evans and me (see a16z Podcast: Engineering a Revolution at Work and a16z Podcast: When Your PC Expires).

Disruption

Every disruptive innovation shares (at least) two characteristics.  First, the newly introduced technology is more often than not inferior in some key dimensions, while superior in some dimensions that in the current context seem to matter more.  Second, despite much consternation, the technology being disrupted is almost certainly going to remain a vital part of the landscape in some form or another for quite some time—either simply because of the long tail of legacy or because it serves a function that is not replicated at all.

What changes, however, is where the emphasis in the ecosystem takes place, usually with a broader set of customers. The ecosystem is not a static world, and it too plays a vital role in the transition. Where the ecosystem is investing is always a leading indicator of where the transition is heading.

We can look at transitions such as entertainment (theater, radio, film, TV, video, streaming) or transportation (horses, boats, trains, cars, planes) or even storage (removable, hard drives, USB, flash) as examples of where these traits are demonstrated. Computer user-interface moving from characters to GUI to touch shows these traits as well.

The introduction of the iPad, and the modern mobile OS (and smartphones) in general, shows many of these characteristics.  The modern OS in combination with new hardware has many characteristics that separate it from the PC era including sealed case (non-extensible hardware), ultra-low power consumption, rich embedded graphics, touch user interface, app store, exclusively wireless connectivity, and more.  This is the new platform which is where so much innovation in apps is taking place.

Here is where the debate starts—some of those features are either not valued or true limitations when compared to the vastly more capable PC model. There’s no doubt about that. It is just a fact. Not only does the PC have a wider range and more “powerful” hardware options, but it also benefits from 20 years of software that drives a vast array of processes, devices, workflows, and more.  Tablet hardware is still immature relative to “PC standards” and apps do not seem to cover so many of the existing PC scenarios (even if they cover scenarios not even dreamed of or possible on PCs).

Hardware and Software

Two things are still rapidly changing that will account for a much broader transition from the dichotomy of tablet OR laptop today to a world where tablets with modern operating systems begin (or have begun) to replace many scenarios occupied by laptops.

We will soon start to see more innovation in tablets.

First, the hardware in tablets will benefit enormously from Moore’s law. While the pace of changes in smartphones (screen size, cpu, gpu, specs) has been faster than we have seen in tablets, my guess is we will soon start to see more innovation in tablets. In terms of both form factor and specs, tablets have been reasonably static since introduction. There are give or take two screen sizes and fairly modest spec bumps. My guess is that since the same vendors make both smartphones and tablets, the vast amount of energy has been focused on smartphones for now (just as when the PC industry shifted innovation from desktops to laptops and then swung back again to focus on all-in-ones).  I suspect we will start to see more screen sizes for tablets and more innovation in peripherals and capabilities, along with specs that benefit from the rapid progress in Moore’s law.

Second, all the hardware innovation in the world isn’t enough to drive new scenarios or even more dramatic replacement scenarios. The amazing innovation in software on smartphones shows what can take place when developers of the world see potential and tap into the power of a new platform.

Two Examples

I wanted to offer two examples of where the transition to tablets has been surprisingly “behind the scenes” and really out of sight, but very interesting from a technical perspective.

Many of us find ourselves in the AT&T store all too often because we’re adding a line, replacing a phone, getting a new SIM or whatever.  Over the past year or so, AT&T has aggressively rolled out iPads to replace the in-store PCs that were used for customer service. This is a massive software challenge. The in-store PCs had point of sale capability, bar code readers (for SIMs), and a large array of apps that drove the entire customer engagement (some of these apps ran Windows OpenStep believe it or not).

He kept telling me how frustrating it was to deal with the lack of capabilities of the new tools.

If you happened to visit the store during the early stages of the transition, you would have been able to sense the frustration of the account managers. There were many unfamiliar elements to the new apps on the iPads and, worse, there seemed to be many things that the desktop tools could do that the iPad apps could not. For example, I got caught trying to merge two accounts; the rep was forced to call the regional call center to do the work, and while on hold he kept telling me how frustrating it was to deal with the lack of capabilities of the new tools. At the same time, the iPad had cool integration with portable bar code readers, the reps could easily show you what is on the screen to verify information (like picking a new phone number), and so on.

The transition is well underway now and I don’t think folks notice any more.

Today I spent a few hours with my friendly Comcast technician while he diagnosed something faulty with our cable signal.  While he has a fancy signal meter, most of the work he does is actually adjusting things via a remote app on an iPad.  Comcast technicians (as I learned, the ones in vans but not “bucket trucks”) were recently issued iPads. Sure enough during the visit he was on the phone to a central office and was saying “I have an iPad now and so without my PC I’m not able to get that measurement”.

The tech said, “I have an iPad now and so without my PC I’m not able to get that measurement”.

I was having flashbacks to the frustrated AT&T reps. It turns out this technician used to have a PC and ran the same software as the tech at the other end of the phone (and in the bucket trucks). They are moving techs to iPads because the techs do not have to carry chargers, the tablets are more resilient when dropped, and the integrated Verizon connectivity makes for a far more convenient service tool. Plus, things like entering the MAC address become much easier with bar code readers and a much more agile form factor, as one example.

The conversation I had with a tech (always the anthropologist) was fun.  He said they have a whole tracking and feedback process that helps them to prioritize what features the software folks need to add to the apps being used in the field. Turns out, I’m guessing, they built some pretty elaborate desktop software that did just about everything since it was used on the ground and in the data center, but they likely had little understanding of just what was used and how often. The creation of new apps will drive a new level of customer service and technician capabilities, even if there are some hiccups along the way.

Broader Implications

These two examples are hard core line of business tools. We’re seeing the same thing in the line of business tools used by folks at all sorts of companies big and small. The new generation of mobile-first SaaS tools make it far easier to create “documents” for sharing and collaboration, access business information, or participate in business services from CRM to accounting to benefits.  The tools these are supplanting were developed over a decade and have tons of features and optimizations but lack the mobility and internet access that is so highly valued in a modern workplace. The transition will have some hiccups but is happening.

Along with these tools, so many of the tools for creation and production that are PC-based are being reimagined and recast for modern work. We can see this revolution in Adobe’s work on photography for professionals with tablets, Paper and Pencil from FiftyThree, and of course the long list of productivity tools we talk about often on this blog. These tools do less, but they also do more. When combined with tablets and smartphones on modern platforms, they enable a new view of the work and scenarios.

The characterization of tablets as “neither here nor there” or “in between tablet and a laptop” misses the reality that the modern nature of tablet platforms—both hardware and software—will drive innovation and subsequent transition for many many scenarios from traditional laptop platforms to tablet platforms.  We’re in the middle period where this is happening—just as when people said cars were too expensive for the masses and would not be mainstream or when the GUI interface lacked the hardware horsepower and “keystroke productivity” to replace character based tools.

New hardware and new software will surface new capabilities and scenarios not previously possible (or imagined).

The traditional laptop will power hundreds of millions of endpoints for a very long time. But as the two examples here show, even in the most hardcore worlds where device integration meets custom software, there is a transformation and transition taking place.  New hardware and new software will surface new capabilities and scenarios not previously possible (or imagined). It won’t be smooth and it won’t please everyone immediately, but it is happening–just as both of those same scenarios transitioned from character to GUI.

It really is about the software. That change is happening all around us.

–Steven (@stevesi)

Written by Steven Sinofsky

May 6, 2014 at 3:30 pm

Posted in posts


Designing for BYO, a product manager view

Many companies I work with that are creating tools to enhance workplace or personal productivity depend on the “bring your own” or BYO movement to get their product bootstrapped or to just get in the door. Once in the door, the product design challenges of BYO begin.

After those first customers, they count on broader, viral usage within a company to drive revenue growth. While you likely built your product with the notion that “customer equals purchaser,” once this changes to “business equals purchaser” you are going to get a whole different level of feedback.

My guess is most every app and service is both excited and terrified to get to the moment when there is a choice between cozying up to IT and risking alienating your newly minted enthusiasts. It is, by all accounts, a choice. Most I talk to feel like they will navigate this by focusing on customers first and hope to overwhelm the negatives often associated with IT.

Walk that fine line to enable your product to be at some state of détente with IT.

Get over it. Not entirely, of course, but there’s some subtlety at play. At some point you are going to face a fork in the road: navigate enterprise management or face existential challenges. You can choose to be managed without your cooperation or, worse, be blocked and literally unable to access important assets that your product requires. Alternatively, you might choose to walk that fine line to enable your product to reach some state of détente with IT.

I know that sounds awful and while I am sure there are some exceptions (in both organizations and products), this is by far the most normal path. It doesn’t have to be a sell-out, but when done well you can bet that you’re going to be in great position to advance the state of the art and contribute positively to enterprise infrastructure.

In fact, as I was typing this post there was this thoughtful article on putting customers first in business apps.

The essence of BYO is that one can easily acquire and begin to use a device, product or service without IT involvement of any kind. You might need to know the server name for email or maybe how to export data from a line of business system, but otherwise the device or app can tap into the necessary resources without first going through IT and/or purchasing. Even better, these tools likely make it very easy to share information with coworkers or collaborators at other organizations. All folks need is a free email account as a gateway to sharing.

Of course all this ease of use has at least two main IT downsides.

First and foremost is the security of the network overall. Devices on a network, running code of unknown origin, tapping into servers, are a big risk. What can be transmitted by those devices and apps concerns IT. Inbound PDF attachments or simple USB sticks seemed harmless enough at first, until they became massive vectors.

Second, the data and servers being accessed contain information that you need to use but do not own. These are corporate assets, and managing and tracking them is a fiduciary responsibility for IT; in some cases, such as HIPAA or SEC regulations, the penalties for messing up are severe. The simple act of installing something like LogMeIn or internet messaging can potentially become a significant liability.

My own personal experience “helps” me to see this pattern. Working on Microsoft Office in the early days, we were very clearly a “bottom up adoption” product. People were going to stores and buying the product with their own money and creating amazing looking documents they would bring into work (often on PCs bought with personal funds at those same stores). Pretty soon groups of people were using corporate expense accounts to acquire “5 packs” of Office. Then over time, Microsoft grew an enterprise sales force that could offer large deals.

That’s the sales side, but on the product side the management and deployment of the product (deployment being decidedly old school now) became unwieldy. As a result, the late 1990s saw a movement to reduce the so-called “TCO,” or total cost of ownership. TCO mandated a vast number of controls across the entire platform, and from that grew a whole generation of features, from the registry to logon scripts to the now-dreaded “corporate desktop.” The TCO numbers reached epic levels, with owning a $1,500 PC described as a $20,000-per-year expense to companies.

While I was dragged kicking and screaming to deliver features that I felt could be used to make the product worse, the reality was that at the time this was also what grew the business. The tradeoffs, debates, and design choices were all very real.

In a startup, these choices are much more existential than they were for us back then. Given the hurdles to overcome to become a widely used tool, there’s a good chance you might want to be more proactive about how your product fits with BYO.

As a product manager facing this decision point, you have this intense belief that IT wants to make your product worse, harder to use, and to basically ruin your good work. The fact that so few built-for-IT products have the design sense, usability, or approachability of apps and services focused on consumers only reinforces this.

While there are dozens of potential traps and pitfalls that can result in a product falling out of favor, it is a good idea to consider a few important design choices you can make now that will enable your consumer and BYO product to be viewed in a positive light. It is important that these design choices be considered product assets rather than objection handlers.

Ultimately, if you design a product to be used in business, where you can charge more, it should be better, not worse, than a product used in the consumer space. It used to be that the business versions of products charged more so they could do less and be harder to use and acquire. The SaaS and app models invert this. Phil Libin, founder of Evernote, put it best: “business class means superior and we challenge ourselves to make our product better when you upgrade to the business version”.

Business class means superior and we challenge ourselves to make our product better when you upgrade to the business version. — Phil Libin, Evernote

The following are five product areas to consider when it comes to making a product business ready:

  • Identity and authentication. The first thing a business needs from a product is that employees should sign in to the product using business-owned credentials (such as Active Directory). This allows IT to send a clear message to the individual that they are operating in a business context. This needs to support the authentication mechanisms used at the organization and enforce the associated password and security policies. At the same time, you owe it to your own ease of use that stand-alone credentials can be used, especially for collaboration. How you manage the bridge and the commingling of credentials depends on the flow of assets through your product.
  • Network usage. IT organizations guard their network across several dimensions. Platform providers make it possible to use VPN (secured with enterprise credentials) or other access methods for WiFi. Your product should use well-known/documented ports and be clear with IT about what travels over the wire and in what volumes. Techniques like polling, using obscure ports, and more will only hinder your product usage.
  • Changes related to re-orgs. In an organization of any size, employees quit, vendors are fired, or staffing on a project just changes. If your product is used across a group of people, then IT will want to be there to assist in supporting these changes within your product. How content remaining on devices can be recalled, or how a person loses permissions to content, are important design choices you can make in building a product that is BYO friendly.
  • Content “ownership”. If your product creates or consumes content, then your product owes it to IT to participate in the content management responsibility of the organization. At one extreme, the clipboard exists in IT apps and in every other app, so you can dodge this question by saying it isn’t your exclusive responsibility. On the other hand, by having mechanisms for IT to get some telemetry and take action on content, you invite your product to be desired by IT, not just challenged. More than any other area, this is where many potential solutions exist, and many possible ways to either make the product worse or upgrade it to business class.
  • Features. Products are more than editors and tools for sharing, so there are going to be unique features in your product. Some of those unique features will intersect with business policy in ways that might run counter to it. Sometimes this is simple, such as an ability to generate email notifications, which might be frowned upon. Other times it is complex, as when a feature runs directly afoul of regulatory compliance. At some level there are going to be features you give IT permission to enable or disable. No area is more challenging, of course, and it is important to think hard about the design tradeoffs when a feature might not be there. A feature like password protection might be great for consumers but becomes a huge problem for IT when personnel change. Alternatively, you might have a feature that becomes a “must use,” and if that’s the case you want to consider how something you thought of as optional becomes permanent. For example, you might optionally support a confirmation email when adding new people to a project, and IT might require that email be sent to produce a record of access changes.
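
To make the identity and feature-control points above concrete, here is a minimal sketch in TypeScript of one way a product might model both stand-alone and business-owned identities alongside an IT-distributed policy document. Everything here is an illustrative assumption rather than any particular platform's API: the `OrgPolicy` shape, the `canUseFeature` helper, and the feature names are hypothetical.

```typescript
// Hypothetical shapes only: a sketch of the design point above,
// not a real product's or identity provider's API.

type ConsumerIdentity = {
  kind: "consumer";
  email: string;              // stand-alone credentials kept for ease of use
};

type EnterpriseIdentity = {
  kind: "enterprise";
  subject: string;            // subject claim from the organization's identity provider
  issuer: string;             // e.g. the tenant's OIDC issuer URL (assumed setup)
  groups: string[];           // group membership IT can use for permissions
};

type Identity = ConsumerIdentity | EnterpriseIdentity;

// A policy document IT could distribute to devices: which features are
// disabled, which are mandatory, and whether business content requires
// enterprise sign-in.
interface OrgPolicy {
  requireEnterpriseSignIn: boolean;
  disabledFeatures: Set<string>;   // e.g. "password-protect"
  mandatoryFeatures: Set<string>;  // e.g. "access-change-audit-email"
}

function canUseFeature(id: Identity, feature: string, policy: OrgPolicy): boolean {
  // In this sketch, org policy only applies to enterprise identities.
  if (id.kind === "consumer") return true;
  return !policy.disabledFeatures.has(feature);
}

// Example: IT turns off client-side password protection (a support problem
// when personnel change) and requires an audit email on access changes.
const policy: OrgPolicy = {
  requireEnterpriseSignIn: true,
  disabledFeatures: new Set(["password-protect"]),
  mandatoryFeatures: new Set(["access-change-audit-email"]),
};

const employee: Identity = {
  kind: "enterprise",
  subject: "user-123",
  issuer: "https://login.example-tenant.com",
  groups: ["project-alpha"],
};

console.log(canUseFeature(employee, "password-protect", policy)); // false
```

The particular shape matters less than the design choice it represents: the policy is an explicit, inspectable artifact that IT can reason about and audit rather than a pile of switches bolted on after the fact, while stand-alone credentials continue to work untouched for consumer use.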

There are many other avenues to consider. I think it is possible to make a product better when enabled for business even if you start from the very solid business and design foundation of customer first.

The modern mechanisms for administering IT control are vastly superior to the PC-era mechanisms. The days of running arbitrary code, tweaking every aspect of the UI, or installing add-ins that alter the base functionality of a product are long gone. These approaches showed how great products can be made unfamiliar, hard to use, and less robust even with the best intentions. Worse, the mechanisms developed to enable these approaches proved to be vectors for security problems, performance challenges, and in general sources of unpredictability and unreliability.

Today’s devices support state-based management, app stores, and security contexts that greatly improve the ability to deliver upgraded business features. To many, these tools are not yet enough. The platform vendors are carefully balancing the approaches they introduce against the known downsides of the old approaches.

There’s a disruption in the way devices, apps, and information are managed, but that does not necessarily mean an elimination.

–Steven Sinofsky

Written by Steven Sinofsky

May 1, 2014 at 3:00 pm

Posted in posts


Shipping is a Feature: Some Guiding Principles for People That Build Things

I love questions about advice because they really force one to think carefully about what to say. My former colleague (and fellow Cornellian) Jackie Bavaro, now at Asana, who recently co-authored a thoughtful book on the ins and outs of securing a role in product management, asked me a question on Quora about advice for product managers.


In the back of my head I always have that product manager view of accountability, and so rather than “advise,” I much prefer to have a dialog in context and make sure accountability stays with the person asking. It is difficult enough to be a manager and avoid the constant pull of “telling people what to do,” and certainly on big topics one really has to be careful. At the same time, spouting clichés like “what do you think” or “it depends” can frustrate folks. This in itself is a valuable PM lesson.

I’ve been really lucky (literally) to work with many amazing folks, and so many routine interactions yield empowering and powerful insights that one can bring forward. I’ve used blogging over many years to share those, and many are also republished in our book on strategy and collaboration.

In thinking about the question and some of the recent design-oriented discussions, here are five takeaways that have always guided me. They didn’t originate with me, except in the sense that I discovered their value while making some mistake.

For these five bits of advice, I chose to focus on what I think is the most challenging aspect of being a PM, which is achieving clarity and maintaining a point of view for a product when all forces work against this very thing. What customers value most in a product is that it “just works” or “does what it is supposed to do,” and yet at every step in a product, the dynamics of design work to make this the most difficult thing to achieve. For those that have not built products, understanding the context and dynamics of decision making while building something is a bit abstract.

Shipping is a feature. Every PM knows this, but it is also the hardest thing to get right. As a PM you throw around things like “the enemy of the good is the perfect” or, well, “shipping is a feature” all the time, yet we all have a hard time getting a product out the door. There’s always more to do to get it right. The way this was taught to me was so old it involved software being shipped in a box on floppies, but the visual has stuck with me. When you ship a product in a box, the back of the box shows screen shots and marquee features. What does not come on the box are lists of all the features you thought of doing or different executions you considered. It is that simple. Once you release the product you begin a new adventure building the next iteration. It is almost always the case that what you were thinking before you had customers will change in some ways in the version that comes next. So ship. Learn. Gather data. Iterate. Whether you spend three years or three months developing a product, this motion is the same.

You get paid to decide. Some people love making decisions on their own. Other people need socialization and iteration to make a choice. Either way can work (or not) as a product manager, but to be great you really do have to decide. Deciding anything important or meaningful at all means some people will disagree. Some might really disagree a huge amount. The bottom line is a decision has to be made. A decision means choosing not to do something, and achieving clarity in your design. The classic way this used to come up (and still does) is the inevitable “make it an option.” You can’t decide: should a new mouse wheel scroll or zoom? Should there be conversation view or inbox view? Should you AutoCorrect or not? Go ahead and add it to Preferences or Options. But really the only correct path is to decide to have the feature or not. Putting in an option to enable it means it doesn’t exist. Putting in an option to disable it means you are forever supporting two (then four, then eight) ways of doing something. Over time (or right away) your product has a muddled point of view, and then worse, people come to expect that everything new can also be turned off, changed, or otherwise ignored. While you can always make mistakes and/or change something later, you have to live with combinatorics or a combinatoric mindset forever. This is the really hard stuff about being a PM but also the most critical: you bring a point of view to a product. If a product were a person, you would want that person to have a clear, focused world-view.

Can’t agree to disagree. Anything that requires more than one person to do (and by definition as a PM you are working with Engineering/Dev so that means what you do) will reach a point where you have to do something and not everyone will agree. On a well-run team there are very rarely that many decisions that span many people all of whom have a voice (if you do, then fix that problem first). When you do reach a point where you just don’t agree, first, contemplate the first lesson and realize you have to ship. Second, see the previous lesson and realize you do have to decide. That leaves you deciding something that some people (or one person) won’t like. What you don’t want to do is end that meeting over coffee with the infamous “we have to ship and I think we should do X, so let’s just move on and agree to disagree”. Endings like that are never good. The “told you so moment” is just out there waiting to appear. The potential for passive-aggressive org dynamics is all too real. Ultimately, this is just a yucky place to be. So if you’re on the “winning” side of such a dialog then you have to bring people along every day for a while. You can’t remind people who was right, or that it is your decision and so on. If you’re on the “losing” side you need to support the team. You can’t remind people when little things go wrong (which they will) that you were right. Once a choice is made, the next step is all about the greater good. Nothing is harder for technologists than this because as technologists we believe there is a “right” answer and folks that don’t agree are simply “wrong”. Context is everything and remember you have to ship–as a team.

Splitting the baby is, well, splitting the baby. Even with all those lessons, time and time again I’ve faced situations where there is a stalemate on the team and the suggestion is made for a middle-of-the-road choice. A feature will appear sometimes. Performance won’t be terrible, but it won’t be great. Customers can do 90% of something, but not everything. Yet it would be possible to decide to have the feature, have great performance, or deliver 100%; it is just that the team dynamic is placing a value on finding a middle road. The biblical narrative of splitting the baby often comes into play here, because in practice, if you do arrive at such a compromise, what you have in effect done is reach a state where no one is happy in the room and no one is happy down the road. Of course, compromise is a critical part of product design for many reasons. The bigger the team, the more varied the customers, and the greater the divergence of customer needs, the stronger the need to find middle paths for complex choices. There is magic when you can do this without just muddling the product. But there is a risk that if splitting the baby becomes your design language, the output is exactly what you don’t want to achieve.

10% better can be 100% different. The hardest choices a PM can make are not the new choices for a product—a clean slate is challenging, but that is the truly fun part of design for many. It is not easy, but it is fun. The real challenge comes when deciding what to do next time around (in three weeks, three months or more). The first thing you do is remember all those things you could not get done or had to decide sub-optimally. So you think you’ll go back and “polish” off the work. But remember, now you have customers and they are using the product. They might not see what wasn’t done. They might actually like what you ended up doing. Your temptation to tweak things to “finish” them might come across as better in an incremental sense, but will it be that much better for existing customers? Will it be so much better for new customers that it is worth the risk of touching that code again? The big question for you is whether you can really measure how much better something is—is it more efficient, faster, deeper, etc.? There are many cases, particularly in user-experience flow and design, where incremental improvement simply amounts to speed bumps in using a new release and the downside masks the upside. Sometimes when you improve something 10% what you really do is make it 100% different.

Context is everything in decision making as a PM. The skills and experience of the team matter. The realities of where the business is at a given time, and the ability to execute on a proposal, are all factors that weigh heavily. Hindsight is 20/20 or better in the world of PM, and we all know there are many standing by to offer perspective, advice, or even criticism of the choices a product makes.

If you don’t make those choices in a timely manner, then there won’t be much to talk about. Sometimes the most difficult thing to do is keep moving forward. That’s why some of the most valuable advice I’ve received relates to the very challenges of making tough product calls.

 

–Steven Sinofsky @stevesi

This post originally appeared on a16z.com.

Written by Steven Sinofsky

April 17, 2014 at 8:30 am

You’re doing it wrong

Smartphones and tablets, along with apps connected to new cloud-computing platforms, are revolutionizing the workplace. We’re still early in this workplace transformation, and the tools so familiar to us will be around for quite some time. The leaders, managers, and organizations that are using new tools sooner will quickly see how tools can drive cultural changes — developing products faster, with less bureaucracy and more focus on what’s important to the business.

If you’re trying to change how work is done, changing the tools and processes can be an eye-opening first step.

Check out a podcast on this topic hosted by Andreessen Horowitz’s Benedict Evans. Available on Soundcloud or on a16z.com.

Many of the companies I work with are creating new productivity tools, and every company starting now is using them as a first principle. Companies run their business on new software-as-a-service tools. The basics of email and calendaring infrastructure are built on the tools of the consumerization of IT. Communication and work products between members of the team and partners are using new tools that were developed from the ground up for sharing, collaboration and mobility.

Some of the exciting new tools for productivity that you can use today include: Quip, Evernote, Box and Box Notes, Dropbox, Slack, Hackpad, Asana, Pixxa Perspective, Haiku Deck, and more below. This list is by no means exhaustive, and new tools are showing up all the time. Some tools take familiar paradigms and pivot them for touch and mobile. Others are hybrids of existing tools that take a new view on how things can be more efficient, streamlined, or attuned to modern scenarios. All are easily used via trials for small groups and teams, even within large companies.

Tools drive cultural change

Tools have a critical yet subtle impact on how work gets done. Tools can come to define the work, as much as just making work more efficient. Early in the use of new tools there’s a combination of a huge spike in benefit, along with a temporary dip in productivity. Even with all the improvements, all tools over time can become a drag on productivity as the tools become the end, rather than the means to an end. This is just a natural evolution of systems and processes in organizations, and productivity tools are no exception. It is something to watch for as a team.

The spike comes from the new ways information is acquired, shared, created, analyzed and more. Back when the PC first entered the workplace, it was astounding to see the rapid improvements in basic things like preparing memos, making “slides,” or the ability to share information via email.

There’s a temporary dip in productivity as new individual and organizational muscles are formed and old tools and processes are replaced across the whole team. Everyone individually — and the team as a whole — feels a bit disrupted during this time. Things rapidly return to a “new normal,” and with well-chosen tools and thoughtfully designed processes, this is an improvement.

As processes mature or age, it is not uncommon for those very gains to become burdensome. When a new lane opens on a highway, traffic moves faster for a while, until more people discover the faster route, and then it feels like things are back where they started. Today’s most common tools and processes have reached a point where the productivity increases they once brought feel less like improvements and more like extra work that isn’t needed. All too often, the goals have long been lost, and the use of tools is on autopilot, with the reason behind the work simply “because we always did it that way.”

New tools are appearing that offer new ways to work. These new ways are not just different — this is not about fancier reports, doing the old stuff marginally faster, or bigger spreadsheets. Rather, these new tools are designed to solve problems faced by today’s mobile and continuous organization. These tools take advantage of paradigms native to phones and tablets. Data is stored on a cloud. Collaboration takes place in real time. Coordination of work is baked into the tools. Work can be accessed from a broad range of computing devices of all types. These tools all build on the modern SaaS model, so they are easy to get, work outside your firewall and come with the safety and security of cloud-native companies.

The cultural changes enabled by these tools are significant. While it is possible to think about using these tools “the same old way,” you’re likely to be disappointed. If you think a new tool that is about collaboration on short-lived documents will have feature parity with a tool for crafting printed books, then you’re likely to feel like things are missing. If you’re looking to improve your organizational effectiveness at communication, collaboration and information sharing, then you’re also going to want to change some of the assumptions about how your organization works. The fact that the new tools do some things worse and other things differently points to the disruptive innovation that these products have the potential to bring — the “Innovator’s Dilemma” is well known to describe the idea that disruptive products often feel inferior when compared to entrenched products using existing criteria.

Overcoming traps and pitfalls

Based on seeing these tools in action and noticing how organizations can re-form around new ways of working, the following list compiles some of the most common pitfalls addressed by new tools. In other words, if you find yourself doing these things, it’s time to reconsider the tools and processes on your team, and try something new.

Some of these will seem outlandish when viewed in today’s context. As a person who worked on productivity tools for much of my career, I think back to the time when it was crazy to use a word processor for a college paper; or when I first got a job, and typing was something done by the “secretarial pool.” Even the use of email in the enterprise was first ridiculed, and many managers had assistants who would print out email and then type dictated replies (no, really!). Things change slowly, then all of a sudden there are new norms.

In our Harvard Business School class, “Digital Innovation,” we crafted a notion of “doing it wrong,” and spent a session looking at disruption in the tools of the workplace. In that spirit, “you’re doing it wrong,” if you:

  1. Spend more time summarizing or formatting a document than worrying about the actual content. Time and time again, people over-invest in the production qualities of a work product, only to realize that all that work was wasted, as most people consume it on a phone or look for the summary. This might not be new, but it is fair to say that the feature sets of existing tools and implementation (both right for when they were created, I believe) would definitely emphasize this type of activity.
  2. Aim to “complete” a document, and think your work is done when a document is done. The modern world of business and product development knows that you’re never done with a product, and that is certainly the case for documents that are steps along the way. Modern tools assume that documents continue to exist but fade in activity — the value is in getting the work out there to the cloud, and knowing that the document itself is rarely the end goal.
  3. Figure out something important with a long email thread, where the context can’t be shared and the backstory is lost. If you’re collaborating via email, you’re almost certainly losing important context, and not all the right folks are involved. A modern collaboration tool like Slack keeps everything relevant in the tool, accessible by everyone on the team from everywhere at any time, with full history and search.
  4. Delay doing things until someone can get on your calendar, or you’re stuck waiting on someone else’s calendar. The existence of shared calendaring created a world of matching free/busy time, which is great until two people agree to solve an important problem — two weeks from now. Modern communication tools allow for notifications, fast-paced exchange of ideas and an ability to keep things moving. Culturally, if you let a calendar become a bottleneck, you’re creating an opening for a competitor, or an opportunity for a customer or partner to remain unhappy. Don’t let calendaring become a work-prevention tool.
  5. Believe that important choices can be distilled down into a one-hour meeting. If there’s something important to keep moving on, then scheduling a meeting to “bring everyone together” is almost certainly going to result in more delays (in addition to the time to get the meeting going in the first place). The one-hour meeting for a challenging issue almost never results in a resolution, but always pushes out the solution. If you’re sharing information all along, and the right people know all that needs to be known, then the modern resolution is right there in front of you. Speaking as a person who almost always shunned meetings to avoid being a bottleneck, I think it’s worth considering that the age-old technique of having short and daily sync meetings doesn’t really address this challenge. Meetings themselves, one might argue, are increasingly questionable in a world of continuously connected teams.
  6. Bring dead trees and static numbers to the table, rather than live, onscreen data. Live data analysis was invented 20 years ago, but too many still bring snapshots of old data to meetings which then too often digress into analyzing the validity of numbers or debating the slice/view of the data, further delaying action until there’s an update. Modern tools like Tidemark and Apptio provide real-time and mobile access to information. Meetings should use live data, and more importantly, the team should share access to live data so everyone is making choices with all the available information.
  7. Use the first 30 minutes of a meeting recreating and debating the prior context that got you to a meeting in the first place. All too often, when a meeting is scheduled far in advance, things change so much that by the time everyone is in the room, the first half of the hour (after connecting projectors, going through an enterprise log-on, etc.) is spent with everyone reminding each other and attempting to agree on the context and purpose of the gathering. Why not write out a list of issues in a collaborative document like Quip, and have folks share thoughts and data in real time to first understand the issue?
  8. Track what work needs to happen for a project using analog tools. Far too many projects are still tracked via paper and pen which aren’t shared, or on whiteboards with too little information, or in a spreadsheet mailed around over and over again. Asana is a simple example of an easy-to-use and modern tool that decreases (to zero) email flow, allows for everyone to contribute and align on what needs to be done, and to have a global view of what is left to do.
  9. Need to think about which computer or device your work is “on.” Cloud storage from Box, Dropbox, OneDrive and others makes it easy (and essential) to keep your documents in the cloud. You can edit, share, comment and track your documents from any device at any time. There’s no excuse for having a document stuck on a single computer, and certainly no excuse for risking the use of USB storage for important work.
  10. Use different tools to collaborate with partners than you use with fellow employees. Today’s teams are made up of vendors, contractors, partners and customers all working together. Cloud-based tools solve the problem of access and security in modern ways that treat everyone as equals in the collaboration process. There’s a huge opportunity to increase the effectiveness of work across the team by using one set of tools across organizational boundaries.

Many of these might seem far-fetched, and even heretical to some. From laptops to color printing to projectors in conference rooms to wireless networking to the Internet itself, each of those tools was introduced to skeptics who said the tools currently in use were “good enough,” and the new tools were slower, less efficient, more expensive, or just superfluous.

The teams that adopt new tools and adapt their way of working will be the most competitive and productive teams in an organization. Not every tool will work, and some will even fail. The best news is that today’s approach to consumerization makes trial easier and cheaper than at any other time.

If you’re caught in a rut, doing things the old way, the tools are out there to work in new ways and start to change the culture of your team.

–Steven Sinofsky @stevesi

This article originally appeared on Re/code.

Written by Steven Sinofsky

April 10, 2014 at 6:00 pm

Posted in posts


Hiring for a job you never did or can’t do

One of the most difficult stages in growing your own skillset is when you have to hire someone for a job you can’t actually do yourself. Whether you’re a founder of a new company, or just growing a company or team, at some point the skills needed for a growing organization exceed your own experience.

Admitting that you don’t really have the skills the business requires is the first, and most difficult, step. This is especially true as an engineer, where there’s a tendency to think we can just figure things out. It is not uncommon to go through a thought process that basically boils down to: coding must be the hardest job, so all the other jobs can be done by someone with coding skills.

Fight the fear, let go of control, and make moves towards a well-rounded organization.

Nice try.

If you’ve ever tried some simple home repairs or paint touch-up, you know this logic doesn’t work: you only need to spend an hour watching some cable TV DIY show to see how the people with skills are always unraveling the messes created by those who thought they could improvise. The software equivalent can sometimes be seen when a developer attempts to design the user interaction flow in a paint program or PowerPoint. Sure it can be done by a developer (and there are talented developers who can of course do it all), but he or she can quickly reach their limits, and so will the user interaction.

Open up your engineer’s mind to embrace the truth that every other discipline or function you will ever collaborate with has a deep set of skills and experiences that you lack. Relative to engineering, the “softer” skills often pose the biggest eye-opening surprise to engineers. Until you’ve seen the magic worked by those skilled in marketing, communications, sales, business development or a host of other disciplines, you might not appreciate the levels of success you can achieve by turning over the task to trained professionals.

I have seen this first-hand many times. Most recently it occurred while working with the a16z portfolio company Local Motion when it came time to do some of the early announcements around the fleet-management company. The co-founders possess engineering and design backgrounds from elite institutions, and built the product themselves, hardware and software. Both are experienced mountaineers, and so they have this engrained sense of self-sufficiency, which is valuable both for building companies and scaling mountains.

When it came time to work with the industry press to tell the story of their company, in some ways they had to suppress their self-sufficient instincts. The founders were self-aware enough to know they had not done this before and agreed to enlist the help of those who have depth and breadth of experience. The pros showed up and spent time learning the team, the business, and the story (professionals do that!). They came back with a plan, roles, responsibilities, and defined what success would look like. It was amazing to watch how the founders absorbed and learned at each step all those things which they had not personally experienced before.

This sounds easy and pretty obvious. But if you put yourself in their shoes, you know that this is not just bringing in a hired gun to get some press; rather, this is hiring a new member of the team and a new founding member of the family. What is vital to keep in mind is that this kind of work is as important as every line of code and every circuit board. The lesson of letting go and letting professionals do their work is clear: delegating is never easy for most, but it is spectacularly difficult when you don’t know what the other person is going to do and when the outcome matters a whole lot. Still, you need to let the specialists into your carefully engineered world.

There are moments of terror. You’re watching people talk about your product using tools and techniques you are unfamiliar with to connect with your potential customers. Even though it is a product, you are apt to feel as though this is a discussion about yourself. You question every step. You doubt the skills of the person you hired. You are certain everything will go wrong.

It is at that point—right when you start to panic and think that unless you do this yourself things will fail—that you need to let go. You need to say “yes” to hiring a person to do the work, and then let them do their best work.

Just keep reminding yourself that you’ve never done the job before, and that your role is to hire someone who knows more than you. Even when you’ve wrapped your head around that, there are a few ways you can get tripped up. The key to success is avoiding these mistakes when you are in the hiring process:

  1. Asking candidates to teach you. A good candidate will of course know more than you. Their interview is not a time for them to teach you what they do for a living. The interview is for you to learn the specifics of a given candidate, not the job function. The best bet is to do your homework. If you’re hiring your first sales leader then use your network and talk to some subject matter experts and learn the steps of the role ahead of time.
  2. Expecting a candidate to know or create your strategy. It is fine to expect engineering candidates to know the tools and techniques you use. You wouldn’t expect an engineering candidate to know your unannounced product, of course. It is equally challenging to expect a new marketing person to have a marketing plan for your product. Even if you ask them to brainstorm for hours, keep in mind the inputs into the process—they only know the specifics you have provided them. For example, don’t expect a marketing candidate to magically come up with the right pricing strategy for your product without a chance to really dive in. On the other hand, you can expect a candidate to walk you through in extreme detail their most recent work on a similar topic. You can get to their thought process and how they worked through the details of the problem domain.
  3. Interviewing too many folks. You will always hear stories about the best hire ever after seeing 100 people. Those stories are legendary. On the other hand, you rarely hear the stories that start with “we could not find the perfect QA leader so we waited and waited until we had a quality crisis.” Yet these latter stories happen far too often. Again, you should not compromise, but if after bringing a dozen or more people through a process you are still searching, consider the patterns you’re seeing and why this is happening. A good practice if you’ve not found the right hire after going through a lot of folks is to bring in a new point of view: consider recruiting the help of a search firm, a board member, or a subject matter advisor to get you over the first hire in a new job function.

These add up to the quest for the perfect hire. When it comes to engineering you give yourself a lot of leeway because you feel you can direct a less experienced person and because you can gauge more easily what they know and don’t know. When it comes to other roles you become more reluctant to let go of a dream candidate. This almost always nets out costing you time, and in a new effort time is money. That isn’t saying to settle, but it is saying to use the same techniques of approximation you naturally use when hiring people in your comfort zone.

The most difficult part of hiring for a job you don’t know first-hand is the human side. Every growing organization needs diversity because every product and service is used by a diverse group of people. The different job functions often bring with them diversity of personality types that add to the challenges of hiring. The highly analytical developer looking to hire a strong qualitative thinker for marketing, or the highly empathetic sales leader, is often going to face a challenge just making the human connection.

This human connection is a two-way street. Embrace it. Recognize the leap each of you are taking. Realize that the interpersonal skills required to call on customers every day are just different than the interpersonal skills used when hacking. The challenge of making that human connection is one for the person doing the hiring to overcome. Often that’s the biggest opportunity for personal growth when hiring people to do a job you can’t.

 

–Steven Sinofsky (@stevesi)

 

Note: A form of this post originally appeared on FastCo.

Written by Steven Sinofsky

April 3, 2014 at 9:30 am

Posted in posts
