Learning by Shipping

products, development, management…

Archive for June 2014

Apps: Shrapnel v. Bloatware

Much is being said lately about the trend to unbundle capabilities for the web and apps. Is this a new trend, a pendulum, or another stage in the evolution of providing software solutions for work and life? Are we going to learn what some would say are lessons from a past generation of software and avoid bloatware? Perhaps we will relive some of the experiences from that era and our phones and tablets will be littered with app shrapnel as our PCs once were?

My own personal experience in product choices is marked by a near-constant tension over not just bundle v. unbundle from a product perspective, but also from a business perspective. Whether on development tools, Office, Windows, or internet services, I’ve experienced the unbundle <> bundle dynamic. I’ve bundled, unbundled, and had the “internal” debates about what to do when, and what went well or not. If you’re interested in an early debate about bundling Office you can see the Harvard case study on the choice of “best of breed v. suite” in Finding the Suite Spot ($).

The Pendulum

This HBR article does a good job of bringing forth some of the history and describing the challenges of positioning unbundle/bundle as both a binary choice and a pendulum or Krebs-like cycle of resource conservation. Marc Andreessen does a great job in these two tweetstorms of detailing the bundle/unbundle cycle on the internet and the computer history we both grew up with (http://tweetstorm.io/user/pmarca/481554165454209027 and http://tweetstorm.io/user/pmarca/481739410895941632).

There’s one maxim in business that drives so much of the back and forth or pendulum behavior we tend to see, which is that most strategies have a complementary approach (vertical v. horizontal, direct v. indirect, integrate v. distinct, first v. third-party, product org v. discipline org, quantitative v. qualitative performance evaluation, hack v. plan, etc.). So in business, depending on your roots, your history, and most importantly the context you find yourself in, you are going down a path of one or more of these attributes.

Over time your competition tends to pick you apart the other way or ways. Equally likely, your ecosystem builds up around you innovating in parts where you are weaker, gaining strength, and showing off new approaches to product or market. Certainly, if you’re a new company entering an established market you will not just copy the approach of the incumbent which is why new products seem to be at the other end of one of these spectrums.

Then as you get in trouble you look around and try to figure out what to do. There’s a good chance the organization will double down on the approach that has always worked—after all as Christensen says, that is the natural energy force in an organization. That happens until a big moment of change (a major competitive success, leadership change, etc.) and then you change approaches. More often than not, your choice is to do the thing you weren’t doing before. If you’re around in the workforce long enough, you start to see things as a series of these evolutionary steps.

This is business, context is everything. There’s never a right answer in absolute, only a right answer given the context.

The moments of change, of breaking the cycle or swinging back the other way, are the moments that unleash significant improvements in the work, the product, or the workplace.

History and Customers

As consumers we adopt new technologies without realizing or thinking about whether they are bundled or unbundled, and our choices and selections for one or other are highly dependent on the context at the time. There are times when bundling is essential to the distribution of technology, just as there are times when unbundling brings with it more choice, flexibility, and opportunity. Obviously the same holds for businesses buying products, only businesses have purchasing power that can make bundled things appear unbundled or vice versa.
It is worth considering a few tech examples:

  • Autos began with minimal electronics, followed by optional electronics, then increasingly elaborate integrated electronics and many now think that smartphones will be the best device for in-car electronics/apps (for example the BMW i series).
  • LinkedIn began as a network for professionals to list their credentials and connect to others professionally. Recently it has bundled more and more content-based functionality.
  • Mobile telephony used to have distinct local, long distance, text and then data plans, which have now been bundled into all-you-can-consume multi-device plans.
  • Word processing used to have optional spell-checking and mail merge which was then bundled into single products which were then subsequently bundled into suites and also now bundle cloud services. Similarly, financial spreadsheets, data analysis, and charting were previously distinct efforts that are now bundled. Today we are seeing new tools that have different feature sets and approaches, representing some unbundling and some bundling.
  • Operating systems were once highly hardware dependent, then abstracted from hardware but with optional graphical interfaces, followed by a period of bundling of OS+Graphics, followed by a bundling of OS, graphical interface, and hardware in a single package. Today with services we’re seeing different combinations of bundling and unbundling innovations.
  • Microprocessors have been on a fairly continuous bundling effort relative to peripherals, graphics, and even storage.
  • Modern smartphones are a wonder of bundling, first at the hardware level (SoC packaging) followed by hardware+software, then through all the devices that were previously distinct (GPS, still camera, video camera, pedometer, game controller, USB storage, and more).

There are countless examples depending on what level in the full consumer offering is being considered (i.e. product, price, place, promotion). Considering just these examples, one can easily see the positives and potential pitfalls of any of these.

Yet in looking at these examples and others, one can make a few observations about how customers and teams approach bundling choices for products and services:

  • People like distinct products when exploring new capabilities, and product teams like building single-purpose tools early in a product’s lifecycle, out of both focus and necessity/resources.
  • People like it when their favorite product adds features that previously required a separate product, especially when their favorite product is growing in usage. Product teams love to add more features to existing products when those features map to obvious needs.
  • People have some threshold for when an integrated product turns into an overwhelming product, but that “line in the sand” is impossible to define a priori and depends a great deal on how products are evolving around your product. Mobile phone plans today are great, but many are very unhappy with Cable TV bundles.
  • Competition can come from a bundle that you were previously not considering, or competition can come from unbundling the product you make.
  • Product managers often reach a point where they can no longer solve the problem of adding new features while seeing them get used and also getting credit for innovating.
  • Macro factors can radically alter your own views of what could/should be bundled. If your business does not have a software component and your competitors add one, attempting to bundle that functionality could be quite challenging (technically, organizationally). If the platform you target (autos, spectrum, screens) undergoes a major change in capability then so too does your view of bundling or unbundling.

These examples and observations make one thing perfectly clear: whether to bundle or unbundle features depends a great deal on context and customer scenarios and so the choices require a great deal of product management thought. The path to bundle or unbundle is not linear, predictable, or reactionary but a genuine opportunity and need for solid product thought.

Strategic questions

On the one hand, considering whether to bundle or unbundle innovations might just be “do what we can that is differentiated”. In practice there are some key strategy questions that come up time and time again when talking to product folks.

  • Discoverability. The most critical strategic question to bundle or unbundle is whether the new work will be discoverable by intended customers. In a new product, the early waves of innovative features often make sense bundled. Over time, just responding to customers means you’ll be bundling in new capabilities (whether organic or competitive).
  • Usability. When faced with a new feature or business approach, the usability of this approach is a key factor in your choice. If you’re unable to develop a user experience that permits successful execution of the desired outcome, then it doesn’t really matter whether you’re bundled or unbundled.
  • Depth. When making the choice to bundle or unbundle you have to think through how much you plan on innovating in the spaces. If you’re setting yourself up for a long-term head-to-head on depth versus believing you are “checking a box,” you have different choices. Incumbents often view the best path to fending off a disruptive unbundled feature as adding a checkbox to compete (to avoid the trauma of a major change in approach). Marketing often has an urgency that drives a need for market response, and that can be represented as an unbundled “add-on that no one cares about” or “a checkbox that can be communicated” — that might sound cynical until you’ve been through a sales cycle losing out to a “feature as a product”.
  • Business economics. If you charge directly for your product or service (or freemium), then there will be a strong incentive to bundle more and more into your existing offering. Sales will generally prefer to add more features at the current price. Marketing will potentially advocate for a new pricing level to increase revenue. If you choose to unbundle and develop a new product, side-by-side or companion, then you’ll need to consider what your attach rate might be. A bundled solution essentially sees a 100% attach rate to your existing product whereas a whole new product brings with it the need to generate demand and subsequent purchase or usage. An advertising-based service will see increased surface area for an unbundled solution but will also dilute usage. A web-based service allows for cross-linking and easy connection between two different properties, but apps will require separate downloads and minimal cross-app connections. (A toy model of this attach-rate tradeoff follows this list.)
  • Usage economics. It might sound strange separating out business from usage, since especially in a SaaS world they are the same thing. In practice, if your revenue is tied to usage directly (page views, transactions, etc.) then your design needs to factor in how you measure and drive usage of the features, bundled or unbundled. If your economics are not tied directly to usage you will have more strategic latitude to consider how your offering plays out bundled v. unbundled (assuming your boss lets you keep working on something no one uses).
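To make the attach-rate tradeoff concrete, here is a toy model. Every number in it (base size, price uplift, attach rate, standalone price) is an illustrative assumption, not a figure from any real product:

```python
# Toy model of bundled vs. unbundled revenue. All numbers are
# illustrative assumptions, not figures from any real product.
EXISTING_USERS = 1_000_000  # installed base of the existing product

def bundled_revenue(price_uplift=2.0):
    # Bundling: the feature ships to everyone, so it effectively
    # "attaches" to 100% of the base; revenue shows up as a small
    # price uplift (or reduced churn) across all users.
    return EXISTING_USERS * price_uplift

def unbundled_revenue(attach_rate=0.05, price=15.0):
    # Unbundling: a separate product must generate its own demand;
    # only a fraction of the base discovers, installs, and pays.
    return EXISTING_USERS * attach_rate * price

print(f"bundled:   ${bundled_revenue():,.0f}")    # bundled:   $2,000,000
print(f"unbundled: ${unbundled_revenue():,.0f}")  # unbundled: $750,000
```

Even with a much higher standalone price, a modest attach rate can leave the unbundled product behind the bundle; the crossover point is worth computing before committing either way.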

Product management approach

Should you add that new feature or capability to your existing product or should you create a new destination (app, site)? Should you break out a feature because unbundling is the new normal or will that just break everything? Those are the core questions any PM faces as a product grows.

One tip: do not claim that one approach (bundle v. unbundle) is good for users and the other approach is only good for business. In other words, bundle v. unbundle cannot be distilled down to pro-user or anti-user, or more importantly marketing v. product. The best product people know that context is everything and that positioning a choice as A against B is counterproductive—everyone is on the same team and has the same broad goals. As difficult as it is, working through these questions with as much dialog as possible and as much “walk in the other’s shoes” as possible is absolutely critical.

There are many natural forces at play that will drive one way or another.

For example, most organic product development will tend to expand the existing product as it builds on the infrastructure and momentum already present.

Most new acquisitions will tend towards acquiring unbundled solutions, aka competition, though in the enterprise space one can expect significant calls to integrate even disparate technologies.

Part of being a good PM is to step back and go through a thoughtful process about whether to bundle or unbundle new capabilities. The following are some design choices.

  • Advertising new features in proportion to expected usage. There’s a general tendency to advertise a new feature in the UX in an excessively prominent manner. You want people to know you fixed or added a feature. At the unbundle extreme this means a whole new app and a trend toward shrapnel. At the bundle extreme this means a big UX push to drive you to the new thing. The most critical choice is really making sure that you are designing the access to the feature to be in relative proportion to how much you expect your customer base to use something.
  • Plan for “n+1” in all experience choices. As you make the choice to bundle or unbundle, know ahead of time that this will not be the first place you make this choice. If you’re adding a new app today then chances are that will become the way you solve things down the road. If you’re adding new UX access to a feature then plan on more depth in that feature or more peer features. Is the choice you are making scalable for the growth in creativity and innovation you expect?
  • Integrate or connect in one direction, not both. Whether you bundle or unbundle, there will be a relentless push to promote the connection between elements of the product or service: demo flows, top-level UX, even deep linking between apps. At the extreme, if you bundle n items you can end up with every item connected to every other item in both directions (a quick count following this list shows how fast that grows). This is incredibly common in line-of-business apps/modules.
  • Bundle and innovate, don’t bundle and deprecate. If you make a choice to bundle a capability into your mainline effort, do not bundle it to make it go away. Bundle it and think of it as just as important as other things you do. This dynamic appears when your competition does something you don’t like so you hope to have a checkbox and make the competitor go away. This never happens.
  • Designing for good enough leaves you open to disruption. Closely related to deprecating while bundling is the idea that a “tie is a win”. Once you’re established you often think that you can continue to win against a competitor with an integrated implementation that is “good enough”. That might work in short-term marketing but over time, if the area is important you’ll lose.
  • Expect hardware to be relentlessly bundled. If you connect to hardware in any way, then you’ll be faced with a relentless march towards bundling. Hardware naturally bundles because of the economics of manufacturing, the surplus of transistors, and the need to reduce power and surface area. Never bet on hardware or peripherals staying unbundled for long.
  • Expanding software depth is easy, but breadth often adds more value. Engineers and product managers love to round out features, add more depth, more customization, and more incremental improvements. This is where the customer feedback loop is really clear. In terms of growing the business and attracting new customers, expansion in breadth is almost always a better approach so long as you “bundle” features that seem natural. Over-indexing on depth, particularly early in a product life-cycle, leaves you open to a competitor that does you plus other valuable things, no matter how much you think your unbundled approach is cleaner and simpler.
  • Defined categories do not remain defined for long. In enterprise products the “category” or “magic quadrant” is everything. In practice, these very definitions are always in transition. Be on the lookout for being redefined by an action of bundling or unbundling.
  • Assume sales and marketing will prefer new capability to be bundled, or maybe not. Finally to highlight how contextual this is, there is no default as to how outbound efforts will prefer you approach the problem. It is not necessarily the opposite of what you are doing or the same as a competitor. For example, if your sales force economics are such that they are strongly connected to a single product and sales motion, it will be clear that bundling will be preferred no matter what a competitor is up to. At an extreme, even an unbundled feature will be used as a closer or a discount, particularly in the enterprise. Conversely, even if your competition is highly bundled, your own outbound efforts might be structured such that unbundling is a competitive and sales win. You just never know. Most importantly, the first reaction isn’t the way to base your approach—spend the time to engage and debate.
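On the one-direction point above, a quick count shows why all-pairs integration does not scale. This is plain arithmetic for intuition, not any particular product’s architecture:

```python
# Integration points among n bundled modules: all-pairs vs. hub-only.
# Plain arithmetic for intuition, not any product's architecture.

def all_pairs_links(n: int) -> int:
    # every module deep-links to every other module, in both directions
    return n * (n - 1)

def hub_links(n: int) -> int:
    # each module connects in one direction to a single hub/home surface
    return n - 1

for n in (3, 5, 10, 20):
    print(f"n={n:>2}: all-pairs={all_pairs_links(n):>3}, hub={hub_links(n):>2}")
# n= 3: all-pairs=  6, hub= 2
# n= 5: all-pairs= 20, hub= 4
# n=10: all-pairs= 90, hub= 9
# n=20: all-pairs=380, hub=19
```

The quadratic growth is why the “connect everything to everything” demo request becomes unmaintainable well before n gets large.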

To bundle or unbundle is a complex question that goes beyond the simplistic view that minimal design makes for good products. Take the time to engage broadly across the team and organization, and to project forward where you want to be, as these are some of the most critical design choices you will make.

–Steven Sinofsky @stevesi

 

Written by Steven Sinofsky

June 28, 2014 at 3:00 pm

Posted in posts


Tanium Magic

Lightning doesn’t often strike twice, but in the case of the father and son team of David and Orion Hindawi, founders of Tanium, Inc., that’s exactly what has happened. Tanium is a prime example of a modern enterprise software company—solving a new generation of problems using skills and experience gained from being successful founders in the previous generation.

Forming the company

David Hindawi, a PhD in Operations Research from UC Berkeley, is an entrepreneur who led the creation of several successful companies through the earliest days of the PC era. His early efforts focused on getting PCs connected to the “net” and keeping them running smoothly.

In 1997, David teamed up with his son Orion, then an undergraduate at UC Berkeley, to form BigFix. BigFix solved the problem of communicating with all the end-points (PCs, servers, virtual machines, and more) on enterprise networks to gather configuration data and deploy product updates. BigFix was a remarkable product for its time, routinely scaling to 100,000 end-points. In 2010, IBM acquired BigFix and integrated it into the Tivoli Software portfolio, marking a successful exit.

Some might have been content to rest on their collective laurels having invented the technology, built a company, and scaled a business to the most elite of enterprise success stories. Instead, David, Orion and the key architects of BigFix had an even bigger idea.

Forming Tanium came about as the team reflected on the shortcomings of existing products, BigFix included. “We recognized that enterprises needed endpoint control that was much faster than they could get with existing tools, and challenged ourselves to leapfrog the state of the art, including BigFix, where basic management queries could take days,” Orion recounted. “We knew that nothing short of a 10,000 times speed improvement over the state of the art at the time would solve the problem, and we needed to fundamentally change the paradigm of systems management and end-point security to accomplish that. We are lucky to have one of the few engineering teams in enterprise management who are smart and ambitious enough to do that.”

The team, mostly members of the original BigFix engineering group and all experts with years of experience in large enterprise management, worked in their Berkeley, CA offices for almost two years before the first customers saw the early results of their new product. When seeing the product in action, it was clear to early customers that the team had in fact built a better mousetrap. Tanium was born.

Meeting Tanium @ a16z

When Orion first came to Andreessen Horowitz to meet us and introduce Tanium, we had no idea what a surprise we were going to see. Collectively we are many old hands at systems management and security. Many folks at a16z share the experience of having built Opsware, and that experience, along with my own at Microsoft, makes for an informed, and perhaps tough, audience.

Orion popped open his laptop, clicked a bookmark and navigated to Tanium’s web-based “console”. At the top of the screen, we saw a single edit control like you’d see for a search engine. He started typing in natural language questions such as “show computers where CPU > 75%” and “show computers with a process named WINWORD.EXE”. Within seconds, just like using search, a list of computers scrolled by as though it was just an existing spreadsheet or report. At this point we reached the only reasonable conclusion—Orion was showing us a simulation of the product they hoped to build.

After all, we were all quite familiar with the state of the art for this type of telemetry (BigFix in particular represented the state of the art) and we knew that what we were seeing was just not possible.

But, the demonstration was not a simulation or edited screen capture. In fact, Tanium was running on a full scale deployment of thousands of end-points. This wasn’t even a demo scenario, but a live, production deployment—the magic of Tanium. As we learned more about Tanium and how it easily scales to 500,000 end-points (not theoretically, but in practice) and the breadth of capabilities, we were more than intrigued. We were determined to do what we could to invest in David, Orion, and team.

Redefining State of the Art

In enterprises, one team is generally responsible for securing end-points, while another is responsible for managing them (systems management). Typically, each team uses its own tools, and each is independently struggling to keep pace with modern network security threats and the scale of modern networks.

Today’s IT Pros on both security and management teams know the types of information they need from their network. With current tools these questions require careful planning, significant infrastructure, and a fine balance between what IT needs to know and the cost to the end user who is working on the computers that are being queried – if you get it wrong, you can cause slow logons and sluggish performance at inconvenient times. However, to effectively manage and secure networks and provide assurance of compliance with government and industry regulations, IT Pros absolutely require information such as hardware configuration, software inventory, network usage, patch and update status, and more. In addition, today’s socially engineered security risks are often seemingly simple combinations of running programs, files or attachments on the system, and a few other clues. An IT Pro walking up to a PC or Mac could easily obtain all of this information, but for all practical purposes it is impossible for them to gather that data from the thousands of end-points they are responsible for with any level of ease or timeliness.

Getting that data at scale is typically hard and slow because almost every systems management tool uses a classic hub (servers) and spoke (end-points) architecture. IT Pros deploy multiple servers running on network segments with high-end databases and significant networking hardware combined with fairly elaborate end-point runtimes. Even when this state of the art deployment is carefully tuned, the best case at very large scales can be 3 days to “compute” the answer to critical operational questions, assuming you knew ahead of time you were going to ask those questions. By that time the information is out of date, and the problem you were thinking about has probably changed. As a result, most IT Pros know that, best case, the data is approximate and, worst case, just worthless. For mission critical problems, such as compliance with HIPAA (healthcare) or PCI (electronic payment) regulations, this is more than just inconvenient for IT; it can cause a painful failure with board-level visibility.

The state of the art for security is all about building stronger and taller walls between the enterprise network and the internet. We’re familiar with these approaches across the basics of firewalls, more sophisticated security appliances and adaptive architectures, and of course the typical security suites that run on end-points. Unfortunately, the bad guys are wise to that game, and modern threats are created anticipating that these protections are in place—in many cases, the bad guys actually “QA” their attacks against the systems enterprises use before they release them. In addition, today’s malware is targeted at particular organizations, and is often put in place by a series of seemingly benign or undetectable actions. Malware, a bot, or a backdoor makes its way onto the network, leaving behind a series of benign clues—a running process, a changed file, a memory signature, or a specific network packet. It is only taken together that a pattern emerges. It is only after the fact, or with an IOC (indicator of compromise) in hand, that IT Pros can potentially track down end-points that have been compromised. Unfortunately, IT is literally swamped by IOCs to investigate, there are no effective tools that support this wide range of questions, and even if there were, the state of the art would give answers in days, long after the damage was done.

Even with these challenges, both of these state of the art approaches have their place in a modern network. It would be irresponsible to run a network without basic asset management or network firewalls and end-point protection such as anti-virus. Unfortunately, for the vast majority of both threats and systems management, the needs of IT Pros are far more dynamic and complex than existing systems can address. This is the opportunity where Tanium adds unique value to the tools of the modern IT and Security professional.

At a16z, we love the opportunity to partner with enterprise companies that are either working to radically improve the way a given IT need is met with software or transforming the IT landscape by re-creating or re-defining the traditional categories with unique software. Tanium is magical because it is transformative across both of those measures.

Innovating Tanium

In practice, the Tanium team accomplished nothing short of a complete rethinking of how IT Pros manage, secure, and maintain the end-points in their network—every node on the network can now be interrogated, managed, updated, and secured, instantly from a browser. Literally, you can ask almost anything of an end-point from basics such as configuration, patch status, software inventory compliance, performance, reliability measures, telemetry, network activity, files, and more (basically anything you can ask of a running system) and get answers back in seconds. Not only can you ask questions, but you can take actions as well—distribute and install updates, shut down processes or executables, remove or quarantine files, and so on. All of this happens in seconds, across your entire network of end-points, across LAN segments and the WAN, from branch offices to headquarters to the data center.

Orion walked us through the magic of Tanium. It became clear very quickly that David, Orion and team have invented a completely new way to think about managing and securing a network of computers. The magic of Tanium is built out of four innovative technology pillars:

  1. Runtime. The Tanium runtime builds on the end-point management lessons of BigFix. The runtime serves as the platform for asking the end-point questions in the scripting language of your choice (VBScript, PowerShell, WMI, Python, Unix shell, and most any other language), packaging up the answers, and getting them to a single server/VM that coordinates the activities. The runtime also provides actions, allowing you to make changes across your entire network, instantly. The end-point runtime is a couple megabytes, takes almost no CPU or RAM, and incurs nearly imperceptible network usage.
  2. LP2P Networking: End-points secured by Tanium do not drive up costly WAN traffic but instead communicate between end-points on the local area network. Expensive WAN load is vastly reduced because rather than all end-points trying to reach a single data center across the WAN, answers and actions are coordinated across an incredibly efficient linear peer-to-peer (LP2P) architecture—an innovative hybrid of mesh and peer-to-peer concepts designed and validated for the enterprise. LP2P is self-healing and architected for fault tolerance, transient end-points, and global WAN segments connected in a typical manner. (A toy sketch of this idea follows the list.)
  3. Natural Language. The interface to Tanium is through a simple text box where you can use natural language to ask questions of the entire set of end-points. Just like using web search, each question gives you suggestions for follow up questions, refinements, and ways to improve your queries. You use natural language questions to generate tables, charts, time series, and other representations of your near real-time network status—instantly.
  4. Security. The entire Tanium platform was of course architected from the ground up to be secure enough for the largest enterprise and federal networks – Tanium affords IT Pros incredible power and flexibility in managing and securing end-points, and they recognize the need to ensure that power stays in the right hands. As a result, all traffic is FIPS level secured, actions are controlled and validated by signed certificates, and administrators have fine-grained control over the types of queries and actions permitted by different users within IT.
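To build intuition for the LP2P pillar, here is a deliberately toy sketch. It is my own simplification for illustration, not Tanium’s actual protocol: end-points on a LAN segment hand a running tally from neighbor to neighbor, so a single small answer crosses the WAN instead of thousands of individual reports.

```python
# Toy illustration of linear peer-to-peer (LP2P) aggregation.
# A simplification for intuition only, not Tanium's actual protocol.
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    os_version: str

def answer(ep: Endpoint, question: str) -> dict:
    # each end-point evaluates the question locally, which is cheap
    return {ep.os_version: 1} if question == "os_version" else {}

def lp2p_query(segment: list, question: str) -> dict:
    # the tally is handed neighbor-to-neighbor along the LAN chain;
    # only the final end-point sends one small message across the WAN
    tally = {}
    for ep in segment:
        for key, count in answer(ep, question).items():
            tally[key] = tally.get(key, 0) + count
    return tally  # the single WAN-bound payload

segment = [Endpoint(f"pc{i}", "6.1" if i % 3 else "6.3") for i in range(9)]
print(lp2p_query(segment, "os_version"))  # {'6.3': 3, '6.1': 6}
```

The shape matters: each hop is cheap LAN traffic and the aggregate stays tiny, which is consistent with the post’s description of answers returning in seconds across hundreds of thousands of end-points.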

If you’re running existing state of the art tools for managing and securing your end-points, you have a fixed set of diagnostic questions that you routinely ask and then store the answers in a database for later analysis. Even if it’s a simple question like what version of OS software your computers are running, it will take a few days or more to get answers. If you have a crisis requiring new information, you likely push out an emergency logon script or dreaded background process to add a new question to the list of slowly collected answers, and days later you know the approximate answer.

As a result of the innovations above, Tanium completely upends the thinking about how this should work. By analogy, if you think about the current state of the art as a printed set of classic encyclopedias then Tanium is like having the entire internet at your disposal through a search engine. Rather than a set of fixed questions and answers, you use Tanium to explore your end-points. When new security threats arise you can immediately explore your risk by using any telemetry to diagnose your risk and then using any mechanism to take corrective actions—instantly.

A top of mind example for all of us is the outbreak of Heartbleed. As soon as your operations center received notice of this vulnerability, there was one simple question: “what variants and versions of OpenSSL are we running across all servers and VMs?” Almost no management and inventory system would have this readily available. Many would have first relied on what was believed to be the “standard” images, but later would find out that isn’t enough. With Tanium, you just ask a question in natural language and within seconds you can have any level of detail required on the servers and VMs running OpenSSL. You can then shut those servers down, deploy updates, or monitor actions—instantly.

Identifying and securing end-points for compliance with regulations, software licensing, or corporate policy is equally simple. When talking to Orion about Tanium, I searched my own experience for what I thought was a trick question. I wanted to know “how many end-points had an attached USB memory stick and had written to it recently” (a potential information leak, compliance issue, or malware vector all in one simple and common operation). Once again Tanium’s magic delivered an answer from a natural language query in just a few seconds for thousands of computers.

In addition to all of this, Tanium is also a true platform. IT Pros can utilize mature REST, SOAP, and syslog APIs to connect the results of Tanium queries to their favorite big data destination and develop time series models of their end-points, and mine the data for patterns. Because the Tanium runtime has such a minimal impact it is possible to collect thousands of independent data points continuously from hundreds of thousands of end-points, feeding the predictive analytics and big data systems that enterprises are building today with extremely valuable data. This type of analysis allows for finding points in time when the network changed, identifying malware, bots, and other exploits that we all know escape traditional firewalls and anti-virus. Using the platform, IT can also create tailored dashboards and custom actions that enable monitoring and guarantee compliance of end-points with standards.
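As a sketch of what wiring Tanium results into such a pipeline could look like, consider the snippet below. The host, URL paths, field names, and auth header are hypothetical placeholders; the post confirms that REST APIs exist, not what shape they take:

```python
# Hypothetical sketch: pull results of a saved question over REST and
# append them to a local time-series log. The host, paths, fields, and
# auth header are illustrative placeholders, not a documented API.
import json
import urllib.request

API_BASE = "https://tanium.example.internal/api"  # placeholder host
HEADERS = {"Authorization": "Bearer PLACEHOLDER_TOKEN"}  # placeholder auth

def fetch_results(question_id: int) -> dict:
    req = urllib.request.Request(
        f"{API_BASE}/questions/{question_id}/results", headers=HEADERS)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def append_timeseries(rows: list, path: str = "endpoint_timeseries.jsonl"):
    # one JSON line per end-point per poll keeps downstream mining simple
    with open(path, "a") as sink:
        for row in rows:
            sink.write(json.dumps(row) + "\n")

# usage against a real deployment (with real paths and credentials):
#   results = fetch_results(42)
#   append_timeseries(results["rows"])
```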

Tanium and a16z

I could go on and on about the magic of Tanium that David, Orion, and the amazing team created. In fact when we talk about Tanium we describe it as an entrepreneur trifecta. First, David and Orion are experienced and successful entrepreneurs. Second, Tanium is a product that builds on innovative and inventive technology that could only come about from a team with years of experience and a depth of understanding of the enterprise. And third, Tanium is already a successful and profitable company with dozens of customers in massive, mission-critical and global deployments.

With this incredible story, Andreessen Horowitz could not be more excited to be leading an investment in Tanium. I’m personally super excited to be joining the Tanium Board where I will work closely with David, Orion, and the team.

–Steven Sinofsky (@stevesi, steven@a16z.com)

This post is also on a16z.

Written by Steven Sinofsky

June 22, 2014 at 3:30 pm

Posted in a16z, posts


#codecon and reflecting on generational changes

Attending the <code/conference> (#codecon) this past week turned out to be a remarkable experience, even more remarkable than I expected. The generational shift in our computing experience from desktop to mobile, from software to services, and from hundreds of millions to trillions was on display through the interviews with a dozen industry CEOs.

This post will explore this generational change through the speakers at the conference. Before diving into the details of each session, we will explore this change and the implicit context.

Generational Change

Reflecting on the interviews and demonstrations as well as the “lobby chatter” is a key part of learning by attending. I’ve always viewed this conference and its predecessor, the D Conference, as the most relevant conferences for learning about the strategic drivers of our industry. You can read my report from last year here. Writing these reports is part of the learning for me, and reading the old reports lets me checkpoint my own learning and journey.

If you move beyond the insights from any single speaker or the announcements at the event (all were widely reported by re/code and others and, new this year, by re/code partner CNBC), one theme just keeps coming back to me—the vast difference in tone and content between the incumbents and the challengers, between legacy and disruptors, between the old guard and the new, or whatever labels you want to use. We talk all the time about the transition of our industry from one era to another (and don’t forget the term “post-PC” was first used in this very forum) and the conference provides a microcosm of these transitions, expressed through the leaders living them.

There is a vast difference in tone and content between the incumbents and the challengers, between legacy and disruptors, between the old guard and the new.

The transition is in full force. This does not mean by definition that all existing companies will lose and only new companies will win. Quite the contrary, the fact that these changes are now visible to all makes the creation, purchase, and use of new products and technologies evidence of the transition, as well as opportunity to create new plans and adjust. The mobile internet is causing the transition but also making the communication of that very transition much more transparent, which is unlike the progressive unveiling that characterized the mainframe to mini to PC transition.

Are the new companies doing enough to transition customers as well as their own business to new paradigms? How much should new companies bridge from existing solutions or should they expect a wholesale change from customers? Is there an understanding of the existing complexities of the real world?

Are the incumbents changing enough to build new products and business that reflect the new generation? Are they trying too much to “thread the needle” and incrementally step to a new context by maintaining status quo or “repotting the plants”? Is there an understanding of the complexities of existing solutions?

The conference puts this "generational" change out there for us to experience through the always challenging, yet consistently even-handed, questioning (interrogation) from Walt and Kara (and a great addition this year were interviews featuring seasoned members of the re/code team).

Context (is everything in business)

The attendees (in the audience) are people who have worked in the industry, oftentimes since the earliest days. The interviewers are professionals who cover the industry and the subjects deeply. It is hard to imagine creating a more informed or tougher environment. That’s the challenge.

Yet, industry leaders both line up and are obliged to appear (for the most part). Because the environment is so challenging and widely covered, leaders gain a great deal of credibility by standing up to the challenge.

Leaders gain a great deal of credibility by standing up to the challenge of appearing.

The conference takes place the same time every year, whether a company has something to announce or not. For example, last year attendees were frustrated because Apple’s Tim Cook did not announce anything. This is an unfair way to look at the “performance” of a participant. This conference has an amazing audience, but it is also an “uncontrolled” environment, so announcing a new product carries real risk along with potentially huge upside (Disclaimer: I’ve been part of several product announcements/interviews at this forum). Apple, along with many companies, has a tried and true approach to announcing new things, as we will see next week.

What is most interesting about the forum, however, is that the format and depth of the dialog allows for a strong “how did we get here” or “how are you wrestling with challenges” discussion. This is not a one-way speech or a forum where talking points go unchallenged. That is in a sense what separates the men from the boys so to speak.

When speakers prepare for the interview, especially at larger companies, folks in communications prepare talking points, responses to tough questions, anecdotes, and even jokes. This is a forum where this can take on “Presidential debate” levels of preparation. The challenge is that everyone in the audience, and certainly the interviewers, are well-versed in these techniques. For the presenters, all of that over-preparation cycles through your mind during the tough and unpredictable questions from the audience. This is a tough environment.

When speakers choose not to say anything of depth or the answer is clearly a prepared message, you can almost feel the energy in the room drain. There is a collective sense of a missed opportunity to learn more among attendees.

When speakers choose not to say anything of depth or the answer is clearly a prepared message, you can almost feel the energy in the room drain.

Too many people focus on CEOs evading questions about the next big deal or the features/availability of the next product. I don’t think that is the way to evaluate speakers, and in almost all cases the interviewers ask such a question one time, often make a joke, and move on.

Reporters have an obligation to ask or they look like they are not doing their job. Speakers have an obligation to acknowledge such a forward-looking, material statement and move on. There’s a big caveat to this, and it is where I wanted to share my own learning, my own journey. I believe when it comes to challenges and strategy, CEOs specifically and companies in general can and should do more to inform the dialog. The way I would say this is that if there is something out there that everyone knows to be a fact, and the speaker knows to be a fact, and everyone knows everyone knows, then talk about it. By not talking about it, the conventional wisdom becomes the reality, and the conventional wisdom is often wrong and always incomplete.

I have personally experienced this in the transition from Windows Vista to Windows 7. “Everyone” knew something was up with Vista and certainly Microsoft knew, but no one was saying anything. The result was a strong desire to know the next features of Windows, which was the only thing that folks knew to ask. It served no one to talk about the features of the next product, but it also served no one to pretend everything was going well. I missed a big opportunity and looked foolish in a very early interview I did with a (now) re/code reporter. I followed the tried and true approach of the incumbent, which is to say nothing, redirect, and so on. See several thousand words that say nothing here, from 6 years ago this week.

It turns out that in a world of global instant communication, transparency, open source, platform shifts, and so on, the story about the products, the strategy, and more can come to define efforts more than folks think. This isn’t always the case because business is a social science, but by and large what distinguishes the way the PC era evolved from the way the mobile era is evolving is a vast difference in the flow of information and pace of change. Corporate communications and the leadership approach need to adapt to this era. Recognizing this, one thing we did in the above Windows transition was to start blogging about the “why” of the product long before the release, which to this day was a unique level of transparency (and also a huge challenge).

The generational change taking place now is challenging large companies more than ever before. Technology companies are seeing their investments and assets have faster lifecycles and shorter lifespans. They should address head-on the challenges of these timescales and commitments. Business approaches are also being challenged, and everyone knows this on all sides, but not talking about the challenges means everyone just assumes how things will evolve, and collectively everyone can’t be right.

These changes are also pushing and pulling customers more than ever before. As individual consumers we invest a little bit in a new phone or tablet and maybe a gadget and services here and there. Some of these pan out and some don’t. But large companies looking to define themselves in a new era of mobility, bring your own devices, cross-organizational boundaries, and cloud need much more information and a clearer understanding of what and why things have transpired like they have. Discussing the rationale behind choices provides much more context for customers making bets and allows a much more open dialog to compare and contrast choices. This goes way beyond features and gets to the strategy, learning from the past, direction for the future–it is a fine line.

It is too easy to fall back to wanting to know the next products and features. Companies still have secrets. That’s what defines a company relative to competition. As Jeff Bezos commented recently, “sure, I’d like to know Apple’s product roadmap”. To interpret the need for openness as a public roadmap or feature list misses the point—what was missing from the incumbent perspective was a view of what has transpired over the past 5 years and, with that understanding, a view of how investments are moving forward.

The real question is whether incumbents are going to change enough, fast enough, and in a sense disrupt themselves, and do so with a clear understanding of what has transpired in the past few years. Or will they take on all the characteristics of the “Innovator’s Dilemma” and operate in the hope that incremental change will dampen the effect of big transitions, allowing them to weather the storm and return to normal?

To see how significant this transition is, I think it is best to start with Mary Meeker’s always informative “Internet Trends 2014”. The complete report is available and so is the video. There were many interesting data points—the rise of China, the conversion of feature phones to smartphones, the move of OS platforms to Silicon Valley companies, messaging, and more. One slide that sums up the transition along with the challenge showed the growth of tablets relative to PCs with the title “Tablet Units = Growing Faster Than PCs Ever Did…+52%, 2013”.

 

Tablet growth relative to PCs

Because business is a social science and because there are many ways to look at data, no doubt some will challenge this data or its conclusions. In fact, IDC just revised their tablet numbers down. Some feel that tablets are reverting to their role as “media consumption” or lightweight computing devices. That I’m writing this on a tablet (yes, one with a keyboard, but one with LTE, 10-hour battery life, weighs nothing, B5 size, etc.) provides my own anecdote about where things are heading.

This growth will change. It might sputter and then increase. There’s no doubt tablets are overtaking notebooks in terms of unit volumes. They are definitely not taking over all notebook workloads. But dismissing tablets on those grounds would be like saying the growth of email was irrelevant to word processing; it ignores the growth of the pie and the shift in total volume to the new technology. As Steve Jobs said on stage at this conference, the software will catch up. This is happening. Despite what people might think, large numbers of attendees had their tablets at the conference and they were being “productive”.

Just as mainframe companies attempted to point out the shortcomings of PCs as servers, pointing out the shortcomings of tablets is not helpful, especially as tablets continue to gain more and more features of laptops while maintaining their unique characteristics (lightweight, fanless, quality over time, connectivity, reliability, security, apps, etc.)

One more slide from Mary sets the context that dominated the divergence of incumbents and disruptors: the view of the market size of each generation of computing, “Each New Computing Cycle = >10x > Installed Base Than Previous Cycle.”

Each New Computing Cycle = >10x > Installed Base Than Previous Cycle

“More than just phones” might lump too many devices into the last data point for some wishing to make the point that things are not changing so much. Let’s be clear—many mainframes still run the most critical systems of the world (I was in a briefing with an insurance company last week that wanted to hire me because I happened to know PL/1!). Today’s laptops have massive utility that isn’t being replaced overnight and probably won’t ever be “replaced”. That’s the Innovator’s Dilemma argument that does not equip either product developers or customers to innovate and prosper during these cycle changes.

Once you get beyond the specifics of what is coming next, which no one should be obliged to answer at #codecon, the dialog that gets to the heart of what is going on is worth having. What was missed? What was learned? What was tried? What did you think of what was tried? What is being done differently? How are big technology changes being thought of in isolation? Relative to existing investments? What point of view does a company have? What led the new company to be formed? What is different about investments being made? How do customers cope with change?

These questions and how they were answered made for quite a contrast between incumbents and disruptors. If you’re interested in per-speaker reports or the full interviews for any of them, please see the re/code site. My intent is not to summarize the sessions but to reflect on the sessions through this lens of forward leaning versus backward looking.

Incumbents

The incumbents (Microsoft, Intel, Comcast, and Wal-Mart) had a common theme, which is that each faces significant challenges in the technology platforms and business models that made them wildly successful. At the same time, each in my view missed an opportunity to say how they intend to change. In a sense, each asked us to leap to a future with them in leadership but without the detail to support that assertion.

It is incredibly important for an industry to have large and healthy players operating at scale. In many ways, the startups we love serve as disruptive R&D for larger players, and a healthy M&A pipeline is critical for all, as evidenced by some of the recent mega-deals and dozens of smaller ones all aimed at the long term evolution of core products.

It is incredibly important for an industry to have large and healthy players operating at scale

Yet, many investments, particularly in hardware and manufacturing, require billions of dollars that can only be made by large companies. Incremental improvements we come to take for granted such as doubling of capacity, improved batteries, thinner devices, more pixels, massive data centers, and so on can only come from huge scale and well-functioning large companies.

At the same time, one look at Meeker’s slide above and one can’t help but notice that these large companies come to define the cycles she represents. Is that a convenient way we recall changes or were strategic changes part of a causal relationship? Don’t be so quick to judge. There’s a significant amount of subtlety and nuance.

Let’s look at some of the specific speakers.

Microsoft’s Satya Nadella and Intel’s Brian Krzanich both sit in the hot seat (the red chairs that define the #codecon set) with the same question, so it is worth considering them together—what happened with respect to mobile and tablets. Satya talked about wishing to have taken the bet to build hardware all the way, sooner. Intel talked about the challenges in manufacturing at 14nm, not having the right product relative to power, and the need to do better at 10nm. Mossberg kicked off Brian’s interview with the observation that he’s using a laptop half as frequently and using ARM-based products a great deal. In a moment of candor, Brian talked about how many at Intel wished that the march towards mobility would have stopped at Ultrabooks, and that Intel lacked the right parts to do tablets, in part because many at Intel did not think tablets would break out beyond consumption. I felt Brian’s comments showed a good acknowledgement of why things didn’t happen. At the same time, collectively the view of a strategy in the near to medium term didn’t come through. In eerily similar approaches, both Intel and Microsoft looked to a future beyond phones and tablets, to an internet of things or more personal computing, as where they will see greater success. I left both of these sessions feeling there was more to be told about where things are right now and what will happen over the next year or two (again not the features but the strategy—Microsoft and tablets small and large, Intel and mobile or even Chrome and Android). It isn’t that nothing was said; it is that everyone knows where things are today, the speakers know everyone knows, and the upside to keeping things close to the vest seems minimal and equates to “go with the disruptors” at some level.

One must admit that the challenge faced by Wal-Mart’s Doug McMillon is even greater in this audience, which has few Walmart regulars (note: I shop at Walmart). In particular, many in this crowd are on the leading edge of home delivery and uber-for-everything, and so visiting stores is already a thing of the past. That said, so much of what was said about online commerce felt too much like an expected incumbent response. For example, the idea that the lines are blurring between ecommerce and retail, or that it is really hard to measure ecommerce if a person looked up an item on their mobile device before coming to the store (I wondered if there really was a metric that tried to give credit internally to the ecommerce division if someone did that). Ultimately, Doug said “physical still matters and digital makes it more valuable”. Maybe, except the last morning of the show I ordered a wall mount for the Sonos speaker we received at the show (yes, elite gifts are part of the elite show) and it beat me home. Yes, that is a luxury good and more, but to put forward the notion that ecommerce is still an add-on to physical stores seemed tricky to me.

Comcast’s Brian Roberts not only faces the challenge of cord cutters represented in the audience or the prospects of dealing with questions on net neutrality, but also just the fact that a lot of people have a lot of less than positive feelings about the products and services Comcast offers. When you look at Comcast as an incumbent and consider things like Netflix, Hulu, cord cutting, and more as the disruptive force, it is very tough to see the dialog Brian led as satisfying. My feeling was that there is a strong desire to keep everything as it is, while putting forward a notion that things are improving. There was a long demonstration of the X1 cable box. Yet in the same session, when questioned about net neutrality, Brian suggested that Netflix paying is simply a cost of doing business, just as Comcast has to pay for cable boxes. I think that they love the cable box (evidence: it seems to be an incredible headache to get CableCARDs, it is very costly to switch to TiVo, and the rent for cable boxes is pretty high). The fact that they spent 10 minutes doing a demo on the new platform seemed to indicate that—yet the platform has none of the elements of a modern platform relative to apps or openness, as was asked by an attendee. The responses to questions about net neutrality seemed to show a strong desire to avoid change while at the same time not acknowledging a changing world and the changing needs around connectivity. The overall dialog around Netflix seemed harsh to me, and it failed to consider just how much more pleasant (and modern) Netflix is as a consumer experience than the X1 experience shown. Disclaimer: I have had really significant problems with Comcast in our new place, having never used them before; this is my first time as a customer. As I have no choice for video or broadband, one could say it is challenging for me to be totally objective.

Each also stuck to revealing little, defending the status quo, and offering a view of the future that is the same but better.

Each of these CEOs and companies have enormously challenging jobs and situations. Having shareholders demanding consistent quarter by quarter results, customers who do not really want change from these service providers but seek change elsewhere, and massive organizations to change all make for the potential of no-win interviews. Yet, each also stuck to revealing little, defending the status quo, and offering a view of the future that is the same but better. My own experience and learning would offer that when facing massive disruptive challenges, engaging in the dialog serves all parties better, even though the normal school of thought for the incumbent is to double-down, stick to talking points, and only reveal challenges through the lens of opportunity.

Disruptors

Several CEOs represented the leading edge of disruption. It is super easy to be a fan of disruption and to look at all that is going well with these leaders, just as it is easy to look at all the challenges the incumbents face. At the same time, these disruptors also represent a new level of frankness and openness about what they face or have faced.

More than the great work these leaders represent, I think it is important to look at how each is communicating and participating in a dialog. One might suggest that when these leaders are under pressure or face challenges of being disrupted they will start to take on the characteristics demonstrated above. I don’t think that is the case, simply because several of these leaders have already faced (or are facing) these challenges in their business. While clearly disruptors have less to lose, it is important not to lose sight of the fact that some of these represent large public companies (not mega cap, but large) and all represent very large customer bases from consumer to enterprise.


For me, it was exciting to watch these interviews and see how these leaders took on their own challenges: heading toward the future, demonstrating a unique point of view, and engaging in a two-way dialog about where things are going.

Let’s look at some of these speakers.

Uber’s Travis Kalanick arguably runs the most used and most mission-critical service for the attendees. The love for the service runs deep. Equally deep is the love for how Uber is taking on governments over the regulation of taxis and ride sharing (along with Lyft, an a16z portfolio company). At the same time, Travis faces a lot of questions about his aggressive style and reputation. He didn’t hold back, characterizing the task ahead at Uber as “a political campaign, and the candidate is Uber and the opponent is an asshole named Taxi.” OK, probably a bit colorful. What I loved was how he embraced even the disruption of his own business. The night before, we had seen a truly autonomous car from Google; here was the CEO of Uber telling us that self-driving cars, not drivers, are the future. Considering that Uber is a marketplace for drivers, this embrace of your own disruption is great to see.

Most people expected a characteristically polite interview from Softbank’s Masayoshi Son-san, but were treated to candor and aggressiveness, though delivered in a very polite way. This is consistent with the amazing success Softbank and Yahoo BB had in Japan ten-plus years ago, bringing amazing broadband and low prices to a market easily dominated by goliaths like NTT (the most visible building from Shinjuku station is the DoCoMo tower). Son-san told the story of starting Yahoo BB with “No experience, No technology, No capital. Just anger.” This was a true disruptor story, much like Uber’s story of realigning city governments, only at a national scale. And while it is not so challenging to be candid about WiMAX now, Son-san was super clear about that failed technological approach. He was equally clear about his intention to go after broadband in the US with the same zeal he went after it in Japan.

Salesforce’s Marc Benioff and Workday’s Aneel Bhusri together offered an incredibly clear view of disruption at the enterprise-software level. If there is one interview to watch, I would suggest this one, because it has so much relevance to how software is made and brought to market, coming from two CEOs who made and brought software to market in a previous generation. These are CEOs learning from their own experience and engaging the marketplace differently as disruptors. There were many statements that are starting to seem less and less “bold” but nevertheless remain monumentally disruptive: “in a few years no one will run business software on premises”, “I run the company from a smartphone”, “if you’re going to build a cloud app you need to start from a clean sheet of paper—there’s no way around it”, “incumbents are holding on to the past and basically trying to monetize it”, “90% of the company can do all of HR on a smartphone”, and so on. Many elements of the dialog revealed the depth of the strategic and technological shift these leaders are both creating and experiencing. For example, they described competing with an incumbent like SAP, which would go to a customer, negotiate a $40M “upgrade” deal, and then leave the customer waiting two years for the latest features, versus a SaaS model where the new features just show up. Yes, there is a ton of complexity in there, and yes, it is horribly disruptive to how businesses operate, but so was the introduction of the PC and of client/server computing (upon which that $40M upgrade was based). Finally, the discussion about being in a “post-server” world resonated with me: I just don’t see it as viable for companies to keep building out their own data centers, and this session provided a lot of evidence of what these vendors are doing to make that a realistic assertion. From a format perspective I loved the adjacency of these two and wish a couple of the incumbents had been paired together.

Dropbox’s Drew Houston brought innovation, competition, and regulatory oversight into focus with his interview. This is another service that many people in the room not only use but rely on, which brings a degree of comfort but also a challenge: the audience knows a lot about the service. Not content to simply reiterate what had previously been said about the company, Drew talked about the genuine frustration of a cloud provider learning of the revelation that the NSA tapped into cloud-based services. It would have been easy to lay low, but instead he quipped that the “NSA doesn’t send a muffin basket and say welcome”.

Netflix’s Reed Hastings represents learning from disruption incredibly well, and that learning can be chronicled through his own appearances in the hot seat. Sometimes we forget that Netflix has been a public company for 12 years, to the day of this interview! For many of us it seems like ancient history that we used to get plastic discs in the mail and return them Monday morning. Netflix famously disrupted itself, and not with grace, on the path to streaming and today’s Orange is the New Black. I found the discussion looking backwards at missed opportunities and disruption absolutely fascinating. Reed talked about how the team would discuss “managing to the point of feeling like your skin crawled” and making decisions that were unbelievably difficult. Given the success right now, it is perhaps less difficult to look back at the challenges faced and mistakes made, but it was still amazing to hear this level of candor. Reed was even candid about his own recent remark that Netflix’s stock price was too high and represented euphoria. In contrast to Comcast, Reed was much clearer about how the net neutrality issues are playing out: he used a great example of Comcast trying to charge at both ends (both the consumer and the internet service) by walking through the flow of money in the system. He offered an operational view of “strong net neutrality”. Putting aside the specifics of the issue, the tone of looking forward, candor about the past, a clear point of view, and a vision of delivering new products and services along with the inherent risks and challenges come across as modern and consistent with a new style of leadership.

What comes next?

It might be too easy to read this and conclude that big companies are legacy and being disrupted while new companies do the disrupting, but that would ignore two things.

First, this is a moment in time. While some would say disruption is akin to physics and must happen, there are dominant companies that reinvent themselves. Few even recall that IBM was close to bankruptcy when it reinvented itself from one kind of dominant company into another, very different one. That reinvention played out over nearly 20 years and returned 7X the broad stock market during that time.

Second, companies that disrupt are themselves prone to disruption down the road. We haven’t seen this dynamic play out yet for the companies here (though Netflix might be one). There is also a great deal of learning about how to reinvent and avoid being locked into a single strategy and execution path. Google doing the unthinkable and shutting down services, or Facebook acquiring very large-scale indirect competitors and technology complements, are examples of a new generation of leaders acting differently in the face of potential disruption to core businesses.


Nothing is quite inevitable in business, but the potential to fall into familiar patterns is high. This past week at #codecon demonstrated the challenges of, and approaches to, the core risk of the technology industry. In technology, all you really do is monetize the work of the past and deliver innovation for the future. How leaders approach this reality is an evolving skill, and #codecon lets us all witness that evolution firsthand.

–Steven (@stevesi)

Written by Steven Sinofsky

June 1, 2014 at 11:30 am

Posted in posts, recode

