Learning by Shipping

products, development, management…

CES 2015 Recap for Makers and Product Managers

CES 2015 was another amazing show. Walking around the show one can only look with wonder at the amazing technologies being invented and turned into products. Few things are as energizing or re-energizing as systematically walking the booths and soaking it all in. I love CES as a reminder of the amazing opportunity to work in this industry.

Taking a moment to share what I walk away with is always helpful to me—writing is thinking. Every day we have the chance to talk to new companies about products under development and ideas being considered, and CES provides great cross-industry context for what is going on. This is especially important because of the tendency to look too much to the massive companies that might dominate our collective point of view. My experience has been that spending energy on what is going on at CES unlocks potential opportunities by forcing you to think about problems and solutions from different perspectives.

While this post goes through products, there are many better sources for the full breadth of the show. I try to focus on the broader themes that I walk away with after spending a couple of days letting everything sort of bake for a bit. This year I wanted to touch on these 5 major themes and also include a traditional view of some of the more “fun” observations:

  • Batteries, wires, simplicity
  • Displays popping up everywhere
  • Cameras improving with Moore’s law
  • Sensors sensing, but early (and it’s all about the data)
  • Connectivity gaining ubiquity
  • Fun Products

Ever the product manager (PM), I try to summarize each of these sections with some top-line PM Learning to put the post into action.


Batteries, wires, simplicity

PM Learning: Of course optimize your experiences to minimize impact on battery life, but don’t assume your competitors will be doing the same. Think about how the iPhone OS and built-in apps navigate that fine line. If you’re making new hardware, assume standard connectors, betting on USB Type-C for charging and HDMI for video.

The best place to start with CES is batteries and wires, because that’s what will follow you around the entire show—everyone walks the show floor in search of outlets or with an auxiliary battery and cable hanging off their phone. Batteries created the portable consumer electronics revolution, but we’re also tethered to them far too often. The good news is that progress is real and continues to be made.

Behind the scenes a great deal of progress is being made on power management in chipsets, even wireless ones. On display at the show were Bluetooth keyboards that can go a year on a single charge and wireless headphones good for days of normal usage.

Progress is also being made on battery technology itself, making batteries smaller, lighter, and faster to charge. While these are not dramatic 2X or 3X improvements, they are real.

The first product I saw was an LG cordless vacuum with 70 minutes of battery life and cleaning power that passes the classic bowling-ball suction test. Truly something that makes everything easier.


Batteries are an important part of transportation, and Panasonic is currently the leading manufacturer of large-scale batteries for transport. On display was the GoGoRo urban scooter. This is not just a battery-operated scooter: it can go 95 km/h, it is cloud connected with GPS locator maps, and it can go 100 km on a pair of batteries. All that would be enough, but the batteries can also be swapped out in seconds and you’re on the go. The company plans to build a network of charge stations to go with a subscription business model. I love this whole concept.


Panasonic also makes batteries for the Tesla so here is a gratuitous picture of the gratuitous Tesla Model X on display.


While all consumer electronics have aimed for simplicity since the first blinking 12:00 on a VCR, simplicity has been elusive due to the myriad of cables, connectors, remotes, and adapters. Normally a CES trip report would include the latest in cable management, high tech cables, or programmable remotes. Well, this year it is fair to say that these whole categories have basically been subsumed in a wave of welcome simplicity.

Cables, to the degree they are needed, have mostly been standardized on HDMI for video and USB for charging and peripherals. With the forthcoming USB Type-C, even the USB connector will be standardized. The Apple connectors are obviously all over, though easily adapted to micro-USB for now (note to makers of third-party batteries: margins are tight, but using an MFi logo and an Apple cable end would be welcome). When you do need cables they are getting better. It was great to see an awesome fiber-optic cable from Corning that works for USB (also DisplayPort). Because it uses active powered ends, the cable is much thinner and more flexible and the signal can travel much farther. An HDMI version is in the works.


While most attention went to smartwatches with too many features, Casio’s latest iteration offered a new combination of better battery life and low-power radios. The new watch uses solar charging along with a GPS receiver (and the existing low-power radio time signals) to set the time based on location. And it is not even huge.


Bringing this theme of no wires and improved batteries to a new extreme, the wireless earbuds from Bragi are aggressive in their feature set, incorporating not just Bluetooth for audio but a microphone for talking and sensors for heart rate (though not likely very reliable) and temperature (not sure of the use as a practical matter). Certainly worth looking at when they are available.


Displays popping up everywhere

PM Learning: Curved is here, and too much energy is going into it. Expect to find new scenarios (like signage) and thus new opportunities. Resolution at 4K and beyond is going to be normal very quickly, with a price premium for only a very short time. Pay close attention to web page design on high-resolution and high-DPI displays (assets). Many opportunities will exist for new screens that run one app in a fixed-function manner for line-of-business or consumer settings; these are replacing static signs and unmanageable PCs. We’re on the verge of broadly deployed augmented reality and totally soft-control screen layouts, starting with cars.

More than anything, CES continues to be the show about TV.

Curved screens are getting a lot of attention and a lot of skepticism, some of which is warranted. Putting them in historical context, each generation of screen innovation has been greeted in a similar manner. Whether too expensive, too big, too incremental, or just not useful, the reasons a new screen technology wasn’t going to take off have been plentiful. While curved seems weird to most of us (and frankly even the makers are trying too hard to justify it, as seen in Samsung’s pseudo-scientific booth depictions), it has compelling utility in a number of scenarios. Skeptics might also be underestimating the architectural enthusiasm for the new screens.


The most immediate scenario is one that could be called the “Bloomberg desktop,” which was on display. It is very compelling for a single user, as a “mission control” station, or as a group monitoring station.


Signage is also incredibly important, and the architectural use of curved screens will become relatively commonplace because of the value of interactive and programmable displays for advertising and information.


Speaking of signage, for years we’ve seen the gradual migration from printed signs to signage driven by PCs, to even one year where all the screens were simply JPEGs being played in those ever-present photo frames. This year saw a number of compelling new signage products that combine new multi-screen layouts with web-based or app-based cloud platforms for creating dynamic layouts, incorporating data, and managing a collection of screens. One example was an active menu display and the tool for managing it; another was a complex multi-screen 4K layout (narrow bezel) and its associated tool.

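To make the idea concrete, here is a minimal sketch of the kind of declarative layout such a cloud signage platform might push to a fleet of screens. Every field name here is invented for illustration; none of the products above publish this format.

```python
# Hypothetical layout document a signage service might sync to its screens.
# All field names are invented for illustration purposes.
menu_layout = {
    "screens": [
        {"id": "menu-left", "resolution": "3840x2160", "zones": [
            # rect is [x, y, width, height] in pixels
            {"rect": [0, 0, 1920, 2160], "source": "playlist:breakfast"},
            {"rect": [1920, 0, 1920, 2160], "source": "feed:daily-specials"},
        ]},
        {"id": "menu-right", "resolution": "3840x2160", "zones": [
            {"rect": [0, 0, 3840, 2160], "source": "video:promo-loop"},
        ]},
    ],
    # Dayparting: which playlist is active when.
    "schedule": {"breakfast": "06:00-11:00", "lunch": "11:00-16:00"},
}
```

The appeal of this model is that the "sign" itself stays dumb: it renders whatever layout the cloud hands it, so one web tool can manage a whole fleet.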

For home or entertainment, there were dozens of cinematic 21:9 4K curved screens at massive sizes. Maybe this transition will be slower (the replacement cycle for TVs is slow anyway) due to the need for new thinking about where to put these. This year at least showed some wall-mounting options.


Curved screens are also making their way into small devices. Last year saw the LG Flex, and an update was available this year. Samsung introduced the Galaxy Note Edge with a single curved edge and went to great lengths in the software to use it as an additional notification band. I’m a bit skeptical of this, as it was difficult to use without thinking hard about where to put your hand (at least in a minute of use at the booth).


I don’t want to gloss over 4K, but suffice it to say every screen was 4K or higher. I saw a lot of skeptical coverage about not being able to see the difference or “how far away are you.” Let’s all just move on. The pixels are here, and pretty soon it will be as difficult to buy an HD display as it is to buy 512MB SIMMs or 80GB HDDs. That’s just how manufacturing at scale works. These screens will soon be cheaper than the ones they are replacing. Moore’s law applies to pixels too. For the skeptics, one exhibit showed how resolution works.


Screens are everywhere, and that’s the key learning this year. There were some awesome augmented reality displays, talked about for a long time but quickly becoming practical and cost-effective. One was a Panasonic setup that can be used to try on cosmetics, either in store or in a salon. It was really amazing to see.


Continuing with augmented or heads-up displays, an amazing dashboard in a concept car from Toyota showed off a full dash of soft controls and integrated augmented screens.


At a practical level, Sharp and Toshiba were both showing off ready-made dashboard screens that will make it into new cars as OEM components or as aftermarket parts.


Cameras improving with Moore’s law

PM Learning: Cameras continue to gain more resolution, but this year also showed a much clearer focus (ha) on improving photos as they are captured and on making video smarter. Cameras are not just for image capture; they are becoming sensors in their own right, integrated into sensing applications, though this is just starting. My favorite advance continues to be the march toward high dynamic range as a default capture mode.

Digital cameras made their debut in the early 1990s with 1MP still images that were broadly mocked by show attendees and reviewers. Few predicted how Moore’s law would rapidly improve image quality while flash memory became cost-effective for these large CCDs, and then mobile phones made these sensors ubiquitous. Just amazing to think about.

High dynamic range started off as a DSLR trick, became something you could turn on, and is now an automatic feature on most phones. In dedicated cameras it is still a bit of a trick. There are complexities in capturing moving images with HDR, but they can be overcome. Some find the look of HDR images “artificial,” but in reality they are closer to the range of the human eye; this feels a bit like the debate over the first music CDs versus vinyl. Since the human eye has anywhere from 2 to 5 times the range of today’s sensors, it only makes sense to see HDR more and more integrated into the core capture scenario. One example was a Panasonic 4K professional video camera with HDR built in.

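For a feel of what “HDR as a default capture mode” means mechanically, here is a minimal sketch of exposure fusion using OpenCV’s Mertens merge, which blends bracketed shots without needing exposure metadata. The file names are hypothetical, and real cameras do this inside the imaging pipeline, not in Python.

```python
# Exposure fusion: blend under-, normal-, and over-exposed shots of the
# same scene into one image with more usable dynamic range.
import cv2
import numpy as np

# Bracketed captures of the same scene (hypothetical file names).
exposures = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]

# Mertens fusion weights each pixel by contrast, saturation, and
# well-exposedness; no exposure times are required.
fused = cv2.createMergeMertens().process(exposures)  # float32, roughly [0, 1]
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```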

Facility security is a key driver of camera technology because of the need for wide views, low light, and varying outdoor conditions. A company that specializes in time-lapse imaging (for example, of construction sites) introduced a time-lapse HDR camera.


Low light usually means switching to infrared cameras in security settings, and for many the loss of color has always been odd. Toshiba was showing off the first 720p infrared camera that generates a color image even at 0 lux. This is done using software to map the image to a colorized palette. A cool interactive booth showed a traditional infrared image and the color version side by side.

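As a toy illustration of the palette-mapping idea (and only the idea; Toshiba’s colorization is far more sophisticated than a fixed lookup table), here is how a single-channel infrared frame can be pushed through a color palette with OpenCV. The file names are hypothetical.

```python
# False-color mapping: turn a grayscale IR frame into a color image by
# looking each pixel intensity up in a fixed palette.
import cv2

ir = cv2.imread("ir_frame.png", cv2.IMREAD_GRAYSCALE)  # low-light IR capture
color = cv2.applyColorMap(ir, cv2.COLORMAP_JET)        # intensity -> palette
cv2.imwrite("ir_color.png", color)
```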

In thinking about cameras as ubiquitous, this very clever camera+LED bulb combination really struck me. Not only is it a standard PAR LED bulb, but it adds a Wi-Fi web camera. Lots of potential uses for this.


DSLRs still rule for professional use and their capabilities are still incredible (and should be, for what you carry around). Nikon surprised even their own folks in the booth by announcing their first Phase Fresnel lens, a 300mm f/4. Canon has a 400mm lens using a similar approach (their “DO” designation). These lenses deliver remarkable (better) image quality with an immense reduction in size and weight. On display were the classic 300mm f/4 and the new “PF” version side by side. Add to cart :-)


Finally, Nikon repeated their display of 360-degree stop-action, Matrix-like photography. It is a really amazing demo, with dozens of cameras snapping a single image to provide a full walk-around. Just love the technology.


Sensors sensing, but early (and it is all about data!)

PM Learning: We are just getting started on sensors. While many sensors are remarkably useful today, the products are still first generation and I believe we are in for an exponential level of improvement. For these reasons, I continue to believe that the wearable sensors out there today are interesting for narrow use cases but still at the early part of the adoption curve. Innovation will continue, but for the time being it is important to watch (or drive) the exponential changes. Three main factors will contribute to this:

  1. Today’s sensors usually take one measurement (and often that is a proxy for what you want), and each is made into a single-purpose product. The future will bring more direct measurements as sensors get better and better. There’s much to be invented, for example, for heart rate, blood sugar, blood pressure, and so on.
  2. Sensors are rapidly improving in silos but will just as rapidly begin to be incorporated into aggregate units to save space, battery life, and more. There are obvious physical challenges to overcome (not every sensor can be in the same place or in contact with the same part of a body or device).
  3. Data is really the most important element and the key differentiator of a sensor. It is not the absolute measurement but the way the measurement is put in context. The best way to think of this: GPS was very useful, became more useful when combined with maps, and became more useful still when those maps added local data such as traffic or information about a destination.

Many are still working to bring gesture recognition to different scenarios. There remains some skepticism, perhaps rooted in the gamer world, but in many cases it can work extremely well. These capabilities can be built into cameras or, depending on the amount of recognition, into graphics chipsets. I saw two new and neat uses of gesture recognition. First, an LG phone used a gesture to signal the start of a self-timer for taking selfies (just hold out your hand, let it be recognized, squeeze, and the timer starts). This was no selfie stick (which I now carry around all the time due to the a16z selfie-stick investments), but interesting.


The next demonstration showed gestures used in front of an automobile screen. It was a proof of concept with a lot of potential gestures, and there are interesting possibilities.


Incorporating image recognition turns a camera into a sensor in its own right, usable for a variety of purposes. One camera demo ended up looking like the TV show Person of Interest.


There were quite a few products demonstrating eye tracking. This is not a new technology, but it has become very cheap very quickly. What used to take very specialized cameras can now be done with off-the-shelf parts and some processing. What is missing are use cases beyond software usability labs and medical diagnostics :-)


One take on eye tracking, the Jins Meme, integrates eye tracking and other sensors into hipster glasses. Again, the scenarios aren’t quite there yet, but it is very interesting. They even package these up in multi-packs for schools and research.


There were many products attempting to sense things in the home. I feel most of these will need to find better integration with other scenarios rather than remain point solutions, but they are all very interesting to study and will still find initial use cases. This is how innovation happens.

One of the more elaborate sensor products is called Mother. It packages a number of sensors, temperature and motion among them, that connect wirelessly to a base station. You just place these little chips near whatever you want to know about, and a nice app translates sensing events into notifications.


There were even sensors for shoes and socks. If you’ve ever had foot issues, you know what it is to try to replicate your pain while being monitored by a high-speed camera or even a fluoroscope/X-ray. These sensors, such as the one built into a sock here, have immediately interesting medical uses under physician supervision. Like many of the sensors, I feel this is the best use case for now; the home-use case isn’t quite right yet because of the lack of accessible scientific data.


The Lillypad floats around in your pool, takes measurements of the water, and wirelessly sends them to an app. As a clever bonus, it also measures UV light.


Speaking of pools, this was such a clever sensor: a Bluetooth radio that you pair with your phone and have kids wear around a pool. When a kid is submerged, it notifies you, either immediately or after a set time (I learned the national standard for underwater distress is 25 seconds). The big trick is that there’s no technology here beyond the fact that Bluetooth doesn’t travel under water. Awesome!

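The logic is simple enough to sketch. Here is a minimal, hypothetical version of the monitoring loop, assuming the wearable is a plain BLE beacon and submersion is inferred purely from loss of signal; `scan_for_beacon` and `notify` are stand-ins, not any vendor’s API.

```python
# Watch a BLE beacon and alert when it has been silent for too long,
# on the premise that Bluetooth does not travel under water.
import time

ALERT_AFTER_S = 25  # the underwater-distress threshold quoted at the booth

def watch_beacon(scan_for_beacon, notify):
    """scan_for_beacon() -> bool: True if the beacon was heard this scan.
    notify(msg): deliver an alert to the parent's phone."""
    last_seen = time.monotonic()
    while True:
        if scan_for_beacon():
            last_seen = time.monotonic()
        elif time.monotonic() - last_seen > ALERT_AFTER_S:
            notify("Beacon silent for 25s -- possible submersion!")
            last_seen = time.monotonic()  # re-arm after alerting
        time.sleep(1)
```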

A previous post (included below) discussed the notion of ingredients versus products at CES. To emphasize what this means in practice, one vendor literally packaged up every point sensor into its own “product.” This allows for a suite of products, which is great in a catalog but awfully complex for a consumer. A dozen manufacturers displayed a similar range of single-sensor products. I don’t know if this is sustainable.


Connectivity gaining ubiquity

PM Learning: Duh, everything will be connected. But unlike previous years, this is now in full execution mode. The biggest challenge is deciding which “things” get connected to which things or networks. When do you put smarts somewhere? Where does data go? What data is used?

Everything is going to be connected. This has been talked about for a long time, but it is really here now. The cost of connectivity is so low that, at least in the developing world, assuming either Wi-Fi or WWAN (via add-on plans) is rational and economical. This will introduce a lot of complexity for hardware makers who traditionally have not thought about software. It will also make room for new players that can rethink scenarios and where to put the value. Some devices will improve quickly. Others will struggle to find a purpose in connecting. We’ve seen the benefits of remote thermostats and monitoring cameras. On the other hand, remote-controlled clothes washers (which can’t load the clothes from the basket or move them to the dryer) might still be searching. That said, this dual-load washer from LG is very clever.


Many products were demonstrating “Works with Nest.” This is a nice API, and it is attracting a lot of attention since, like any platform, it saves the device makers a lot of heavy lifting in software. While many of the demonstrations were interesting, there can still be a bit of a gimmick aspect to it (washing machines). This alarm clock was interesting to me. While many of us just use phones now (which can control a Nest), this clock uses voice recognition for alarm functions. When connected to a Nest, it can also change the temperature or alter the home/away settings of the thermostat.

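For flavor, here is a hedged sketch of what that kind of integration looked like from the device side: a single REST write against the Nest developer API (since retired). The endpoint shape and the `away` field are recalled from the old public docs and should be treated as assumptions, not a working recipe.

```python
# Hypothetical "Works with Nest" call: mark a structure home or away.
import requests

def set_away(token: str, structure_id: str, away: bool) -> None:
    # The old API was Firebase-style REST with OAuth bearer tokens.
    requests.put(
        f"https://developer-api.nest.com/structures/{structure_id}",
        headers={"Authorization": f"Bearer {token}"},
        json={"away": "away" if away else "home"},
    )
```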

Cannon Security is a relatively new safe company (most are very old), and I loved their “connected” safe. It isn’t connected the way I expected (an app to open it or alert you of a break-in). Instead, it is a safe that also has a network cable and two USB ports. One use might be to store a network-connected drive in the safe and use it for backup. You could also keep something in the safe charging via USB. Pretty cool.


My favorite product of the whole show, saving the best for last, is not yet released. But talk about a magic collection of connectivity and data…wow. The founders set out to solve the problem of getting packages delivered to your house. Most communities prevent you from putting a delivery box out front, and in many places you can’t have something left on your doorstep and expect it to remain. This product, called “Track PIN,” solves the problem. Here’s what it does:

  1. Insert a small module inline in the three wires that control your garage door.
  2. Add a battery operated PIN box to the front of your garage somewhere.
  3. When you receive a package tracking number email just forward it to trackpin.com (sort of like the way TripIt works).
  4. THEN, when the delivery person shows up (UPS, FedEx, USPS, and more), their handheld automatically shows the code to punch. Upon punching the code, your garage door opens a short amount so the package can be slid in. No signature required. The PIN is invalidated. The driver is happy. You are happy. Amazon is happy. And the cloud did all the work. (A sketch of the PIN flow follows below.)
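
Here is a minimal sketch of the one-time-PIN bookkeeping implied by the steps above, with entirely hypothetical names; the real service also has to talk to carrier systems and the garage-door module.

```python
# One-time delivery PINs: mint a code per shipment, kill it on first use.
import secrets

pins = {}  # tracking_number -> active one-time PIN

def register_shipment(tracking_number: str) -> str:
    """Called when a tracking email is forwarded in; the returned PIN
    would be routed to the carrier's handheld."""
    pin = f"{secrets.randbelow(10_000):04d}"
    pins[tracking_number] = pin
    return pin

def try_open_door(tracking_number: str, entered_pin: str) -> bool:
    """Called by the keypad on the garage. Single use: a correct PIN
    is invalidated the moment it opens the door."""
    if pins.get(tracking_number) == entered_pin:
        del pins[tracking_number]
        return True   # signal the inline module to crack the door open
    return False
```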

I know it sounds somewhat mundane, but these folks really seem to have developed a cool solution. It beats bothering the neighbors.


Fun Products

Every CES has a few fun products that you just want to call attention to without snark or anything, just because we all know product development is not a science and one has to try a lot of things to get to the right product.

Power Pole. This is my contribution to selfies. This one even has its own power source.


Emergency jump starter/laptop charger/power source. This was a perfectly fine product. The fun part was seeing the exact same product with different logos in 5 different booths. Amazing placement by the ODM.


USB Charger. This is the best non-commercial USB charger I’ve seen. It even includes a way-out-of-spec “high voltage” port.


Fake TV. This is a home security device that flashes multi-colored LED lights to trick a burglar into thinking you are home watching TV. The best part was that when I took the picture, the person staffing the booth said, “Don’t worry, the Wi-Fi drone version is coming in late 2015.” Gotta love that!!


Surface influence. And finally, I’ve been known to be a fan of Microsoft Surface, but I guess I was not alone. The Typo keyboard attempts to bring a Microsoft Type Cover to the iPad, and the Remix Ultra-Tablet (developed by several former Google employees) bears a rather uncanny resemblance to a Surface 2 running an Android skin.


Phew. That’s CES 2015 in a nutshell.

Steven Sinofsky (@stevesi)


Written by Steven Sinofsky

January 11, 2015 at 10:00 pm

CES: Ingredients Not Just Products

CES is an incredibly exciting and energizing show to attend. Sometimes, if you track some of the real-time coverage, you might get a sense of disappointment at the lack of breakthrough products or the seemingly endless repetition of many companies making the same thing. There’s a good reason for all this repetition, and it is how CES represents our healthy industry working well.

CES is best viewed not as a display of new products to run out and buy but as a display of ingredients for future products. It is great to go to CES and see the latest TVs, displays, or in-car systems. By and large there is little news in these in-market products and categories. It is also great to see the forward-looking vision presentations from the big companies. Similarly, these are good directionally but often don’t represent what you can act on reliably.

Taking an ingredients view, one can (along with 140,000 others) look across the more than 2 million square feet of 3,600 exhibitors for where things are heading (CES is one of the top trade shows globally, with CeBIT, Photokina, and Computex all vying for top ranking depending on how you count).

If you take a product view, CES can get repetitive or boring rather quickly. I probably saw a dozen selfie-sticks. After a while, every curved 4K TV looks the same. And certainly, there’s a limit to how many IP cameras the market can support. After a few decades you learn to quickly spot the me-too and not dwell on the repetition.

It is worth a brief description of why CES is filled with so many me-too (and often poorly executed) products.

Consider the trio of partners it takes to bring a product to market:

  • Component suppliers. These are the companies that make a specific sensor, memory, screen, chipset, CCD, radio, etc.
  • Manufacturers. These are the companies that pull together all the components and packaging needed to make a product. These are OEMs or ODMs in the consumer electronics industry.
  • Brands and Channels. These are the consumer-visible manifestation of products and can be the chain of retailers or a retail brand.

At any one time, a new component, an innovation or invention, is close to ready to be in the market. An example might be a new heart rate sensor. In order to get the cost of the component low enough for consumer products, the component supplier searches for a manufacturer to make a device.

While every supplier dreams of landing a major company making millions of units as a first customer, that never happens. Instead, there’s a whole industry of companies that will take a component and build what you might think of as a product with a 1:1 mapping to that new component. So a low-cost CCD gets turned into a webcam with simple Wi-Fi integration (and often some commodity-level software). The companies that make these are constantly looking to make new products and will gladly do a limited production run and sell at a relatively low margin for a short time. These initial orders help the component makers scale up manufacturing and improve the component through iteration.

At the same time there are retailers and brand names that are always looking to leverage their brand with additional products. These brand names often take the complete product from the manufacturer with some limited amount of branding and customization. This is why you can often see almost identical products with different names. Many know that a few vendors make most LED displays, yet the number of TV brands is quite high. There’s a small amount of customization that takes place in this step. These companies also work off relatively low margins and expect to invest in a limited way. For new categories, while the component companies get to scale out parts, the brands and channels get a sense of the next big thing with limited investments.

So while CES might have a ton of non-differentiated “products”, what you are really seeing is the supply chain at work. In fact it is working extremely well as this whole process is getting more and more optimized. The component manufacturers are now making proof of concepts that almost encroach onto the manufacturers and some brands are going straight to component makers. For the tech enthusiast these might be undifferentiated or even poor products, but for many they serve the purpose at least in the short-term.

Today, some things we take for granted that at one time seemed to swarm the CES show floor with dozens of low quality builds and me-too products include: cameras, flash memory, media playback devices, webcams, Wi-Fi routers, hard drive cages, even tablets and PCs. I recall one CES where I literally thought the entire industry had shifted to making USB memory sticks as there must have been 100 booths showing the latest in 128MB sticks. Walking away, the only thing I could conclude was just how cheap and available these were going to be soon. Without the massive wave of consumer me-too digital cameras that once ruled the show floor, we would not have today’s GoPro and Dropcam.

An astute observer can pick out the me-too products and get a sense for what ingredients will be available and where they are on the price / maturity curve. One can also gauge the suppliers who are doing the most innovative integrations and manufacturing.

Sometimes the whole industry gets it wrong. The most recent example of this would be 3D TV, which just doesn’t seem to be catching on.

Other times the whole industry gets excited about something but others take that direction and pivot it to much more interesting and innovative products. An example of this would be the run of “media boxes” to attach to your TV which went from playing content stored on your home network and local hard drives to stateless, streaming players like Google Chromecast, Amazon Fire and Apple TV. Without those first media boxes, it isn’t clear we would have seen the next generation which took that technology and re-thought it in the context of the internet and cloud.

Finally, the reality is that most of the manufacturers tend to take a new component and build a purpose-built device around it. So they might take a camera sensor, add a camera body, and just make a point-and-shoot. They might take new flash storage and turn it into portable storage. They might take a new display and just make a complete monitor. Rarely will the first generation of devices attempt to do multiple things or take a multi-year approach to integrated product development—not on those margins and timelines.

Some technologies this year that reflect first generation products and are likely to be brought to scale or further integrated with other components include: curved displays, high resolution/high DPI displays, human and environmental sensors, and HDR imaging. Sensors will be the most interesting as they will clearly be drawn into the SoC and/or integrated with each other. Obviously, everyone can expect Wi-Fi and broadband connectivity to continue to get smaller and easier and of course CPUs will continue to shrink, draw less power, and get faster.

So when you read the stories about CES saying there are too many junky products or so many of the exact same thing, don’t think of that as a negative. Instead, think about how that might be the next low-price, high-scale ingredient that will be integrated into your product or another product.

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

January 10, 2015 at 10:00 pm

Posted in posts


Why Remote Engineering Is So Difficult!?#@%

I have spent a lot of time trying to manage work so it is successful outside of a single location. I’ve had mixed results and have found only three patterns, which are described below. Before that, two quick points.

First, this topic has come up again because of the Paul Graham post on the other 95% of developers and Matt Mullenweg’s thoughtful critique of it (also discussed on Hacker News). I think the idea of remote work is related to, but not central to, immigration reform and the position one might have on it. In fact, 15 years ago, when immigration reform was all but hopeless, many companies (including where I worked) spent countless dollars and hours trying to “offshore” work to India and China, with decidedly poor results. I even went and lived in China for a while to see how to make this work. The patterns and lessons below subsume this past experience.

Second, I would just say this is business, and business is a social science, so there are no rules or laws of nature. Anything that works in one situation might fail to work in another. Something that failed to work for you might be the perfect solution elsewhere. That said, it is always worth sharing experiences in the hopes of pattern matching.

The first pattern is good to know, just not scalable or readily reproducible: when you have a co-located, functioning team and members need to move away for some reason, remote work can continue pretty much as it has before. This assumes that the nature of the work, the code, and the project all continue on a pretty similar path. Any major disruption (more scale, a change in tools, a change in product architecture, a change in what is sold, etc.) and things quickly gravitate to the less functional “norm.” The reality is that these success stories are often individuals and small teams who come to the project with a fixed notion of how to work.

The second pattern that works is when a project is based on externally defined architectural boundaries. In this case little knowledge is required that spans the seam between components. By externally defined I mean that the API between the major pieces, separated by geography, is immutable and not defined by the team. It is critical that the API not be under the control of the team, because if it is, this case is really the next pattern. An example might be a team responsible for implementing industry-standard components that plug in via industry-standard APIs, or a team that delivers a large code base from an open source project that is included in the company’s product. This works fine. The general challenge is that this remote work is often not particularly rewarding over time. Historically, for me, this is what ended up being delivered via remote “outsourced” efforts.

The third pattern that works is when those working remotely have projects with essentially no short-term or long-term connection to each other. This is pretty counter-intuitive. It is also why startups are often the first places to find remote work challenging, simply because most startups only work on things that are connected. So it is no surprise that, for the most part, startups tend to want to work together in one location.

In larger companies it is not uncommon for totally unrelated projects to be in different locations. They might as well be at separate companies.

The challenge there is that corporate strategies often become critical to a broad set of products, so very quickly things turn into a need for collaboration. Since most large, existing products tend to naturally resist corporate mandates, the need for high-bandwidth collaboration increases. In fact, unlike a voluntary pull from a repository, a corporate strategy is almost always much harder, much more a negotiation through a design process than a code reuse. That further requires very high bandwidth.

It is also not uncommon for what was once a separate product to get rolled into an existing product. So while something might be separate for a while, it later becomes part of some larger whole. This is very common in big companies because what is a “product” often gets defined not by code base or architecture but by what is being sold. A great example for me is how PowerPoint was once a totally separate product until one day it was really only part of a suite of products, Office. From that decision forward we had a “remote” team for a major leg of our product (and one born out of an acquisition at that).

That leaves trying to figure out how a single product can be split across multiple geographies. The funny thing is that you can see this challenge even in single-product, medium-sized companies when the building space spans floors. Amazingly enough, even a single staircase or elevator ride has an impact equivalent to a freeway commute. So the idea of working across geographies is far more common than people think.

Overall the big challenge in geography is communication. There just can’t be enough of it, at the right bandwidth, at the right time. I love all the tools we have; they work miracles. But as many comments from personal experience on the HN thread point out, they don’t quite replace what is needed. This post isn’t about that debate; I’m optimistic that these tools will continue to improve dramatically. One shouldn’t underestimate the impact of time zones either. Even just coast to coast in the US can dramatically alter things.

The core challenge with remote work is not how it is defined right here and now. In fact, that is often very easy: it usually takes only a single in-person meeting to define how things should be split up. Then the collaboration tools can help nurture the work and the project, and this often succeeds for the initial run of the project. The challenge is not the short term, but what happens next.

This makes geography a bit more of a big company thing (where often there are resources to work on multiple products or to fund multiple locations for work). The startup or single product small company has elements of each of these of course.

It is worth considering typical ways of dividing up the work:

  • Alignment by date. The most brute-force way of dividing work is for each set of remote people to work on a different schedule. We all know that once people have different delivery dates, the need (or ability) to coordinate on a routine basis drops sharply. This type of work can go on until there are surprises, or until there is a challenge delivering something that turns out to be connected and should have been on the same schedule to begin with.
  • Alignment by API. One of the most common ways remote work is divided is to say that locations communicate by APIs. This works up until the API either isn’t right or needs to be reworked. The challenge here is that as a product you’re betting your API design is robust enough for groups to work remotely at their own pace or velocity. The first question is why you would want to constrain yourself in this way. The second is how to balance resources on each side of the API: if one side is stretched for resources and the other isn’t (or both sides are), geography prevents you from load balancing. And once you start having people in one geography on each side of the API, you have broken your own remote-work algorithm and need to figure out how to get the equivalent of in-person communication.
  • Alignment by architecture. Closely related to API, there is also the case where remote work is layered the same way the architecture is. Again, this works well at the start of a project but tends to decay over time. As we all know, as projects progress the architecture will change and be refactored or just redone (especially at the early stages and later in life). If the geography is then wrong, figuring out how to properly architect the code while also overlaying geography, and thus skill sets and code knowledge, becomes extremely difficult. A very common approach is to have the app in one geo and the service in another. This forces a lot of dialog at the app/service seam, which most people agree is also where much of the innovation and customer experience reside (as well as performance efforts).
  • Alignment by code. Another way to align is at the lowest level, basically the code or module level (or language or tool): geography defines who owns what code based on the modules a given location creates or maintains. This has a great deal of appeal to programmers. It is also the approach that requires the highest-bandwidth communication, since modules communicate across non-public APIs and often are not architectural boundaries (unlike the first cases). This can work in the short term but probably collapses the fastest. You can often see the first signs of failure when given files become exceedingly large or code is obviously in the wrong place, simply because of module ownership.

If I had to sum up all of these in one challenge, it is that however you divide the work across geography at a point in time, the division isn’t sustainable. The very model you use to keep work geographically efficient is globally sub-optimal for the evolution of your code. It is a constraint that creates unnecessary tradeoffs.

On big projects over time, what you really want is to create centers of excellence in a technology, where each center is also a geography. This always sounds very appealing (IBM created this notion in its Labs). As we all know, however, the definition of which technologies are used where is always changing. As an example, consider how your 2015 projects would work if you could tap into a center of excellence in machine learning, only to quickly realize that machine learning is going to be the core of your new product. Do you disband the machine learning team? Does the machine learning team now work on every new product in the company? Does the company just move all new products to the machine learning team? How do you geo-scale that sort of effort? That’s why the time element is tricky. Ultimately a center of excellence is how you can brand a location and keep people broadly aware of the work going on there. It is easier said than done, though. The IME at Microsoft was such a project.

Many say that agility can address this: you simply rethink the boundaries and ownership at points in time. The challenge is that in a constant shipping mode you don’t have that luxury. Engineers are not fully fungible, and certainly careers and the human desire for ownership and a sense of completion are not either. Agility of work ownership over time is easy to imagine and hard to implement.

This has been a post on what is hard about remote work, at least based on my experience. Of course, if you have no option (for whatever reason), then this post can help you look at what can be done over time to address the challenges that will arise.

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

December 30, 2014 at 3:30 pm

Posted in posts


Essay: Workplace Trends, Choices and Technologies for 2015

What’s in store for 2015 when it comes to technology advances in the workplace?

Originally appeared on <re/code> December 18, 2014.

This next year will see the past year’s technologies broadly deployed, but with that deployment will come challenges and choices. This sets up 2015 to be a year of intense activity and important choices — how far forward to leap, and how to transition from a world we all know and are working in comfortably. In today’s context, the primacy of smartphone and tablet devices, robust cross-organization cloud services and the changing nature of productivity — all combined with the acute needs of enterprise security — lead to dramatic change in the definition of the enterprise computing platform, starting this year.

Amazing 2014

This past year has seen an incredible — and exponential — diffusion of technologies. Who would have thought at the start of the year that we would end the year surrounded by:

  • Smartphone/supercomputers, some costing less than $50 contract-free, in the hands of almost two billion people
  • Free (essentially) or unlimited cloud storage for individuals and businesses
  • Tablets outselling laptops
  • 4G LTE speeds from a single worldwide device in most of the developed world
  • Amazing pixel densities on large-screen displays, introduced without a premium price
  • Streaming 4K video
  • Apple’s iPhone 6 Plus “phablet” sold very well (we think) and is now perfectly normal to use
  • SaaS/cloud services scaling to tens of millions of business subscribers
  • Major cloud platforms putting millions of servers in their data centers
  • Shared transportation is on a path to substitute for traditional taxis, and in many cases, private car ownership
  • Mobile payments finally arrived at scale in the U.S. and are routine in some of the world’s least developed economies

These and many more advances went from introduction to deployment, especially among technology leaders and early adopters, thus creating a “new normal.” In terms of Geoffrey Moore’s seminal work from 1991, “Crossing the Chasm,” these technologies have been adopted by technical visionaries and are now crossing the chasm to the broader population.

In the real™ world, technology diffusion takes time (deployment, change, etc.), so we have not yet seen the full impact of any of the above. Moving forward to that future — not just making changes for the sake of change — requires a point of view and making trade-offs. This post has in mind the pragmatists (in Moore’s terminology) who want to accelerate and get the benefits from technology transition. Early visionary adopters have already made their moves. Pragmatists often face the real work in bringing the technology to the next stage of adoption, but often also face their own tendency toward skepticism of step-function changes, along with trade-offs in how to move forward.

Viewpoint 2015

Even with many hard choices and challenges, for me, the coming year is a year of extreme optimism for what will be accomplished and how big a difference a year will make. Looking at the directions firmly seeded in 2014, the following represent strategies and choices for 2015 that demand an execution-oriented point of view:

  • Enterprise cloud comes to everyone
  • Email isn’t dead, just wounded, but kill off attachments with prejudice
  • Productivity breaks from legacy work products and workflows
  • Tablets make a “surprise comeback”
  • Mobile device management aims to get it right
  • Hybrid cloud ROI isn’t there, and the complexity is huge
  • Cross-platform really (still) won’t work
  • Massive security breaches challenge the enterprise platform

Enterprise cloud comes to everyone.

When it comes to cloud services for typical information workflows, bottom-up adoption and enterprise pilots and trials defined 2014. The debate over on-premises versus cloud will mostly fade as the pragmatists see that legacy “on-prem” or hosted on-prem software can no longer innovate fast enough or connect to the wide array of services available. Cloud architecture is different, and new software is required to benefit from moving to the cloud. The case for holding an enterprise back, or for seeking plug-replacements for existing legacy systems, has proved weak, and the demand from business unit leaders and employees for mobile access, cross-product integration, enterprise-spanning collaboration and the inherent flexibility of cloud architecture is too great.

The most substantial development in 2015 will be enterprises defaulting to multi-tenant, public-cloud solutions, recognizing that the perceived risks or performance and scale challenges are far smaller than those of any existing on-prem or hosted solution or an upgrade of the same. The biggest drivers will prove to be the need for primarily mobile access, cross-enterprise collaboration and even security. The biggest risk will be enterprises that continue to shut off or regulate access to solutions, especially by preventing use of enterprise email credentials or devices.

The biggest enterprise opportunity will be integrating leading offerings with enterprise sign-on and namespace to permit easy bottom-up usage across the enterprise, with minimal friction. Because of the rapid switch to cloud, we will see legacy on-prem providers relabel or rebrand hosted legacy solutions as cloud. The attributes of cloud “native” will be key purchase criteria, more than legacy compatibility.

Email isn’t dead — just wounded — but kill off attachments with prejudice.

So much has been said and written about the negatives of email and the need for it to go away. Yet it keeps coming back. The truth is, it never went away, but how it is used is changing dramatically. Anyone who interacts with millennials knows that email is viewed the way Gen Xers might view a written letter: as an overly formal means of communication. Long threads, attachments and elaborate formatting are archaic, confusing and counter to collaboration. Messaging services and apps trump email for all but the most formal or regulated communication, with no single service dominant, as context matters. In emerging markets, email will never attain the status it has in developed markets. Today, receiving links to documents is still suboptimal, with gaps to be closed and features to be created, but that should not slow progress this year.

Using cloud-based documents means an organization knows where the single, true copy resides, without concern that the asset will proliferate. Mobile devices can use more secure viewers to see, print and annotate documents, without making copies unnecessarily. The idea of having a local copy of attachments (or mail), or even just an inbox of attachments, is proving to be a security nightmare. Out of that reality, many startups are providing incredibly innovative, scalable solutions built on the cloud that can be deployed now.

Services like DocSend can track usage of high-value documents. Textio can analyze cloud-based documents without having to extract them from a mail store or locate them on file shares. Quip edits documents and basic spreadsheets and integrates contextual messaging, avoiding both mail and attachments while safely spanning org boundaries.

This year, casting technologies will allow links to documents to be sent to displays via cloud services, as is done for video today. The leading enterprises will rapidly move away from managing a sea of attachments and collaborating in endless email threads. The cultural change is significant and not to be underestimated, but the benefits are now tangible and needed, and solutions exist. The opportunity for new solutions from startups continues this year, with deployments going big. Save email for introductions, announcements and other one-to-many communications.

Productivity breaks from legacy work products and workflows.

The gold standard tool for creating business work products is not going anywhere this year, or for 10 more years. What counts as a business work product, however, is rapidly changing. Nothing will ever be better than Office at creating Office work products. What has significantly changed, in part driven by mobile and in part by a generational change in communication approach, is the very definition of the work products that matter the most. Gone are the days when the enterprise productivity ninja was the person who could make the richest document or presentation. The workflow of static information, in large, report-based documents making endless rounds as attachments, is looking more and more like a Selectric-created report stuffed in an interoffice envelope.

Today’s enterprise productivity ninja is someone who can get answers on their tablet while on a conference call at an offsite. They focus their energy on the cloud-based tools that have the most up-to-date data, and they get the answers and don’t fret about presentation. They share quickly, knowing that content matters more than presentation because of the ephemeral nature of business information. The opportunity for the enterprise is on the back end: moving to real-time, cloud-based solutions that forgo the traditional delays and laborious ETL efforts of dragging massive amounts of data onto client PCs for analysis. The risk is in treating cloud solutions as mere workgroup or side solutions rather than the definitive source of data; integrating them with the primary sources of transaction data will provide a great opportunity for the organization.

Tablets make a “surprise comeback”

Some thought 2014 was the year tablets faded. Many debated the long replacement cycle or the weak competitive position of the tablet between phablet and laptop. The reality is that tablets will outsell laptops this year. Some discount all the cheap Android tablets barely used at home, but then one must discount the laptops that go unused in analogous scenarios. Regardless, one thing distinguished 2014 with respect to tablets, defined as iPads: you see them in the hands of business people everywhere, from the coffee shop to the airport to the conference to the boardroom. On those iPads are enterprise apps, email and browsing (and now Office), doing enterprise work.

The big change in 2015 will be (and I am guessing like everyone else) the introduction of a new iPad, likely with first-party keyboard attachments and/or (at least) iOS software enhancements for improved “productivity.” A tablet properly defined is not just a form factor, but a hardware platform (ARM) and a modern/mobile operating system (iOS, Android, Windows Phone/Windows on ARM). Those characteristics, being those of a big phone, come with the attributes of security, reliability, performance, connectivity, robustness, an app store, thinness and light weight; above all, those attributes remain constant over time.

Laptops will have their place for another decade or more, but they will become stationary desktop tools used for profession-defining tools (Excel in finance, Photoshop in design, AutoCAD in architecture, and many more). Work will happen first on mobile platforms, for both team agility and organizational security. The scenario that will resonate will be a larger-screened modern-OS tablet with a keyboard and a phone/phablet as a second screen used in concert, as shown by Apple’s Continuity. The most significant opportunity for those making apps will be to design tablet- and phablet-optimized experiences and assume the app is the primary use case.

Mobile device management aims to get it right.

From the enterprise IT perspective, the transition from managing PCs to managing mobile devices (phones and tablets) is both a blessing and a curse. The faster IT can get out of managing PCs, the better. The core challenge is that in the modern threat environment it has become essentially impossible to maintain the integrity of a PC over time. That technical challenge, verging on impossibility, means 2015 could literally see pressure to reduce the number of PCs in use.

If you doubt this, consider the Sony breach and the potential impact it will have on the view of traditionally architected computing. The rise of tablets for productivity is, therefore, a blessing. Over time, any device in widespread use eventually becomes a target, so mobile will present the same risk as bad actors find new techniques to exploit it. The curse, and therefore the opportunity, is that our industry has not yet created the right model for mobile device management. We have MDM, sandboxing and user profiles. So far none of these are entirely well-received by users, and most IT feels they are not yet there, but for the wrong reasons: IT should not feel the need to reintroduce the PC approach to device security (stateful management, log-on scripts, arbitrary code inserted all over the device, etc.).

This leads to a lot of opportunity in a critical area for 2015. First, a golden rule is required: Do not impact the performance (battery life, connectivity) or usability of the device. It isn’t more secure for the company to issue two phones — one the person wants to use, and the other they have to use. Like any such solution, people will simply work around the limitation or postpone work as long as possible. This dynamic is what causes people to travel with iPads and leave the laptop at home (along with weight, chargers, two-factor readers and more).

The best bet is to avoid using or emphasizing management solutions that work better on Android simply because Android allows more hooks and invasive software in the OS. That is quite typical in the broad MDM/security space right now, and it is quite counterintuitive: the very flexibility that enables more control is itself a potential security challenge, and the invasive approach to management will almost certainly impact performance, compatibility and usability, just as such solutions did on PCs. As tempting as it is, it is neither viable nor more secure in the long term. Many are frustrated by the lack of iOS “management,” yet at the same time one would be hard-pressed to argue that the full Android stack is more secure. There will be an explosion in enterprise-managed mobile devices this year, especially as tablets are deployed to replace PCs in some scenarios, and with that comes a big opportunity for startups to get mobile management right.

Hybrid cloud ROI isn’t there, and the complexity is huge.

In times of great change, pragmatists eager to adopt technologies crossing the chasm may choose to seek solutions that bridge the old and new ways of doing things. For cloud computing, the two methods seeing a lot of attention are to virtualize an existing data center, or to architect what is known as a hybrid cloud or hybrid public/private (some mixture of data center and cloud).

History clearly shows that betting on bridge solutions is the fastest way to slow down your efforts and build technical debt that reduces ROI in both the short- and long-term. The reason should be apparent, which is that the architecture that represents the new solution is substantially different — so different, in fact, that to connect old and new means your architectural and engineering efforts will be spent on the seam rather than the functionality. There’s an incredibly strong desire to get more out of existing investments or to find rationale for requiring use of existing implementations, but practically speaking, efforts in that direction will feel good for a short while, and then will leave the product or organization further behind.

As an enterprise, the pragmatic thing to do is go public cloud and operate existing infrastructure as legacy, without trying to sprinkle cloud on it or spend energy trying to deeply integrate it with a cloud solution. The transitions to client-server, GUI and the Web all provide ample evidence: failed bridge solutions, a long tail of “wish we hadn’t done that” and few successes worth the effort. As a startup, it will be tempting to land customers who will pay you to be a bridge, but that will only serve to keep you behind your competitors who are skipping a hybrid solution. This is a big bet to make in 2015, and one that will be the subject of many debates.

Cross-platform really (still) won’t work.

It has been quite a year for those who had to decide whether to build for iOS first or Android first. At the start of 2014, the conventional wisdom shifted to “Android First,” though this never got beyond a discussion with most startups. With the release of Android “L” and iOS 8, the divergence in platform strategy is clear, and that reinforced my view of the downsides of cross-platform. My view was, and remains, that cross-platform is a losing proposition. It has really never worked in our industry except as an objection-handler. Even today, almost no software is a reasonable combination of cross-platform, consistent with the native platform, and equally “good” across platforms.

As we start 2015 it is abundantly clear that the right approach is to focus on platform-optimized/exploitive apps, leading with iOS and with a parallel and synchronized team on Android. Android fragmentation is technically real, but lost in the debate is the reality that owners of the highly fragmented low-end phones almost never acquire apps, nor do those devices carry the full Google stack of platform services. So the strategy is to focus on flagship Android, such as Nexus, Samsung and Moto (though one must note that even on Moto the rollout of “L” was delayed by more than a month), or to focus on a distribution of Android from a specific OEM that has some critical mass and is aimed at customers who will actively acquire apps.

To be clear, we are in a fully sustainable two-ecosystem world. But given the current state of engagement, platform readiness and devices, 2015 will see innovation first and best on iOS. If you’re building your app and working on core code to share, be cautious about how that goal ends up defining your engineering strategy. Typically, once core code is in place, it selects for tools and languages as well as overall abstractions, and dictates which system services are used. These have a tendency to block platform-native innovation, or to constrain where code goes. Those prove to be limitations as platforms further evolve and as your feature set expands. The strategy for cross-platform apps also applies to cross-platform cloud. Trying to abstract yourself away from a cloud platform will further complicate your cloud strategy, not simplify it. The proof points and experience are exactly the same as on the client.

Massive security breaches challenge the enterprise platform.

2014 will go down as the “year of the massive security breach.” Target, eBay, J.P. Morgan, Home Depot, Neiman Marcus, P.F. Chang’s, Michaels, Goodwill and, finally, Sony were just some of the major breaches this year. This next year will be defined by how enterprises respond to the breach.

First, the biggest risks are endpoints. Endpoints as defined by today’s technology are likely vulnerable in just about all circumstances, and the attacks show no signs of abating. Second, the on-prem data-center infrastructure suffers this same limitation. Together, the two make for a very challenging situation. The reason is not that today’s infrastructure is poorly designed or managed, but the combination of an architecture designed for another era and a sophistication level of nation-state opponents that exceeds IT’s ability to detect, isolate and remediate. As fatalistic as it sounds, this is a new world. Former DHS Secretary Tom Ridge said in an interview, “[T]here are two types of companies: Those that know they have been hacked by a foreign government and those that have been hacked and don’t know it yet.”

The challenge for 2015 in this year of adapting to new technologies is managing through the change. The good news is that there are tools and approaches that can make a huge difference. Many of the trends in this post, taken together, are about this theme of securing a modern enterprise. If you use public cloud services on next-generation platforms you aren’t guaranteed security, but it is highly likely that the provider has assembled more talent and has an existential focus on security that is very difficult for most enterprises to duplicate. If you use cloud services rather than local or LAN storage for documents, not only do you gain many features, but you gain a level of security you otherwise lack. Not only is this counterintuitive, it is challenging to internalize on many dimensions. It is also the only line of sight to a solution.

As endpoints, the combination of a modern mobile OS and apps offers a new level of security and quality. The most innovative and forward-looking solutions in security will be found in startups taking new approaches to these challenges. Even looking at basics, deploying enterprise-wide single sign-on with mobile-phone-based two-factor authentication would be a substantial and immediate win that accrues to both legacy solutions and cloud solutions.
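
To make that last point concrete, here is a minimal sketch of the kind of phone-based second factor involved: an RFC 6238 time-based one-time password, computed with nothing but the Python standard library. The secret shown is an arbitrary example, and a real deployment would add rate limiting and clock-skew windows.

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password.

    The same shared secret lives on the server and in the phone's
    authenticator app; both sides derive the code independently.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server verifies by computing the same value for the current
# (and usually adjacent) time steps and comparing.
print(totp("JBSWY3DPEHPK3PXP"))  # example secret, not a real credential
```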

Technologies to watch in 2015

The above represents some challenges in the extreme, but also a huge opportunity to cross the chasm into a mobile and cloud-centric company or enterprise. Even with all that is going on to get that work done, this will also be a year when some new technologies make their appearance or begin to wind their way through early adopters. The following are just some technologies I will be watching for (particularly at the Consumer Electronics Show in January):

Beacon. To some, beacons are still a solution searching for a problem, but I think we are on the cusp of some incredibly innovative solutions. I have been playing with beacons and encourage startups that have any potential use for location to do the same. In terms of enterprise productivity, beacons plus a conference room or auditorium is one area where some incredibly innovative tools can be developed.
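
What a beacon actually gives an app is a signal-strength reading from which proximity is estimated. As a rough illustration (not any vendor's SDK), the standard log-distance path-loss model turns RSSI into an approximate distance; the calibration values below are illustrative assumptions:

```python
def estimate_distance(rssi: float, tx_power: float = -59.0, n: float = 2.0) -> float:
    """Rough distance in meters from a beacon, via the log-distance
    path-loss model.

    tx_power is the calibrated RSSI at 1 meter (advertised by iBeacon
    hardware); n is the environment's path-loss exponent (~2 in free
    space, higher indoors). Both defaults here are illustrative.
    """
    return 10 ** ((tx_power - rssi) / (10 * n))

# A reading at the 1 m calibration value maps to ~1 meter;
# weaker readings map to greater (and noisier) distances.
for rssi in (-59, -70, -80):
    print(rssi, "dBm ->", round(estimate_distance(rssi), 1), "m")
```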

4K and beyond. Moore’s law applied to pixels has been incredible. Apple’s 5K iMac topped off a year in which we saw 4K displays for hundreds of dollars. In mobile, pixel density will increase (to the degree that battery life, OS and hardware can keep up), and for desktop and wall, screen size will continue to increase. Wall-sized displays, wireless transmission and hopefully touch will introduce a whole new range of potential solutions for collaboration, signage and education.
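
The wireless-transmission part is the hard one, as a little arithmetic shows. This back-of-envelope sketch assumes 24 bits per pixel at 60 frames per second, which is why practical wireless display schemes lean on compression:

```python
# Uncompressed bandwidth for common display formats.
formats = {
    "1080p": (1920, 1080),
    "4K UHD": (3840, 2160),
    "5K (iMac)": (5120, 2880),
}
BITS_PER_PIXEL, FPS = 24, 60

for name, (w, h) in formats.items():
    gbps = w * h * BITS_PER_PIXEL * FPS / 1e9
    print(f"{name}: {w * h / 1e6:.1f} megapixels, ~{gbps:.1f} Gbit/s uncompressed")
```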

Tablet keyboards. I am definitely biased in this regard, but I am looking forward to seeing a strong combination of tablets, keyboards and mobile OS enhancements. If you’re developing tablet apps, I’d make sure you’re testing them with keyboards as well. The idea that a laptop clamshell form factor can run a mobile OS is going to be normal by the end of the year. The need to convert between “tablet mode” and “laptop mode” isn’t a critical feature for productivity, especially at larger screen sizes. Physical keys will define a clamshell and make converting to a “tablet” awkward. Innovative touch-based covers could make a resurgence for smaller tablet form factors.


Payments. Apple Pay arrived in 2014 and will have a huge impact on how we view payments, yet the feature set and usage are still maturing. The transformation of payments will take a long time, but it will happen much faster than many think or hope. I am optimistic about traditional bank accounts, credit cards and currencies all being transformed by the blockchain and mobile. Because of the immense legacy infrastructure in the developed world, it is likely the developing world will lead in payments and banking.
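
The core idea behind the blockchain mentioned above fits in a few lines: each block commits to the previous block's hash, so history cannot be quietly rewritten. A toy sketch only, omitting consensus, proof-of-work and signatures entirely:

```python
import hashlib, json, time

def make_block(transactions, prev_hash):
    """A block commits to its transactions and to the previous block's
    hash; changing any historical transaction breaks every later link."""
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block(["alice pays bob 5"], prev_hash="0" * 64)
second = make_block(["bob pays carol 2"], prev_hash=genesis["hash"])
print(second["prev_hash"] == genesis["hash"])  # True: the chain link
```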

APIs. One of the most interesting differentiators of cloud services is the way APIs are offered and consumed. Every cloud service offers APIs that are easily consumed at the right abstraction levels. In the old days, a client-server API would look like SQL tables. Today, the same capability is exposed the way you think about developing custom apps: time to solution is greatly reduced, and integration with other services is straightforward. I’ll be on the lookout for services with cool APIs and services that take advantage of APIs used by other services.
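
The difference in abstraction level is easy to see side by side. The endpoint and response shape below are hypothetical, standing in for any modern resource-oriented service:

```python
# Old style: the "API" is the database schema, and the client owns the joins.
#   SELECT o.id, o.total FROM orders o
#   JOIN customers c ON c.id = o.customer_id
#   WHERE c.email = 'pat@example.com';

# Cloud style: the API is the resource you are actually thinking about.
import requests

resp = requests.get(
    "https://api.example.com/v1/customers/pat@example.com/orders",
    headers={"Authorization": "Bearer <token>"},  # placeholder credential
    timeout=10,
)
resp.raise_for_status()
for order in resp.json()["orders"]:   # assumed response shape
    print(order["id"], order["total"])
```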

Machine-learning services. Artificial intelligence has always been five years away. I can safely say that has been the case for at least my entire programming lifetime, starting with “Shall we play a game?” Things have changed dramatically over the past year. We now see ML as a service, even from IBM. The ability to easily get to large corpora and to efficiently compute over training data on cloud-scale servers is a gift. While it is likely that everything will be marketed using ML terms, the real win will be for those building products that just use the services and deliver customer benefit from them. I’m keeping an eye on opportunities for machine learning to improve products.
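
In that spirit, ML as product plumbing can be as mundane as routing support tickets. A tiny sketch using scikit-learn with made-up training examples; a hosted ML service would replace the fit/predict calls with API calls:

```python
# Treat ML as plumbing: a tiny text classifier that routes support
# messages. The training examples are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["refund my order", "app crashes on launch",
         "charged twice this month", "login button does nothing"]
labels = ["billing", "bug", "billing", "bug"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["charged twice again"])[0])   # expected: "billing"
```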

On-demand. On-demand is redefining our economy. In many places, people still view on-demand as a “spoiled San Francisco” thing. But as you think about it, on-demand and same-day delivery bring a new level of efficiency and reductions in traffic, pollution, congestion, infrastructure and more. It is one of those things that is totally counterintuitive until you experience it, and until you start to think about the true costs of consumer-facing storefronts and supply chains. On-demand will be viewed as a macro-efficient necessity, not a super-luxury convenience.

From the coffee shop to the boardroom, 2015 will be a year of big leaps for everyone, as we tap into the new normal and execute on a foundation of new services, new paradigms and new platforms.

Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

December 28, 2014 at 7:00 pm

Posted in recode


Why Sony’s Breach Matters

This past year has seen more widespread, massive-scale, and damaging computer system breaches than any time in history. The Sony breach is just the latest—not the first or most creative or even the most destructive computer system breach. It matters because it is a defining moment and a turning point toward significant and disruptive changes to enterprise and business computing.

The dramatic nature of today’s breaches impacts enterprise computing at both the endpoint and the server infrastructure. This is a good news and bad news situation.

The bad news is that we have likely reached the limits of how much the existing infrastructure can be protected. One should not dismiss the Sony breach because of the company’s simplistic security architecture (a file named “Personal passwords.xls” with passwords in it is entertaining, but not the real issue). The bad news continues with the FBI’s assertion of the role of a nation state in the attack, or at the very least a level of sophistication that exceeded that of a multi-national corporation.

The good news is that several billion people are already actively using cloud services and mobile devices. With these new approaches to computing, we have new mechanisms for security and the next generation of enterprise computing. Unlike previous transitions, we already have the next generation handy and a cleaner start available. It is important to consider that no one was “training” on using a smartphone—no courses, no videos, no tutorials. People are just using phones and tablets to do work. That’s a strong foundation.

In order to better understand why this breach and this moment in time is so important, I think it is worth taking a trip through some personal history of breaches and reactions. This provides context as to why today we are at a moment of disruption.

Security tipping points in the past

All of us today are familiar with the patchwork of a security architecture that we experience on a daily basis. From multiple passwords, firewalls, VPNs, anti-virus software, admin permissions, the inability to install software, and more, we experience the speed-bumps put in place to thwart future attacks through some vector. To put things in context, it is worth talking about a couple of these speed-bumps. With this context we can then see why we’ve reached a defining moment.

For anyone less inclined to tech or details: below, I describe three technologies that were, each at its own moment in time, considered crucial by a healthy population of business users: MS-DOS TSRs, Word macros, and Outlook automation. The context around them changed over time, driving technology changes, like the speed bumps listed above, that previously would have been dismissed as too disruptive.

Starting as a programmer at Microsoft in 1989 meant I was entering a world of MS-DOS (Windows 3.0 hadn’t shipped and everyone was mostly focused on OS/2). If you were a university hire into the Apps group (yes, we called it that), you spent the summer in “Apps Development College” as a training program. I loved it. One thing I had to do, though, was learn all about viruses.

You have to keep in mind that back then most PCs weren’t connected to each other by networking, even in the workplace. The way you got a virus was by someone giving you an infected program via floppy (or by downloading one at 300 baud from a BBS). Viruses on DOS were primarily implemented using a perfectly legitimate programming technique called a “Terminate and Stay Resident” program, or TSR. TSRs provided many useful tools in the DOS environment. My favorite was Borland Sidekick, which I had spent summers installing on all the first-time PCs at the Cold War defense contractor where I worked. Unfortunately, a TSR virus once installed could trap keystrokes or interfere with screen or disk I/O.

I was struck at the time by how a relatively useful and official operating system function could be used to do bad things. So we spent a couple of weeks looking at bad TSRs and how they worked. I loved Sidekick and so did millions. But the cost of having this gaping TSR hole was too high. With Windows (protected mode) and OS/2, TSRs were no longer allowed. It caused quite an uproar, as many people had come to rely on TSRs for things like dialing their phone (really), recording notes, calendaring, and more. My lesson was that the pain and damage viruses caused outweighed the cost of breaking the workflow, even if that meant breaking it for all 20M people using business PCs at the time.

With the advent of Windows and email, businesses had a good run of both improved productivity and a world pretty much free of viruses. With Windows more and more businesses had begun to deploy Microsoft Word as well as to connect employees with email. Emailing documents around came to replace floppy disks.

Then in late 1996, seemingly all at once everyone started opening Word documents to a mysterious alert like the one below.

[Image: Word macro warning dialog showing the Concept virus described in the text.]

This annoying but benign development was actually a virus. The Word Concept virus (technically a worm, which at the time was a big debate) was spreading wildly. It attached itself to an incredibly useful feature of Word called the AutoOpen macro. Basically, Word had a snazzy macro language that could automatically do anything you could do in Word sitting in front of it typing (more on this later). AutoOpen allowed these macros to run as soon as you opened a document. You’d receive a document with Concept code in AutoOpen, and upon opening the document it would infect the default (and incredibly useful) template Normal.dot; from then on, every document you opened or created was subsequently infected. When you mailed a document or placed it on a file server, everyone opening that document would become infected the same way. This mechanism would become very useful for future viruses.
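
The propagation flow is simple enough to model in a few lines. This is a toy simulation of the mechanism just described, not macro code; the names are stand-ins for Word's template and AutoOpen behavior:

```python
# Toy model: documents carry macros, AutoOpen runs on open, and an
# infected document first infects the shared template so every later
# document inherits the payload.
class Doc:
    def __init__(self, name, macros=None):
        self.name = name
        self.macros = set(macros or [])

normal_template = Doc("Normal.dot")            # every new doc starts from this

def open_doc(doc):
    if "CONCEPT_AUTOOPEN" in doc.macros:       # AutoOpen fires on open...
        normal_template.macros.add("CONCEPT_AUTOOPEN")  # ...and infects the template

def new_doc(name):
    return Doc(name, normal_template.macros)   # new docs inherit template macros

open_doc(Doc("attachment.doc", ["CONCEPT_AUTOOPEN"]))  # one infected attachment
memo = new_doc("memo.doc")                              # every doc after that...
print("CONCEPT_AUTOOPEN" in memo.macros)                # ...carries the macro: True
```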

Looking at this on the team, we were more than a little alarmed. Here was a core business use case. For example, AutoOpen would trigger all sorts of business processes, such as creating a standard document with the right formats and metadata or checking for certain conditions in a document management system. These capabilities were key to Word winning in the marketplace. Yet clearly something had to be done.

We debated just removing AutoOpen, but settled on beginning a long path toward a combination of warning messages and trust levels for macros, to maintain business processes and competitive advantages. One could argue with that choice, but the utility was real and the alternatives looked really bad. This lesson would come into play again in a short time.

The problem we had was that these code changes needed to be deployed. There was no auto update and most companies were not yet on the internet. We issued the patch which you could order on CD or download from an FTP site. We remanufactured the product and released a “point release” and so on (all these details are easily searched and the exact specifics are not important). The damage was done and for a long time “Concept removal” was itself a cottage industry.

Fast forward a couple of years: one weekend in 1999 I was at home and my phone rang (kids, that is the strange device connected to the wall that your parents have). I picked up my AT&T cordless phone, like Jerry used to have, and on the other end was a reporter. She got my number from a PR contact whom she had woken up. She was hyperventilating, and all I could make out was that she was asking me about “Melissa.” I didn’t know a Melissa and was pretty confused. I couldn’t check my email because I only had one phone line (kids, ask your parents about that problem). I hung up the phone and promised to call back, which I did.

I connected to work and downloaded my email. Upon doing so I became not only an observer but a participant in this fiasco. My inbox was filled with messages from friends with the subject line “Here is the document you asked for…don’t show anyone else :)”. Every friend from high school and college as well as Microsoft had sent me this same mail. Welcome to the world of the Melissa virus.

This virus was a document that took advantage of a number of important business capabilities in Word and Outlook. Upon opening the attached document, the first thing it did was turn off Word’s new security setting, the one previously added to protect against Concept. Long story. Of course it didn’t really matter, because vast numbers of IT pros had already disabled this feature (disabling it was possible as part of the feature) in order to keep line-of-business systems working. A lot of lessons there that inform the next set of choices.

In addition, the macro in that attachment then used the incredibly useful Outlook extensibility capabilities known as the VBA object model to enumerate your address book and automatically send mail to the first 50 contacts. I know that to most of you the idea of this behavior being useful is akin to lighting up a cigar in the middle of a pitch meeting, but believe it or not this capability was exactly what businesses wanted. With Outlook’s extensibility we gained all sorts of mini-CRM systems, time organizers, email management tools, and more. Whole books were written about these features.
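
The self-mailing flow is just as compact to model. Again a toy simulation of the mechanism described above, with in-memory stand-ins for Outlook's address book and send-mail call; nothing here is actual VBA or the Outlook object model:

```python
# Toy model of the Melissa flow: on open, the macro walks the address
# book and mails itself to the first 50 contacts.
def melissa_open(address_book, send_mail, attachment):
    for contact in address_book[:50]:          # first 50 entries, as described
        send_mail(
            to=contact,
            subject="Here is the document you asked for...don't show anyone else :)",
            attachment=attachment,             # the infected document itself
        )

outbox = []
melissa_open(
    address_book=[f"friend{i}@example.com" for i in range(200)],
    send_mail=lambda **mail: outbox.append(mail),
    attachment="list.doc",
)
print(len(outbox))   # 50 messages, each carrying the infected attachment
```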

Once again we worked on a weekend trying to figure out how to trade off functionality that was not only useful but baked into how businesses worked. We valued compatibility and our commitment to customers immensely, but at the same time this was causing real damage.

The next day was Monday, and the headline in USA Today was about how this virus had spread to an estimated 20% of all PCs and was going to cost billions of dollars to address (I can’t find the actual headline, but this will do). I don’t know about you, but waking up feeling like I caused something like that (taking ownership and accountability as managers do) was very difficult. But it also made the next choices more reasonable.

We immediately architected and implemented a solution (I say we, but I mean literally the whole Outlook team of about 125 engineers focused on this). We introduced the Outlook E-mail Security Update. This update essentially turned off the Outlook object model, would no longer open the vast majority of attachment types at all, and would always prompt for all other attachments. We also updated all the apps to harden the macro security work. These changes were Draconian and unprecedented.

Thinking back to the uproar over breaking Sidekick in Windows 3, this one was unprecedented. Enterprise customers were on the phone immediately. We were writing white papers. We were working with third parties who built and thrived on Outlook extensibility. We were arming consultants to rebuild workflows and add-ins. While we might have “caused” billions in damage with our oversight (in hindsight), it seemed like we were doing more damage. Was the cure worse than the disease?

Prevent, rather than cure?

Fast forward through Slammer, Blaster, ILOVEYOU, and on and on. Continue through internet zone, view only mode for attachments, Windows XP SP2 and more. The pattern is clear. We had well-intentioned capabilities that when strung together in novel ways went from enterprise asset to global liability with catastrophic side effects.

Each step in the process above resulted in another speed-bump or diversion. Through the rise of the internet and the widespread use of the massively more secure NT OS kernel, vast improvements have been made to computing. But bad actors are just bad actors. They aren’t going away. They adapt. Now they are supported by nation states or global criminal operations. Whether it is for terror, political gain or financial gain, there is a great deal at stake. Today’s critical infrastructure is powered by systems that have major security challenges. Trillions of dollars of infrastructure is out there, and the risk runs in many directions.

My personal view is that there is no longer an ability to add more speed-bumps, and even if there were, it would not address the changing environment. The road is covered with bumps and cones, but it is still there. The modern enterprise PC and server infrastructures have been infiltrated with tools, processes, and settings to reduce the risk in today’s environment. Unfortunately, in the process they have become so complex and hard to manage that few can really know these systems. Those using these systems are rapidly moving to phones and tablets just to avoid the complexity, unpredictability, and performance challenges faced in even basic work.

That is why we are at a defining moment.

What is wrong with the approach or architecture?

One could make a list a mile long of the specific issues faced with computing today. One could debate whether System A is more or less susceptible than System B. The reality is that whether you’re talking Windows, OS X, or Linux, on desktop or client, they are for all practical purposes equivalent: an Intel-based OS architected in the 1980s, with capabilities packaged at the user level for that era.

It is entirely possible to configure an environment to be as secure as possible. The real questions are whether it would work the way you had hoped, and whether it would be maintainable in the face of routine computing tasks by average people. I proudly say I was never infected, except for Melissa and that time I used WiFi in China and that USB stick and so on. That is the challenge.

In the broadest sense, there are three core challenges with this architecture, which includes not just the OS but the hardware, peripherals, and apps across the platform. As any security expert will tell you, a system is only as secure as its weakest link.

Surface area of knobs and dials for end-users or IT. For 20 years, software was defined by how it could be broadly tweaked, deeply customized, or personalized at every level. The original TSRs were catching the most basic of keystrokes (ALT keys) and providing much-desired capabilities. The model for development was such that even when adding new security features, most every protection could be turned off (like macro security). Those who think this is only about clients should consider what a typical enterprise server or app is engineered to do. The majority of engineering effort in most enterprise server OSes and apps goes into ways to customize or hook the app with custom code or unique configurations. Even the basics of logging on to a PC are all about changing the behavior of a PC with an execution engine, under the guise of security. The very nature of managing a server or endpoint is about turning knobs and dials. What ports to open? What apps to run? What permissions? Firewall rules? Protocols? And on and on. This surface area, much of it designed to optimize and create business value, is also surface area for bad actors. It is not any one thing, but the way a series of extensions can be strung together for ill effect. Today’s surface area across the entire architectural stack is immense and well beyond any scope or capability for audit, management, or even inventory. Certainly no single security engineer can navigate it effectively.

Risk of execution engines. The history of computing is one of placing execution engines inside every program. Macro languages, runtimes, and more: execution engines layered on top of programs that are themselves execution engines. Macros or custom “code” defined the generation. Apps all had the ability to call custom code and to tap directly into native OS services. Having some sort of execution engine and the ability to communicate across running programs was not just a feature but a business and competitive necessity. All of this was implemented at the lowest, most flexible level. Few would have thought that providing such a valuable service, one in use and deployed by so many, would prove to be used for such negative purposes. Today’s platforms have an almost uncountable number of execution engines. In fact, many tools put in place to address security are themselves engines, and those too have been targeted (anti-virus, router front ends, and more have all recently been the target of one of many steps in exploits). Today’s mobile apps can’t even make it through the app store approval process with an execution engine. See Steve Jobs’ Thoughts on Flash.
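
The pattern is easy to demonstrate in miniature. This toy is not any specific product's flaw, just the shape of the problem: a "formula" feature that evaluates user input is one string away from running attacker-chosen code.

```python
# A "formula" feature built on eval is itself an execution engine.
def spreadsheet_formula(expr):
    return eval(expr)            # the embedded execution engine

print(spreadsheet_formula("2 * (3 + 4)"))    # the intended use: 14

# The same engine, misused: a harmless stand-in for arbitrary code.
hostile = "__import__('os').getcwd()"
print(spreadsheet_formula(hostile))          # runs attacker-chosen code
```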

Vector of social. Technology can only go so far. As with everything, there’s always a solid role for humans to make mistakes or to be tricked into making mistakes. Who wouldn’t open a document that says “Don’t open”? With a hundred passwords, who wouldn’t write them down somewhere? Who wouldn’t open an email from a close college friend? Who wants the inconvenience of using SMS to sign on to a service? Why wouldn’t you use the USB memory stick given to you at a global summit of world leaders, or connect to the WiFi at an international business-class hotel? There are many areas where taking humans out of the equation is going to make the world safer and better (cars, planes, manufacturing), freeing up resources for other endeavors. Using computing to communicate, collaborate, and create, however, is not on a path to be human-free.

There are other ways to describe the current state of challenges and certainly the list of potential mitigations is ever-growing. When I think of the experience over the past 20 years of escalations, my view is that these are the fundamental challenges to the platform. More speed-bumps will do nothing to help.

Why are we in much better shape?

Well if you made it this far you probably think I painted a rather dystopian view of computing. In a sense I am just thinking back to that weekend phone call about my new friend Melissa. I can empathize with those professionals at Sony, Target, Home Depot, Neiman Marcus, and the untold others who have spent weekends on breaches. I can also empathize with the changes that are about to take place.

It is a good idea to go through and put in more speed bumps and triple-check that your IT house is in order. It is unfortunate that most IT professionals will be doing so this holiday season. That is the job and the work that needs to be done. But it is a short-term salve.

When the dust settles we need a new approach. We need the equivalent of breaking a bunch of existing solutions in order to get to a better place. If there’s one lesson from the experiences portrayed in this post, it is that no matter how intense the disruption one creates, it won’t go far enough, and it will still cause an untold amount of pushback and discomfort from those who have real work to get done. Those in charge, or with self-declared technical skill, will ask for exceptions because they can be trusted or will act differently than the masses. It only takes one hole in a system, so exceptions are a mistake. I definitely have been wrong personally in that regard.

All is not lost however. We are on the verge of a new generation of computing that was designed from the ground up to be more secure, more robust, more manageable, more usable, and simply better. To be clear, this is absolutely positively not a new state of zero risk. We are simply moving the barriers to a new road. This new road will level the playing field and begin a new war with bad actors. That’s just how this goes. We can’t rid the world of bad actors but we can disrupt them for a while.

New OS and app architectures. Today’s modern operating systems, designed for mobile and running on ARM, decidedly reset some of the most basic attack vectors. We can all bemoan the app store (or app store approval) or app sandboxing. We can complain about “App would like access to your Photos.” But these architectural changes are significant barriers to bad actors. One day you can open a maliciously crafted photo attachment and have a buffer overrun that plants some code on a PC to do whatever it wants (simplified description). And then the next day that same flow on a modern mobile OS just doesn’t work. Sure, lots of speed-bumps, code reviews, and more have been put in place, but the same sequence keeps happening on PCs because 20 years and hundreds of millions of lines of code can’t get fixed, ever. A previous post detailed a great deal more about this topic.

Cloud services designed for API access of data. The cloud is so much more than hosting existing servers and server products. In fact, hosting an existing server app or OS is essentially a speed-bump, not a significant win for security. Moving existing servers to be VMs in a public or “private” cloud adds complexity for you and a minimal bump for bad actors. Why is that? The challenge is that all that extensibility and customizability is still there. Worse, customers moving to a hosted world for their existing capabilities are asking to maintain parity. Modern cloud-native products designed from the ground up have a whole different view of extensibility and customization from the start. Rather than hooks and execution engines, the focus is on data and API customization. The surface area is much smaller from the very start. To some this might seem like too subtle a difference, and certainly some will claim that moving to the cloud is a valid hardening step. For example, in a cloud environment you don’t have access to “all the files” for an organization by using easy drag-and-drop end-user tools from an endpoint. My view is that now is a perfect time to reduce complexity rather than simply hide it behind a level of indirection. This is enormously uncomfortable for IT organizations that prided themselves on excellent work combining customization and configuration with business needs.

Cloud-native companies and products. When engineers moved from writing DOS programs to Windows programs, whole brain patterns needed to be rewired. The same thing is true when you move from client and server apps to mobile and cloud services. You simply do everything in a different way. This different way happens to be designed from the start with a whole different approach to security and isolation. This native view extends not just to how features are exposed but to how products are built. Developers don’t assume access to random files or OS hooks, simply because those don’t exist. More importantly, the notion that a modern OS is all about extensibility, arbitrary code execution on the client, or customization at the implementation level is foreign to the modern engineer. Everyone has moved up the stack, and as a result the surface area is dramatically reduced and complexity is removed. It is also a reality that the cloud companies are going to be security-first in everything they do and in their ability to hire and maintain the most sophisticated cyber security groups. For these companies, security is an existential quality of the whole company, and that is felt by every single person in the entire company. I know this is a heretical statement, but look at the companies that have been breached: they are some of the largest companies, with the most sophisticated and expensive security teams in non-technology businesses. Will a major cloud vendor be breached? It is difficult to say that it won’t happen. But the odds are so much better with cloud-native providers than with even the most excellent enterprise.

New authentication and infrastructure models. Imagine a world of ubiquitous two-factor authentication and password changing verified by SMS to a device with location awareness and potentially biometrics, or even simple PINs. That’s the default today, not some mechanism requiring a dongle, a VPN, and a 10-minute logon script. Imagine a world where firewalls are crafted by software that knows the reachability of apps and nodes, not by tens of thousands of rules managed by hand and essentially untouchable even during a breach. That’s where infrastructure is heading. This is the tip of the iceberg, but things in this world of basic networking, identity and infrastructure are being dramatically changed by software and cloud services—beyond just apps and servers.

Every major change in business computing that came about because of a major breach or disruption of services caused a difficult or even painful transition to a new normal. At each step business processes and workflow were broken. People complained. IT was squeezed. But after the disruption the work began to develop new approaches.

Today’s mobile world of apps and cloud services is already in place. It is not a plug-in substitute for what we have been using for 20 or more years, but it is better in so many ways. Collaboration, mobility, flexibility, ease of deployment and more are vastly improved. Sharing, formatting, emailing and more will change. It will be painful. With that challenge will come a renewed sense of control and opportunity. Like the 15 or so years from TSRs to Melissa, my bet is we will have a period of time free of bad actors, at least in the old ways, for enterprises that make the changes needed.

—Steven Sinofsky (@stevesi)


Written by Steven Sinofsky

December 21, 2014 at 10:00 pm

Posted in posts


Startups aren’t features (of products or companies)

Companies often pay very close attention to new products from startups as they launch, and ponder their impact on the incumbent’s at-scale, mainstream work. Almost all of the time the competitive risk is deemed minimal. Then one day the impact is significant.

In fact, up until that point most pundits and observers likely said that the startup would get overrun or crushed by a big company in the adjacent space. By then it is often too late for the incumbent, and what was a product challenge now looks like an opportunity to take on the challenges of venture integration.

Why is this dynamic so often repeated? Why does the advantage tilt to startups when it comes to innovation, particularly innovation that disrupts the traditional category definition or go to market of a product?

Much of the challenge described here is rooted in how we discuss technology disruption. Incumbents are faced with “disruption” on a daily basis and from all constituencies. To a great degree, as an incumbent, the sky is always falling. For every product that truly disrupts, there are likely hundreds of products, technologies, marketing campaigns, pricing strategies and more that some were certain would be the last straw for an incumbent.

Because statistically new ideas are not likely to disrupt, and new companies are likely to fail, incumbents become experts at defining away the challenges and risks posed by a new entrant into the market. Incumbents view wild swings in strategy or execution as much riskier than the 1-in-100 odds of a new technology upending the near-term business. Factor in any reasonable timeline, and the incumbent has every incentive to side with the statistics.

To answer “why startups aren’t features,” this post looks at three elements of the dynamic when a startup competes with an incumbent: the incumbent’s reaction, the challenges faced by the incumbent, and the advantages of the startup.

Reaction

When a startup enters a space thought (by the incumbent or by conventional wisdom) to be occupied by an incumbent, there is a series of reasonably predictable reactions. The more entrenched the incumbent, the more reasoned and bulletproof the logic appears to be. Remember, most technologies fail to take hold and most startups don’t grow into significant competitors. I’ve personally reacted to this situation as both the startup and the incumbent.

Doesn’t solve a problem customers have. The first reaction is to just declare a product as not solving a customer problem. This is sort of the ultimate “in the bubble” reaction because the reality is that the incumbent’s existing customers almost certainly don’t have the specific problem being solved because they too live in the very same context. In a world where enterprises were comfortable sending PPT/PDFs over dedicated lines to replicated file servers, web technologies didn’t solve a problem anyone had (this is a real example I experienced in evangelizing web technology).

Just a feature. The next reaction to most startups is that whatever is being done is a feature of an existing product. Perhaps the most famous of all of these was Steve Jobs declaring Dropbox to be “a feature not a product.” Across the spectrum from enterprise to consumer, this reaction is routine. Every major communication service, for example, enabled the exchange of photos (AIM, Messenger, MMS, Facebook, and more). Yet, from Instagram to Snapchat, some incredibly innovative and valuable startups have been created that to some do nothing more than slight variations on sharing photos. In collaboration, email, app development, storage and more, enterprise startups continue to innovate in ways that solve problems in uniquely valuable ways, all while incumbents feel like they “already do that.” So while something might be a feature of an existing product, it is almost certainly not a feature exactly like one in an existing product, nor likely to become one.

Only a month’s work. One asset incumbents have is an existing engineering infrastructure and user experience. So when a new “feature” becomes interesting in the marketplace and discussions turn to “getting something done,” the conclusion is usually that the work is about a month. Often this is based on an estimate of how much effort the startup put into the work. However, the incumbent has all sorts of constraints that turn that month into many months: globalization, code reviews, security audits, training customer support, developing marketing plans, enterprise customer roadmaps, not to mention all the coordination and scheduling adjustments. On top of all of that, we all know that it is far easier to add a new feature to a new code base than to a large and complex one. So rarely is something a month’s work in reality.

Challenges

One thing worth doing as a startup (or as a customer of an incumbent) is considering why the challenges continue even if the incumbent spins up an effort to compete.

Just one feature. If you take at face value that the startup is doing just a feature then it is almost certainly the case that it will be packaged and communicated as such. The feature will get implemented as an add-on, an extra click or checkbox, and communicated to customers as part of the existing materials. In other words, the feature is an objection handler.

Takes a long time to integrate. At the enterprise level, the most critical part of any new feature or innovation is how it integrates with existing efforts. In that regard, the early feedback about the execution will always push for more integration with existing solutions. This will slow down the release of the efforts and tend to pile on more and more engineering work that is outside the domain of what the competitor is doing.

Doesn’t fit with the broad value proposition. The other side of “just one feature” is that the go-to-market execution sees the new feature as somehow conflicting with the existing value proposition. This means that while people seem to be seeing great value in a solution, the very existence of the solution runs counter to the core value proposition of the existing products. If you think about all those photo-sharing applications, the whole idea was to collect all your photos and enable you to later share them or order prints or mugs. Along comes disappearing photos, and that doesn’t fit at all with what you do. At the enterprise level, consider how the enterprise world was all about compliance and containing information while faced with file sharing that is all about going beyond the firewall. Faced with reconciling these positioning elements, the incumbent will choose to sell against the startup’s scenario rather than embrace it.

Advantages

Startups also have some advantages in this dynamic that are readily exploitable. Most of the time when a new idea is taking hold one can see how the startup is maximizing the value they bring along one of these dimensions.

Depth versus breadth. Because the incumbent often views something new as a feature of an existing product, the startup has an opportunity to innovate much more deeply in the space. Once a scenario becomes interesting, the flywheel of innovation that comes from usage creates many opportunities to improve the scenario. So while the early days might look like a feature, a startup is committed to the full depth of a scenario and only that scenario. It doesn’t have any pressure to maintain something that already exists or spend energy elsewhere. In a world where customers want an app to offer a full-stack solution, or expect a tool to complete the scenario without integrating something else, this turns out to be a huge advantage.

Single release effort. The startup is focused on one line of development. There’s no coordination, no schedules to align, no longer term marketing plans to reconcile and so on. Incumbents will often try to change plans but more often than not the reactions are in whitepapers (for enterprise) or beta releases (for consumer). While it might seem obvious, this is where the clarity, focus, and scale of the startup can be most advantageous.

Clear and recognizable value proposition/identity. The biggest challenge incumbents face when adding a new capability to their product or product line is where to put it so it will get noticed. There is already enormous surface area in the product, the marketing, and the business/pricing. Even the basics of telling customers that you’ve done something new are difficult, and a specific new feature often ends up as a supporting point under the third pillar. Ironically, those arguing to compete more directly are often faced with internal pressures that amount to “don’t validate the competitor that much.” This means that even if the feature exists in the incumbent’s product, it is probably really difficult to know that and equally difficult to find. From the startup’s perspective, the company comes to stand for the entire end-to-end scenario, and over time, when customers’ needs turn to that feature or scenario, there is total clarity about where to get the app or service.

Even with all of these challenges, the dynamic continues: incumbents initially dismiss startup products, later attempt to build what the startups do, and in general have difficulty reacting to the inherent advantages of a startup. One needs to look long and hard for a story where an incumbent organically competed and won against a startup in a category or feature area.

Secret Weapon

More often than not the new categories of products come about because there is a change in the computing landscape at a fundamental level. This change can be the business model, for example the change to software as a service. It could also be the architecture, such as a move to cloud. There could also be a discontinuity in the core computing platform, such as the switch to graphical interface, the web, or mobile.

There’s a more subtle change, which is when an underlying technology change is simply too difficult for incumbents to absorb in an additive fashion. The best way to think about this is if an incumbent has products in many spaces, but a new product arises that contains a little bit of two of the incumbent’s products. In order to compete effectively, the incumbent first must go through a process of deciding which team takes the lead in competing. Then it must address innovator’s-dilemma challenges and allocate resources in this new area. Then it must execute both the technology plans and the go-to-market plans. While all of this is happening, the startup, unburdened by any of these, races ahead creating a more robust and full-featured solution.

At first this might seem a bit crazy. As you think about it though, modern software is almost always a combination of widely reused elements: messaging, communicating, editing, rendering, photos, identity, storage, API / customization, payments, markets, and so on. Most new products represent bundles or mash-ups of these ingredients. The secret sauce is the precise choice of elements and of course the execution. Few startups choose to compete head-on with existing products. As we know, the next big thing is not a reimplementation of the current big thing.

The secret weapon for startups competing with large-scale incumbents is to create a product that spans the incumbent’s engineering organizations, takes a counterintuitive architectural approach, or lands in the middle of the different elements of a go-to-market strategy. While it might sound like a master plan to do this on purpose, it is amazing how often entrepreneurs simply see the need for new products as a blending of existing solutions, a revisiting of legacy architectural assumptions, and/or an emphasis on different parts of the solution.

—Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

November 17, 2014 at 12:00 pm

Posted in posts


Management Clichés That Work

Managing product development, and management in general, is rife with clichés. By definition, of course, a cliché is something that is true but unoriginal. I like a good cliché because it reminds you that much of management practice boils down to things you need to do but often forget or fail to do often enough.

The following 15 clichés might prove helpful in making sure you’re really doing the things in product development that need to get done on a daily basis. Some of these are my own wording of others’ thoughts expressed differently. There’s definitely a personal story behind each of these.

Promise and deliver. People love to play expectations games, and that is always bad for collaboration within a team, with your manager, or externally with customers. The cliché “under promise and over deliver” is one that people often use with pride. If you’re working with another group or with customers, the work of “setting expectations” should not be a game. It is a commitment. Tell folks, with the best of intentions, what you believe you will do, and do everything to deliver on that. Over time it is far more valuable to be known to everyone as someone who gets done what you say.

Make sure bad news travels fast. Things will absolutely go wrong. In a healthy team, as soon as things go wrong that information should be surfaced. Trying to hide or obscure bad news creates an environment of distrust or a lack of transparency. This is especially noticeable on a team when the good news is always visible but for some reason less-good news lacks visibility. Avoid “crying wolf,” of course, by making sure you are broadly transparent in the work you do.

Writing is thinking. We’re all faced with complex choices about what to do or how to go about what will get done. While some people are great at spontaneously debating, most people are not, and most people are not great at contributing in a structured way on the fly. So when faced with something complex, spend the time to think about some structure, write down sentences, think about it some more, and then share it. Even if you don’t send around the writing, almost everyone achieves more clarity by writing. If you don’t, then don’t blame writer’s block; consider that maybe you haven’t formulated your point of view yet.

Practice transparency within your team. There’s really no reason to keep something from everyone on the team. If you know something and know others want to know, either you can share what you know or others will just make up their own idea of what is going on. Sharing this broad base of knowledge within a team creates a shared context which is incredibly valuable.

Without a point of view there is no point. In our world of A/B testing, MVPs, and iteration we can sometimes lose sight of why a product and company can/should exist. The reason is that a company brings together people to go after a problem space with a unique point of view. Companies are not built to simply take requests and get those implemented or to throw out a couple of ideas and move forward with the ones that get traction. You can do that as work for hire or consulting, but not if you’re building a new product. It is important to maintain a unique point of view as a “north star” when deciding what to do, when, and why.

Know your dilithium crystals. Closely related to your point of view as a team is knowing what makes your team unique relative to competition or other related efforts. Apple uses the term “magic” a lot, and what is fascinating is how with magic you can never quite identify the specifics, but there is a general feeling about what is great. In Star Trek the magic was dilithium crystals: if you ever needed to call out the ingredient that made things work, that was it. What is your secret (or, as Thiel asks, what do you believe that no one else does)? It could be branding, implementation, business model, or more.

Don’t ask for information or reports unless they help those you ask to do their jobs. If you’re a manager you have the authority to ask your team for all sorts of reports, slides, analysis, and more. Strong managers don’t exercise that authority. Instead, lead your team to figure out what information helps them to do their job and use that information. As a manager your job isn’t a superset of your team, but the reflection of your team.

Don’t keep two sets of books. We keep track of lots of things in product development: features, budgets, traffic, revenue, dev schedules, to do lists, and more. Never keep two versions of a tracking list or of some report/analysis. If you’re talking with the team about something and you have a different view of things than they do, then you’ll spend all your time reconciling and debating which data is correct. Keeping a separate set of books is also an exercise in opacity which never helps the broader team collaboration.

Showdowns are boring and nobody wins. People on teams will disagree. The worst thing for a team dynamic is to get to a major confrontation. When that happens and things become a win/lose situation, no one wins and everyone loses. Once it starts to look like battle lines are being drawn, the strongest members of the team will start to find ways to avoid what seems like an inevitable showdown. (Source: this is a line from the film “Wall Street.”)

Never vote on anything. On paper, when a team has to make a decision it seems great to have a vote. But if you’re doing anything at all interesting, it is almost certain that at least one person will have a different view. So the question is: if you’re voting, do you expect majority rule, two-thirds, consensus? Are some votes more equal than others? Ultimately, once you have a vote, the people who disagree are now singled out and probably isolated. My own history is that any choice that was ever voted on didn’t even stick. Leadership is about anticipating and bringing people along to avoid these binary moments. It is also about taking a stand and having a point of view if you happen to reach such a point.

When presenting the boss with n alternatives, he/she will always choose option n+1. If you’re asked to come up with a solution to a problem, or you run across a problem you have to solve but need buy-in from others, you’re taking a huge risk by presenting alternatives. My view is that you present a solution, and everything else is an alternative, whether you put it down on paper or not. A table of pros/cons or a list of options like a menu almost universally gets a response of trying to create an alternative that combines attributes that can’t be combined. I love choices framed as cost/quality, cheap/profitable, or small/fast, where the meeting then concludes in search of the alternative that delivers both.

Nothing is ever decided at a meeting, so don’t try. If you reach a point where you’re going to decide a big controversial thing at a meeting, then there’s a good chance you’re not really going to decide. Even if you do decide, you’re likely to end up with an alternative you didn’t think of beforehand, and thus one not as thought through or as feasible as you believed it to be by the end of the meeting. At the very least, you’re not going to enroll everyone in the decision, which means there is more work to be done. The best thing to do is not to avoid a decision-making meeting, but to figure out how you can keep things moving forward every day to avoid these moments of truth.

Work on things that are important, not urgent. Mobile tools like email, Twitter, SMS, and notifications of all kinds from all sorts of apps have a way of dominating your attention. In times of stress or uncertainty, we all gravitate to working on what we think we can accomplish. It is easier to work toward inbox zero than to actually dive in and talk to everyone on the team about how they are handling things, or to walk into that customer situation. President Eisenhower, and later Stephen Covey, developed amazing tools for helping you to isolate work that is important rather than urgent.

Products don’t ship with a list of features you thought you’d do but didn’t. The most stressful list of any product development effort is the list of things you have to cut because you’re running out of time or resources. I don’t like to keep that list and never did, for two reasons. First, it just makes you feel bad. The list of things you’re not doing is infinitely long–it is literally everything else. There’s no reason to remind yourself of that. Second, whatever you think you will do as soon as you can will change dramatically once customers start using the product you do end up delivering to them. When you do deliver a product it is what you made and you’re not obligated to market or communicate all the things you thought of but didn’t get done!

If you’re interesting, someone won’t agree with what you said. Whether you’re writing a blog or internal email, talking to a group, or speaking to the press, you are under pressure. You have to get across a unique point of view and be heard. The challenge is that if you only say things everyone already believes to be the case, then you’re not furthering the dialog. The reality is that if you are trying to change things or move a dialog forward, some will not agree with you. Of course you will learn, and there’s a good chance you were wrong, and that gives you a chance to be interesting in new ways. Being interesting is not the same as being offensive, contrarian, cynical, or just negative. It is about articulating a point of view that acknowledges a complex and dynamic environment that does not lend itself to simple truths. Do make sure you have the right mechanisms in place to learn just how wrong you were and with how many people.

Like for example, if you write a post of 15 management tips, most people won’t agree with all of them :)
–Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

October 23, 2014 at 12:00 pm

Posted in posts
