For just over a year, Tanium Corporation has been impressing enterprise customers with its special brand of Tanium magic — the ability to instantly learn anything you need to know about the PCs, servers, VMs, and embedded devices such as ATMs and point-of-sale systems on your network. About nine months ago Andreessen Horowitz was offered the opportunity to partner with Tanium and the founders David and Orion Hindawi, and we could not be more impressed with the progress and growth of the company. This week Tanium is adding some more magic to an amazing product.
Growing and Scaling
The Tanium team has been hard at work on the platform and in creating a great company. It is worth sharing a little bit about the progress they have made in less than a year:
- Tanium is deployed on over 10,000,000 endpoints, with individual customers managing hundreds of thousands of endpoints.
- Tanium is in broad deployment in over half the Fortune 100.
- Tanium is rapidly growing (and hiring) with a particular focus on expanding internationally.
- Even with growth on every metric, Tanium has stayed a cash-generating and profitable business.
Tanium’s product magic is matched by the team’s amazing leadership and execution.
Reimagining Systems Management and Endpoint Security
When customers first see Tanium, they are blown away by the speed at which IT can learn what is going on with the endpoints on the network. Tanium’s capability to navigate, interrogate, and act on even the largest enterprise networks in seconds is the magic that fires up customers: networks of millions of endpoints spanning PCs, servers, VMs, and embedded devices. This 15-second capability is the foundation of Tanium magic and is unprecedented for large-scale environments.
Traditionally, enterprises deploy Systems Management (SM) platforms to control their environments. Prior to Tanium, even state-of-the-art tools required immense investment in agents, logon scripts, policies, signature files, databases, dedicated infrastructure (servers and networking), and more, just to provide base-level information. These tools frustrate end users and CIOs alike by choking endpoints, burdening networks, and offering up information that is approximate at best and, at worst, irrelevant because it is outdated.
Tanium surpasses the state-of-the-art in systems management, which you’d expect from founders whose previous company built the leading tools of this generation before being acquired by IBM. Not content to stop there, Tanium’s ambition is much greater than improving on their previous solution, even if it is already “10,000 times better.”
That ambition is based on an important observation about today’s challenges in enterprise security, particularly the changing nature of attacks. Malicious attacks are no longer brute-force attempts to penetrate the network firewall or blunt viruses and malware that indiscriminately seize endpoints. We’re all aware that today’s attacks are multi-step, socially enabled or engineered, and by design circumvent network-based and traditional endpoint protection. We’ve seen that in all the recent breaches across Target, Home Depot, JP Morgan, Sony, and, most recently, Anthem.
In every case, once a breach becomes known, the most critical job of the security team is to scope the breach, identify compromised endpoints, and shut them down. Traditionally, security teams relied on network-based management solutions since those offered the fastest and most familiar tools. In practice, quickly identifying all the endpoints with an unpatched OpenSSL version, or all that match a known indicator of compromise, looks much less like a network security effort and more like an endpoint challenge, historically the domain of systems management. The problem is that systems management tools were designed for an era when most of their work took place at logon or during off-hours “sweeps” of assets, with results gathered over the course of weeks.
CIOs recognize that having a systems management team using one set of tools that can barely keep up with traditional demands, and a security team using tools focused only on the network edge, isn’t ideal by any measure. Systems management is now an integral part of incident detection and response. Conversely, security and protection require full knowledge and control of endpoints. Neither set of existing tools deployed in most environments is up to the task.
Tanium has been working with customers, from the CIO and CISO through the management and response teams, to deploy Tanium as a frontline, first-response platform that reimagines the traditional categories of systems management and endpoint security. In a world of unprecedented security risks, BYO devices, and ever-changing software needs, nothing short of a rethinking of the tools and approaches is required.
Tanium is a new generation of security and systems management capabilities that meets two modern criteria:
- Provide 15-second information on all endpoints. Open your browser, type in a natural language query, and instantly know every endpoint that meets particular criteria or matches an indicator of compromise (IOC): for example, running a certain process, a recently modified system state matching a pattern, particular network traffic, or literally anything you can imagine asking the endpoint. Aside from instant information, the key new capability is being able to learn about any aspect of the running system, even something unforeseen or unplanned. Results are real-time, live, and instantly refreshable.
- Remedy problematic situations immediately. Given the set of endpoints matching the criteria, take action immediately by shutting down endpoints, modifying the system configurations, quarantining devices, alerting users, or patching the appropriate modules, all in seconds rather than days. Aside from being able to immediately deploy the remedy, the key new capability is being able to implement any possible remedy across all endpoints, even within the largest networks in the world using minimal infrastructure.
The most innovative products are those that provide new ways of thinking about problems or new approaches that break down the traditional category boundaries. Tanium is such a platform, and that is why enterprises are so enthusiastic about what Tanium provides.
Shipping New Capabilities
This week Tanium is releasing some significant new capabilities that further the vision of a new category of product that serves the needs of both systems management and security professionals.
Tanium IOC Detect. Drawing on a wide variety of highly regarded third-party threat intelligence feeds and indicator-of-compromise templates, Tanium continuously works to identify at-risk endpoints in real time. Tanium is able to match the widest possible range of system attributes and patterns without downloading client-side databases or signature files. Security operations no longer needs to sift through intelligence feeds manually or script signatures to feed into legacy systems management tools. Instead, Tanium makes it possible to detect and remediate threats immediately at massive scale.
Tanium Patch. Tanium transforms a process that’s error-prone and time-consuming with the ability to deploy patches across hundreds of thousands of endpoints in seconds, with 99%+ reliability and no scripting required by the IT team. Using two of Tanium’s key architectural elements, the communications layer and the data transport layer, patches are deployed and installed with unprecedented speed and minimal impact on network infrastructure. Since many security breaches require updates to endpoints to truly remedy them, Tanium brings together the needs of both security and management processes.
Tanium Connect. Tanium integrates its 15-second data into third-party security and management tools to make those tools more accurate and actionable. For example, Tanium’s ability to quickly see anomalies on endpoints can be used to create alerts in security information and event management (SIEM) systems. Traditionally this data would be impossible to collect or would be routed through existing systems management infrastructures, which are labor intensive and high-latency data sources. Tanium Connect provides the security operations data required to ascertain the threat and, because the data is only seconds old, the team knows it is worthy of investigation.
These are just a few of the improvements to Tanium’s 6.5 platform available this week.
Tanium’s magic innovation uniquely positions the company at the modern crossroads of systems management and security tools. Tanium’s platform reimagines these categories, while seamlessly working with existing infrastructure, and adds a new level of value and capability to forward-leaning IT teams.
Given this superb team, amazing growth, and unparalleled innovation, we could not be happier to lead a new round of investment in this wonderful company. Andreessen Horowitz is incredibly excited to be partnering with David, Orion, and the Tanium team, and I could not be more thrilled to continue serving on Tanium’s Board of Directors.
Note: This post also appeared on http://a16z.com/blog.
No one wants friction in their products. Everyone works to reduce it. Yet it sneaks in everywhere. We collectively praise a service, app, or design that masterfully reduces friction. We also appreciate minimalism. We love when products are artfully distilled down to their essence. How do we achieve these broadly appreciated design goals?
Frictionless and minimal are related but not necessarily the same. The two are often conflated, which can lead to design debates that are difficult to resolve.
A design can be minimal but still have a great deal of friction. The Linux command line interface is a great example of minimal design with high friction. You can do everything through a single prompt, as long as you know what to type and when. The minimalism is wonderful, but the ability to get going comes with high friction. The Unix philosophy of small cooperating tools is wonderfully minimal (every tool does a small number of things and does them well), but the learning and skills required are high friction.
- Minimalist design is about reducing the surface area of an experience.
- Frictionless design is about reducing the energy required by an experience.
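The Unix example above can be made concrete with a minimal sketch (a generic illustration, not from the original post): a word-frequency count built from four small, single-purpose tools. The surface area is tiny, yet you must already know what each tool does and how pipes compose them; that required knowledge is the friction.

```shell
# Word-frequency count using four small cooperating tools, composed with pipes:
#   printf   - emit sample lines (a stand-in for any log stream)
#   sort     - group identical lines together
#   uniq -c  - count each run of identical lines
#   sort -rn - order by count, highest first
printf 'error\nok\nerror\nerror\nok\n' | sort | uniq -c | sort -rn
```

The minimalism lives in the tools, each doing one small thing well; the friction lives in the knowledge needed to compose them, which is exactly the distinction the two bullets above draw.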
When debating a design choice, feature addition, or product direction it can help to clarify whether a point of view originates from a perspective of keeping things minimal or reducing friction. If people discussing a decision start from this common understanding, I bet a decision will be reached sooner. Essentially, is the debate about adding a step or experience fork, or is it about adding something at all?
Product managers need to choose features to add. That is what makes all of this so difficult. As great as it is to stay pure and within original intent, if you and the team don’t enhance the capabilities of your product, then someone will do what you do but with a couple more things or a different factoring, and you’ll be left in the dust.
Therefore the real design challenge is not simply maintaining minimalism, but enhancing a product without adding friction. Let’s assume you built a product that does something very exciting, has very low friction, and does so with a minimal feature set. The next efforts are not about just watching your product, but about deciding how to address shortcomings, enhance, or otherwise improve the product to grow users, revenue, and popularity. The risk with every change is not simply failing to maintain minimalism, but introducing friction that becomes counterproductive to your goals.
When you look back you will be amazed at how the surface area of the product has expanded and how your view of minimalism has changed. Finding the right expression of new features such that you can maintain a minimalist approach is a big part of the design challenge as well.
There’s an additional design challenge. The first people who use your product will likely be the most enthusiastic, often the most technical, and in general the most desirous of features that introduce friction. In other words you will get the most positive feedback by adding features that ultimately will result in a product with a lot more friction.
Product managers and designers need to find the right balance as the extremes of doing nothing (staying minimal) and listening to customers (adding features) will only accelerate your path to replacement either by a product with more features or a product with less friction.
Low-Friction Design Patterns
Assuming you’re adding features to a product, the following are six design patterns to follow, each essentially a way to reduce friction in your product. Friction is anything that forces people to learn, consider, or futz, or that otherwise keeps them from racing through the product to get something done.
- Decide on a default rather than options
- Create one path to a feature or task
- Offer personalization rather than customization
- Stick with changes you make
- Build features, not futzers
- Guess correctly all the time
Decide on a default rather than options. Everything is a choice. Any choice can be A/B tested or debated as to whether it works or not. The more testing you do, the more likely you are to find cohorts of people who prefer different approaches. The natural tendency will be to add an option or setting to allow people to choose their preference, or worse, to interrupt their flow and ask for it. Make a choice. Take a stand. Every option is friction in the system (and code to maintain). When we added the wheel to the mouse in Office 97, there was a split in the team over whether the wheel should scroll down or zoom in/out. From the very first release there was an option to appease the part of the team that felt zoom was more natural. Even worse, the Word team did a ton of work to make zoom performant, since it was fairly unnatural at the time.
Create one path to a feature or task. You add a new feature and all is good: you’re doing X in your product and then you can do Z. Then someone points out that there are times when you are doing Y in your product and you also want to do Z. Where there was once one path to a feature, you now think about adding a second path. Maybe that sounds easy enough. A few iterations down the road, you have five different ways to get to Z. This whole design process leads to shortcuts, floating buttons, context menus, and more, all of which are favored by your early adopters, add friction for everyone else, and add code. Pick the flow and sequence and stick with it. The most famous debate of all between Windows and Mac was over right-click, and it still rages. But the design energy required to populate context menus, and the cognitive load of knowing what you can or cannot do from them, is real. How many people have right-clicked a file on the Windows desktop and clicked “Send”, only to be launched into some Outlook configuration dialog, when it would have been frictionless to simply know that inserting an attachment in mail always works and nothing will fail?
Offer personalization rather than customization. Early adopters of a product love to customize and tweak. That’s the nature of being a tech enthusiast. The theory is that customization makes a product easier to use because every use case is different enough that the time and effort saved by customization is worth it. In managing a product over time, though, customization becomes nearly impossible to maintain from an engineering standpoint: when you want to change behavior or add a feature, the piece of the product it depends on may have been hidden or moved by the customer. The ability in Office to reorganize all the toolbars and menus seemed super cool at the time. Then we wanted to introduce a new scalable structure that would work across resolutions and input devices (the ribbon). The problem was not just the upgrade but the reality that the friction introduced by never knowing where the menus might be (at the extreme, one could open a document that would rearrange the UX) was so high the product was unusable. Enterprise customers were rearranging the product to the point that people couldn’t take courses or buy books on how to use Office. The constraint led to the addition of a single place for personalization (the Quick Access Toolbar), which ultimately allowed for a much lower-friction design overall by enabling personalized efficiency without tweaking the whole experience.
Stick with changes you make. The ultimate design choice is when you change how a feature used by lots of customers works. You are choosing to deliberately upend their flow and add friction. At the same time, the job of designing a product is moving it forward to new scenarios and capabilities, and sometimes that means revisiting a design choice, perhaps one that has become the standard. It takes guts to do this, especially because you’re not always right. Often the path is to introduce a “compatibility mode” or a way to turn your new product into the old and comfortable product. This introduces three problems. First, you have to decide what the default will be (see the first rule above). Second, you have to decide if/how to enhance the old way of doing things while you’re also adding new things. Third, you have to decide when down the road you remove the old way, but in reality that will be never because you already told customers you value it enough to keep it around. But adding compatibility mode seems so easy and customer friendly! Ultimately you’re creating technical debt that you can never dig out of. At the same time, failing to make big changes like this almost certainly means your product will be surpassed in the marketplace. See this HBS case on the Office 2007 Ribbon design: http://www.hbs.edu/faculty/Pages/item.aspx?num=34113 ($).
Build features, not futzers. Tools for creativity are well known for elaborate palettes of formatting, effects, and other composition controls. Often these are built on amazing “engines” that manage shapes, text, or image data. Historically, tools of creativity have prided themselves on exposing the full range of capabilities enabled by these engines, and these vast palettes of features came to define how products compete in the marketplace. In today’s world of mobility, touch interfaces, and timely, continuous productivity, people do not necessarily want to spend time futzing with all the knobs and dials; they seek to minimize the time from idea to presentation — call this the Instagram effect. Yet even today we see too many tools that are about debugging your work, which is vastly different from getting work done. When a person needs a chart, a table, a diagram, or an image, how can you enable them to build that out of high-level concepts rather than the primitives that your engine supports? I was recently talking to the founder of an analytics company struggling with customer input on tweaking visualizations, which was adding complexity and taking engineering time away from adding whole new classes of visualization (like maps or donut charts). You’ll receive a lot of input from early customers to enable slightly different options or adjustments, which will both challenge minimalism and add friction to your product without growing the breadth of scenarios your product enables. Staying focused on delivering features will enable your product to do more.
Guess correctly all the time. Many of the latest features, especially those based on machine learning or statistical models, involve taking action based on guessing what comes next. These features are magical, when they work. The challenge is they don’t always work, and that drives a friction-filled user experience. As you expand your product into these areas you’ll want to find the right balance of how much to add and when; being patient rather than guessing too much too soon is a good practice. For better or worse, customers only love features that guess right essentially 100% of the time, and even if you’re wrong just 1% of the time, that 1% feels like a much higher error rate. Since we know we’re going to be learning and iterating here, a best practice is to consider how frictionless you can make incorrect guesses. In other words, how much energy is required to skip a suggestion, undo an action, or otherwise keep the flow going rather than stopping to correct what the software thought was right but wasn’t? Let’s just call this lessons from “bullets and numbering” in Word :-)
Finally, a word of caution on what happens as you expand your customer base when it comes to adding features. Anything you want to do in a product can be “obvious” either from usage data or from customer input. The challenge in product management is to create a core set of principles or beliefs about how you want to move the product forward that allow you to maintain the essential nature of your product while adding new features. The tension between maintaining existing customers via stability or incremental improvements versus keeping pace with where the marketplace is heading is the classic design challenge in technology products.
It shouldn’t be much of a surprise, but a great deal of product bloat comes from adding the obvious feature or directly listening to customers, or by failing to stick with design patterns. Ironically, efforts to enhance products for today’s customers are often the very features that add friction, reduce minimalism, and lead to overall bloat.
Bauhaus to Bloatware
This march from Bauhaus to Bloatware is well-known in our industry. It is part of a cycle that is very difficult to avoid. It is not without irony that your best and most engaged customers are often those pushing you to move faster down this path. Most every product in every segment starts minimal and adds features over time. At each juncture in the evolution of the product there is a tension over whether additions are the right marketplace response or simply bloat.
This march (and tension) continues until some complete rethinking introduces a new minimal product addressing most of the same need but from a different perspective. The cycle then starts again. Operating systems, databases, instruction sets, peripheral connections, laptops, interfaces, word processors, and anything else you can name has gone through this cycle.
This re-evolution or reimagination of a product is key to the long term viability of any technology.
By adhering to a set of design principles you are able to expand the breadth of use cases your product serves while working to avoid simply adding more friction to the core use cases.
After publication three typos were fixed and the example of personalization clarified.
One of the biggest changes for an early-stage and growing company is when hiring transitions from technical/product founders to the first sales or marketing hires. It is an exciting time, of course, but also one that can be very stressful. As stressful as it can be, there are a few patterns and practices one can follow to successfully cross that chasm, or at the very least reduce the risk to that of any technical hire.
It goes without saying that the challenge is rooted in learning how to recognize and evaluate people who possess talents and skills that you do not have and really can’t relate to from experience. Quite a few roles in companies are going to be “close” or adjacent to your own skill set, speaking from the perspective of a technical founder. If you’re an engineer, then QA or product management isn’t far off from what you do on a daily basis. If you tilt toward product management, your interactions with designers are perfectly natural. In fact, for technical founders the spectrum from design to product management to engineering and then QA all feels like your wheelhouse.
Branching out further to sales, marketing, communications, business development, customer service, operations, supply chain, manufacturing, finance, and more can get uncomfortable very quickly. I remember the first time I had to interview a marketing person and realized I didn’t even know what questions to ask. Yet I had worked closely with marketing for many years. Fortunately, I had a candidate pipeline and an interview loop of experienced interviewers to draw from. That’s not always the case with a startup’s first hires.
The following are four challenges worth considering and a step you can take to mitigate the challenges if you find yourself in this spot.
Look only within your network. When sourcing your first potential sales or marketing hire, you might tend to tap into your network the same way you would for an engineering hire. You might have a very broad network, but it might not be a first-person network. For example, with engineering you might know people from the same school program or from projects you worked on or are deeply familiar with. But with sales and marketing you probably lack that common context, and your network might reflect people you came across in work or projects but didn’t work with in the same way you would have with technical hires. You might be worried about taking too much time to source candidates, or concerned that you will burn a lot of time on introductions and people you don’t “know” well. Approach. The first step in a breakthrough hire process is to make sure you cast a wide net and tap into other networks. This process is itself an investment in the future, as you will broaden your network in a new domain.
Define the job by what you know from the outside. “Walk a mile in another’s shoes” is an age-old expression and very fitting for your first sales or marketing hire. Your initial job description for a job you have never done might be taken from another company or based on your view of what the job needs to get done. The challenge is that your view of what needs to get done is informed by an “outsider” view of a job you haven’t done before. Being a sales or marketing person is vastly different from what it looks like from the outside, looking in. If you haven’t done the job, you tend to think of it through the lens of outputs rather than the journey from input to output. Most jobs are icebergs, and the real work is the 90% under water. Until you’ve watched and worked with an enterprise sale end to end, or developed and executed a consumer go-to-market plan, your view of what the job looks like might be a sales presentation or SEO. Getting to those deliverables is a whole different set of motions. Approach. Find a way to have a few “what do you do” conversations with senior people in the job function. Ask them to define what they think the steps would be to get to the outcome you are looking for, rather than discussing the outcome itself. These “what would it take” conversations will help you craft a skills assessment and talent fit.
Hire too senior or too junior. Gauging the seniority of a candidate and matching that to the requirements of the role is often quite tricky early on. In the conversations I’ve had, I tend to see founders head to one extreme or the other. Some founders take the outcome or deliverable they want (a white paper, a quota) and work backwards to find a hire to “execute” on that. Others act on the basis of not knowing, bringing in a senior person to build out the whole system. The reality is that for a new company you are often best off with someone in the middle. Why is that? If you hire too junior, the person will need supervision on a whole range of details you haven’t done before. This gets back to defining the job based on what you know — your solution set will be informed only by the experience you have had. If you hire someone too senior, they will immediately want to hire the next round of management. You will quickly find that one hire translates into three, and you’re scaling faster than you’re growing. I once talked to a company with under ten engineers that hired a very senior marketing leader with domain experience, who subsequently spent $200K on consulting to develop a “marketing plan”. Yikes. Approach. Building on the knowledge gained by casting a wide net and taking the time to learn the full scope of work required, aim for a hire at the level that will “do the work” while also “scaling the team”.
Base feedback on too small a circle. Once you have a robust job description, candidate flow, and ways to evaluate, it is not uncommon to “fall back” on a small circle of people for feedback and evaluation. You might not want to take up too many people’s time, or you might think it is tricky for too many people to evaluate a candidate. At the other end, you might want these first hires to be a consensus-based choice among a group that is collectively still learning these multi-disciplinary ropes. Culture fit is always a huge part of hiring, especially early on, but concern about bringing in a whole new culture (a “sales culture” or “business culture”) also contributes to the desire to keep things contained. Approach. Getting feedback from at least one trusted senior person with experience and success making these specific hires is critical. You can tap into your board, investors, or network, but be sure to lean on those supporting you for validation and verification.
One interesting note is that these challenges and approaches aren’t unique to startups. They follow similar patterns in large companies as well, as you rise from engineering/product to business or general management. While you might think that in a big company the support network insulates you from these challenges, I’ve seen (and experienced personally) all of the above.
The first sales or marketing hires can be pretty stressful for any technologist. Branching out to hire and manage those that rely more than you on the other side of their brain is a big opportunity for growth and development not only for the company but for you personally. It is a great time to step back and seek out support, advice, and counsel.
The “Internet of Things” or IoT is cool. I know this because everyone tells everyone else how cool it is. Ask anyone and they will give you their own definition of what IoT means and why it is cool. That’s proof we are using a buzzword or are in a hype-cycle.
Much is at stake to benefit from, contribute to, or even control this next, next-generation of computing. If a company benefitted from 300 million PCs a year, that’s quite cool. If another company benefitted from 1 billion smartphones a year, then that’s pretty cool.
You know what is really cool? Benefitting from 75 billion devices. That certainly explains the enthusiasm for the catchphrase.
Missing out on this wave is uncool. Just take a look at the CNBC screen shot to the left. That’s what we talked about in the Digital Innovation class at HBS last week and what motivated this post.
In an effort to quantify the opportunity, claim leadership, or just be included amongst those who “get it” we are all collectively missing the fact that we really don’t know how this will play out in any micro sense. It is safe to say everything will be connected to the internet. That’s about it. As Benedict Evans says, counting connected devices is a lot like counting how many electric motors are in your home. In the first days this was cool. Today, that seems silly. Benedict’s excellent post also goes into details asking many good questions about what being connected might mean and here I enhance our in-class discussion.
One way to view the history of “devices” is through two generations in the 20th century. For the first 50 years we had “analog motor” devices that replaced manual mechanical devices. This was the age of convenience brought by motors of all kinds from giant gas motors that produced electricity to tiny DC motors that powered household gadgets and everything in between. People very quickly learned the benefits of using motors to enhance manual effort. Though if you don’t think it was a generational shift, consider the reactions to the first labor saving home appliances (see Disney’s Carousel of Progress).
The next 50 years were about “digital electronics”, which began with the diode, then the transistor, and then the microprocessor. What is amazing about this transition is how many decades passed before the full transformation took place. Early on, electronics replaced analog variants. Often these were viewed as luxuries at best, or inferior “gadgets” at worst. I recall my father debating with a car dealer the merits of “electronic fuel injection”. Many of us reading this certainly recall (or still believe) the debate over the quality of digital music relative to analog LPs and cassettes. Interestingly, the benefits we all experience today of size, weight, power consumption, portability, and more took years to gain acceptance. We used to think about “repairing” a VCR and how awful it was that you could not repair a DVD player. Go figure. The key innovation insight is that the benefits of electronics took decades to play out and were not readily apparent to all at the start.
We find ourselves at the start of a generation of invention where everything is connected. We are at the early stages where we are connecting things that we can connect, just like we added motors to replace the human turning the crank on a knitting loom. Some inventions have the magic of the portable radio—freedom and portability. Some seem as gimmicky as that blender.
There are already a few things we all know and love today that have been transformed by “first generation” connectivity.
For the next few years, thousands of innovators will embark on the idea maze (Chris Dixon summarizes Balaji Srinivasan’s lecture). This is not just about product-market fit, but about much more basic questions. Every generational change in technology introduces a phase of crazy inventing, and that is where we are today with IoT.
This means that for the next couple of years most every product or invention, at first glance, might seem super cool (to some) and crazy to most everyone else. Then after a little use or analysis, more sober minds will prevail. The journey through the idea maze and engineering realities will continue.
This also means that every “thing” introduced will be met with skepticism from the broader, less tech-enthused market (like our diverse classroom). Every introduction will seem more expensive, more complex, and more superfluous than what is currently in use. In fact it is likely that even the ancillary benefits of being connected will be lost on most everyone.
That almost reads like the definition of the innovator’s dilemma. Nothing sums this up more than how people talk about smart “watches”, connected thermostats, or robots. One either immediately sees the utility of strapping a sub-optimal smartphone you have to charge midday to your wrist, or asks why you can’t just look at your phone’s lock screen for the time. One looks at the Nest thermostat and asks why paying 10X for the luxury of having a professional HVAC installer get stumped, or having to “train” something you set and forget, is such a good idea.
We find ourselves in the midst of a generational change in the technology base upon which everything is built. It used to be that owning an “electric” or “electronic” thing sounded modern and cool, well because they were so unique. That’s why adding “connected” or “smart” to a product is going to sound about as silly as saying “transistor radio” or “electronic oven”.
Every thing will be connected. The thing is we, collectively, have neither mastered connecting a thing without some downside (cost, weight, complexity) nor even figured out what we would do when something is connected. What are the equivalents of size, weight, reliability, ease of manufacturing, and more when it comes to connectivity? Today we do the “obvious” such as use the cloud for remote relay, access, storage. We write an app to control something over WiFi rather than build in a physical user interface. We collect and analyze data to inform usage or future products. There is more to come. How will devices be connected to each other? How will third parties improve the usage of things and just make them better? Where do we put the “smarts” in a thing when we have thousands of things? How might we find we are safer, healthier, faster, and even just happier?
We just don’t know yet. What we do know is that a lot of entrepreneurs and innovators across companies are going to try things out and incrementally get us to a new connected world, which in a few years will just be the world.
The Internet of Things is not about the things or even the platform the same way we thought about motors or microprocessors. The big winners in IoT will be thinking about an entirely different future, not just connecting to things we already use today in ways we already use them.
CES 2015 was another amazing show. Walking around the show one can only look with wonder at the amazing technologies being invented and turned into products. Few things are as energizing or re-energizing as systematically walking the booths and soaking it all in. I love CES as a reminder of the amazing opportunity to work in this industry.
Taking a moment to share what I walk away with is always helpful to me—writing is thinking. Every day we have the chance to talk to new companies about products under development and ideas being considered, and CES provides a great cross-industry context about what is going on. This is especially important because of the tendency to look too much to the massive companies that might dominate our collective point of view. My experience has been that spending energy on what is going on at CES unlocks potential opportunities by forcing you to think about problems and solutions from different perspectives.
While this post goes through products, there are many better sources for the full breadth of the show. I try to focus on the broader themes that I walk away with after spending a couple of days letting everything sort of bake for a bit. This year I wanted to touch on these 5 major themes and also include a traditional view of some of the more “fun” observations:
- Batteries, wires, simplicity
- Displays popping up everywhere
- Cameras improving with Moore’s law
- Sensors sensing, but early (and it’s all about the data)
- Connectivity gaining ubiquity
- Fun Products
Ever the product manager (PM) I try to summarize each of these sections with some top-line PM Learning to put the post into action.
Click on images for larger version. All photos by me unless noted.
Batteries, wires, simplicity
PM Learning: Of course optimize your experiences to minimize impact on battery life, but don’t assume your competitors will be doing the same. Think about the iPhone OS and built-in apps navigating that fine line. If you’re making new hardware, assume standard connectors, betting on USB Type-C for charging and HDMI for video.
The best place to start with CES is batteries and wires, because that’s what will follow you around the entire show—everyone walks the show floor in search of outlets or with an auxiliary battery and cable hanging off their phone. Batteries created the portable consumer electronics revolution, but we’re also tethered to them far too often. The good news is that progress is real and continues to be made.
Behind the scenes a great deal of progress is being made on power management in chipsets, even wireless ones. On display at the show were Bluetooth keyboards that can go a year on a single charge and wireless headphones good for days of normal usage.
Progress is also being made on battery technology, making possible smaller, lighter, and faster-charging batteries. While these are not dramatic 2X or 3X improvements, they are real.
The first product I saw was an LG cordless vacuum with 70 minutes of run time and cleaning power that passes the classic bowling-ball suction test. Truly something that makes everything easier.
Batteries are an important part of transportation, and Panasonic is currently the leading manufacturer of large-scale batteries for transport. On display was the GoGoRo urban scooter. This is not just a battery-operated scooter: it can go 95 km/h, is cloud connected with GPS locator maps, and travels 100 km on a pair of batteries. All that would be enough, but the batteries can also be swapped out in seconds and you’re on the go. The company plans to build a network of charge stations to go with a subscription business model. I love this whole concept.
Panasonic also makes batteries for the Tesla so here is a gratuitous picture of the gratuitous Tesla Model X on display.
While all consumer electronics have aimed for simplicity since the first blinking 12:00 on a VCR, simplicity has been elusive due to the myriad of cables, connectors, remotes, and adapters. Normally a CES trip report would include the latest in cable management, high tech cables, or programmable remotes. Well, this year it is fair to say that these whole categories have basically been subsumed in a wave of welcome simplicity.
Cables, to the degree they are needed, have mostly been standardized on HDMI for video and USB for charging and peripherals. With the forthcoming USB Type-C, even USB will be standardized. The Apple connectors are obviously all over, though easily adapted to micro-USB for now (note to makers of third-party batteries: margins are tight, but using an MFI logo and an Apple cable end would be welcome). When you do need cables, they are getting better. It was great to see an awesome fiber-optic cable from Corning that works for USB (and DisplayPort). It makes the cable much thinner and more flexible while increasing the signal travel distance, since it uses active powered ends. An HDMI version is in the works.
While most attention went to smart watches with too many features, Casio’s latest iteration offered a new combination of better battery life and low-power radios. The new watch uses solar charging along with a GPS receiver (plus low-power radio time signals) to set the time based on location. And it is not even huge.
Bringing this theme of no wires and improved batteries to a new extreme, the wireless earbuds from Bragi are aggressive in their feature set, incorporating not just Bluetooth for audio but a microphone for talking and sensors for heart rate (though not likely very reliable) and temperature (not sure of the use as a practical matter). But certainly worth looking at when they are available. Photo by Bragi.
Displays popping up everywhere
PM Learning: Curved is here. Too much energy is going into this. Expect to find new scenarios (like signage) and thus new opportunities. Resolution at 4K and beyond is going to be normal very quickly and with a price premium for a very short time. Pay close attention to web page design on high resolution and high DPI (assets). Many opportunities will exist for new screens that will run one app in a fixed function manner for line of business or in consumer settings—these are replacing static signs or unmanageable PCs. We’re on the verge of broadly deployed augmented reality and totally soft control screen layouts, starting with cars.
More than anything, CES continues to be the show about TV.
Curved screens are getting a lot of attention and a lot of skepticism, some of which is warranted. Putting them in historical context, each generation of screen innovation has been greeted in a similar manner. Whether too expensive, too big, too incremental, or just not useful, the reasons a new screen technology wasn’t going to take off have been plentiful. While curved seems weird to most of us (and frankly even the makers are trying too hard to justify it, as seen in the pseudo-scientific Samsung depictions below), it has compelling utility in a number of scenarios. Skeptics might be underestimating the architectural enthusiasm for the new screens as well.
The most immediate scenario is one that could be called the “Bloomberg desktop” and here you can see it on display. It is very compelling as a single user, a “mission control” station, or as a group monitoring station.
Signage is also incredibly important and the architectural use of curved screens as seen below will become relatively commonplace because of the value in having interactive and programmable displays for advertising and information.
Speaking of signage, for years we’ve seen the gradual migration of printed signs to signage driven by PCs to even one year where all the screens were simply JPEGs being played in those ever-present photo frames. This year saw a number of compelling new signage products that combined new multi-screen layouts with web-based or app-based cloud platforms for creating dynamic layouts, incorporating data, and managing a collection of screens. Below we can see an example of an active menu display and the tool for managing it. Following that is a complex multi-screen 4K layout (narrow bezel) and associated tool.
For home or entertainment, there were dozens of cinematic 21:9 4K curved screens at massive sizes. Maybe this transition will be slower (as the replacement cycle for TVs is slow anyway) due to the need for new thoughts on where to put these. This year at least was showing some wall mounting options.
Curved screens are also making their way into small devices. Last year saw the LG Flex, and an update was available this year. Samsung introduced a Galaxy Note Edge with a single curved edge. They went to great lengths in the software to use this as an additional notification band. I’m a bit skeptical of this as it was difficult to use without thinking hard about where to put your hand (at least in a minute of booth use).
I don’t want to gloss over 4K, but suffice it to say every screen was 4K or higher. I saw a lot of skeptical coverage about not being able to see the difference or “how far away are you”. Let’s all just move on. The pixels are here and pretty soon it will just be as difficult to buy an HD display as it is to buy 512MB SIMMs or 80GB HDDs. That’s just how manufacturing at scale works. These screens will soon be cheaper than the ones they are replacing. Moore’s law applies to pixels too. For the skeptics, this exhibit showed how resolution works.
Screens are everywhere and that’s the key learning this year. There were some awesome augmented-reality displays that have been talked about for a long time but are quickly becoming practical and cost-effective. Below is a Panasonic setup that can be used to preview cosmetics either in store or in salon. It was really amazing to see.
Continuing with augmented or heads-up displays, this was an amazing dashboard in a concept car from Toyota that showed off a full dash of soft controls and integrated augmented screens.
At a practical level, Sharp and Toshiba were both showing off ready-made dashboard screens that will make it into new cars as OEM components or as aftermarket parts.
Cameras improving with Moore’s law
PM Learning: Cameras continue to gain more resolution but this year also showed a much clearer focus (ha) on improving photos as they are captured or improving video by making it smarter. Cameras are not just for image capture but also becoming sensors in their own right and integrated into sensing applications, though this is just starting. My favorite advance continues to be the march towards more high dynamic range as a default capture mode.
Digital cameras made their debut in the early 1990s with 1MP still images that were broadly mocked by show attendees and reviewers. Few predicted how Moore’s law would rapidly improve image quality while flash memory would become cost-effective for these large CCDs, and then mobile phones would make these sensors ubiquitous. Just amazing to think about.
High Dynamic Range started off as a DSLR trick, then something you could turn on, and is now an Auto feature on most phones. In standalone cameras it is still a bit of a trick. There are complexities in capturing moving images with HDR that can be overcome. Some find the look of HDR images “artificial” but in reality they are closer to the human eye’s range; this feels a bit like the debate over the first music CDs versus vinyl. Since the human eye has anywhere from 2 to 5 times the range of today’s sensors, it only makes sense to see this integrated more and more into the core capture scenario. Below is a Panasonic 4K professional video camera with HDR built in.
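The core idea behind multi-shot HDR can be sketched in a few lines: combine bracketed exposures, weighting each pixel toward whichever frame captured it best (closest to mid-gray). This is a simplified, illustrative fusion, not any camera maker’s actual pipeline; the function name and sample values are hypothetical.

```python
# Minimal exposure-fusion sketch: values are normalized luminances in 0..1.
def fuse_exposures(under, over):
    """Blend an underexposed and an overexposed frame pixel by pixel,
    weighting by "well-exposedness" (distance from mid-gray)."""
    fused = []
    for u, o in zip(under, over):
        wu = 1.0 - abs(u - 0.5)  # weight for the dark frame's pixel
        wo = 1.0 - abs(o - 0.5)  # weight for the bright frame's pixel
        fused.append((wu * u + wo * o) / (wu + wo))
    return fused

dark  = [0.05, 0.10, 0.45]  # underexposed frame: highlights preserved
light = [0.40, 0.60, 0.98]  # overexposed frame: shadows preserved
print(fuse_exposures(dark, light))
```

Each fused pixel lands between the two source values, pulled toward the better-exposed one—which is why HDR output can look closer to what the eye sees than either single capture.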
Facility security is a key driver of camera technology because of the need for wide views, low light, and varying outdoor conditions. A company that specializes in time-lapse imaging (for example construction sites) introduced a time-lapsed HDR camera.
Low light usually means switching to infrared cameras in security settings. For many, the loss of color has always been odd. Toshiba was showing off the first 720p infrared camera that generates a color image even in 0 lux. This is done using software to map to a colorized palette. You can see a traditional infrared image and the color version side by side in a cool interactive booth.
In thinking about cameras as ubiquitous, this very clever camera+LED bulb combination really struck me. Not only is it a standard PAR LED bulb, but it adds a Wi-Fi web camera. Lots of potential uses for this.
DSLRs still rule for professional use and their capabilities are still incredible (and should be, for what you carry around). Nikon surprised even their own folks in the booth by announcing their first Phase Fresnel lens, a 300mm f/4. Canon has a 400mm lens (their “DO” designation). These lenses deliver remarkable image quality with an immense reduction in size and weight. Seen below is the classic 300mm f/4 and the new “PF” version. Add to cart :-)
Finally, Nikon repeated their display of 360-degree stop-action Matrix-like photography. It is really an amazing demo, with dozens of cameras snapping a single image to provide a full walk-around. Just love the technology.
Sensors sensing, but early (and it is all about data!)
PM Learning: We are just starting on sensors. While many sensors are remarkably useful today, the products are still first generation and I believe we are in for an exponential level of improvement. For these reasons, I continue to believe that the wearable sensors out there today are interesting for narrow use cases but still at the early part of the adoption curve. Innovation will continue but for the time being it is important to watch (or drive the exponential) changes. Three main factors will contribute to this:
- Today’s sensors are usually taking one measurement (and often that is a proxy for what you want). These are then made into a single purpose product. The future will be more direct measurements as sensors get better and better. There’s much to be invented, for example, for heart rate, blood sugar, blood pressure, and so on.
- Sensors are rapidly improving in silos but will just as rapidly begin to be incorporated into aggregate units to save space, battery life, and more. There are obvious physical challenges to overcome (not every sensor can be in the same place or in contact with the same part of a body or device).
- Data is really the most important element and key differentiator of a sensor. It is not the absolute measurement but the way the measurement is put in context. The best way to think of this is that GPS was very useful but even more useful when combined with maps and even more useful when those maps add local data such as traffic or information on a destination.
Many are still working to bring gesture recognition to different scenarios. There remains some skepticism, perhaps rooted in the gamer world, but for many cases it can work extremely well. These capabilities can be built into cameras or depending on the amount of recognition into graphics chipsets. I saw two new and neat uses of gesture recognition. First, this LG phone was using a gesture to signal the start of a self-timer for taking selfies (just hold out your hand, recognize, squeeze, then timer starts). This was no selfie-stick (which I now carry around all the time due to the a16z selfie-stick investments) but interesting.
This next demonstration was showing gestures used in front of an automobile screen. There were a lot of potential gestures shown in this proof of concept but still there are interesting possibilities.
The incorporation of image recognition into the camera turns a camera into a sensor to be used for a variety of uses. This was a camera that ended up looking like the TV show Person of Interest.
There were quite a few products demonstrating eye tracking. This is not a new technology but it has become very cheap very quickly. What used to take very specialized cameras can now be done with off-the-shelf parts and some processing. What is missing are use cases beyond software usability labs and medical diagnostics :-)
This take on eye tracking called the Jins Meme integrated eye tracking and other sensors into hipster glasses. Again the scenarios aren’t quite there yet but it is very interesting. They even package this up in multi-packs for schools and research.
There were many products attempting to sense things in the home. I feel most of these will need to find better integration with other scenarios rather than being point solutions but they are all very interesting to study and will still find initial use cases. This is how innovation happens.
One of the more elaborate sensors is called Mother. It packages up a number of sensors that connect wirelessly to a base station. There are temperature and motion sensors among them. You just place these little chips near where you want to know something. Then there is a nice app that translates sensing events into notifications.
There were even sensors for shoes and socks. If you’ve ever had foot issues you know the need to attempt to replicate your pain while being monitored by a high-speed camera or even fluoroscope/x-ray. These sensors, such as this one in a sock, have immediately interesting medical use under physician supervision. Like many of the sensors, I feel this is a best practice use case and don’t think the home-use case is quite right yet because of the lack of accessible scientific data.
The Lillypad floats around in your pool and takes measurements of the water and wirelessly sends them to an app. It also measures UV light as a clever bonus.
Speaking of pools, this was such a clever sensor. It is a Bluetooth radio that you pair with your phone. You get kids to wear this around a pool. When the kid is submerged it will notify you. You can get notified immediately or after a set time (I learned the national standard for under water distress is 25 seconds). The big trick—there’s no technology here; just that Bluetooth doesn’t travel under water. Awesome!
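The trick described above reduces to a tiny piece of logic: since Bluetooth doesn’t propagate under water, a dropped radio link becomes a proxy for submersion, and you alert once the link has been down longer than a threshold. A hedged sketch, with hypothetical names (the 25-second figure is the underwater-distress standard cited at the booth):

```python
DISTRESS_SECONDS = 25  # underwater-distress standard cited at the booth

def should_alert(seconds_since_last_beacon, threshold=DISTRESS_SECONDS):
    """True when the wearer has been out of Bluetooth contact (a proxy
    for being submerged) longer than the configured threshold."""
    return seconds_since_last_beacon > threshold

print(should_alert(10))  # brief dunk: no alert
print(should_alert(30))  # too long under water: alert
```

The elegance is that the sensor itself has no smarts; the phone just watches the link-loss timer.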
In a previous post, the notion of ingredients versus products at CES was discussed. To emphasize what this means in practice, the montage below is from a vendor that literally packaged up every point-sensor into a “product”. This allows for a suite of products, which is great in a catalog but awfully complex for a consumer. There were a dozen manufacturers displaying a similar range of single-sensor products. I don’t know if this is sustainable.
Connectivity gaining ubiquity
PM Learning: Duh, everything will be connected. But unlike previous years, this is now in full execution mode. The biggest challenge is what “things” get connected to what things or networks. When do you put smarts somewhere? Where does data go? What data is used?
Everything is going to be connected. This has been talked about for a long time, but it is really here now. The cost of connectivity is so low and, at least in the developed world, assuming either Wi-Fi or WWAN (via add-on plans) is rational and economical. This will introduce a lot of complexity for hardware makers who traditionally have not thought about software. It will make room for new players that can rethink scenarios and where to put the value. Some devices will improve quickly. Others will struggle to find a purpose for connecting. We’ve seen the benefits of remote thermostats and monitoring cameras. On the other hand, remote-controlled clothes washers (that can’t load the clothes from the basket or get the clothes into the dryer) might still be searching. I would add that this dual-load washer from LG is very clever.
Many products were demonstrating “Works with Nest”. This is a nice API and it is attracting a lot of attention since, like any platform, it saves the device makers from doing a lot of heavy lifting in terms of software. While many of the demonstrations were interesting, there can still be a little bit of a gimmick aspect to it (washing machines). This alarm clock was interesting to me. While many of us just use phones now (which can control Nest), this clock uses voice recognition for alarm functions. When connected to a Nest it can also be used to change the temperature or to alter the home/away settings of the thermostat.
Cannon Security is a relatively new safe company (most are very old) and I loved this “connected” safe. It isn’t connected the way I thought (an app to open it or alert you of a break-in). Instead it is a safe that also has a network cable and two USB ports. So one use might be to store a network-connected drive in the safe and use it for backup. You could also keep something in the safe charging via USB. Pretty cool. The jack pack is in the lower right of the image.
My favorite product of the whole show, saving the best for last, is not yet released. But talk about a magic collection of connectivity and data…wow. These founders set out to solve the problem of getting packages delivered to your house. Most communities prevent you from getting a delivery box out front and in many places you can’t have something left on your doorstep and expect it to remain. This product, called “Track PIN” solves the problem. Here’s what it does:
- Insert a small module inline in the three wires that control your garage door.
- Add a battery operated PIN box to the front of your garage somewhere.
- When you receive a package tracking number email just forward it to trackpin.com (sort of like the way TripIt works).
- THEN, when the delivery person shows up (UPS, FedEx, USPS, and more) they will automatically know in their handheld what code to punch. Upon punching the code your garage door opens a short amount to slide the package in. No signature required. The PIN is invalidated. The driver is happy. You are happy. Amazon is happy. And the cloud did all the work.
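The steps above amount to a one-time-PIN flow. Here is a hedged sketch of that logic; the class and method names are hypothetical, and the real TrackPIN service coordinates with carrier handhelds and the cloud rather than running locally.

```python
import secrets

class PinService:
    """Illustrative one-time delivery PIN flow: issue a PIN per tracking
    number, accept it exactly once, then invalidate it."""

    def __init__(self):
        self.active = {}  # tracking number -> outstanding one-time PIN

    def issue_pin(self, tracking_number):
        """Forwarding a tracking email triggers issuing a one-time PIN."""
        pin = f"{secrets.randbelow(10000):04d}"
        self.active[tracking_number] = pin
        return pin

    def redeem_pin(self, tracking_number, entered):
        """Driver punches the PIN; on a match the door opens a short
        amount and the PIN is immediately invalidated."""
        if self.active.get(tracking_number) == entered:
            del self.active[tracking_number]  # single use only
            return True
        return False
```

The key property is in `redeem_pin`: a correct PIN works once and never again, which is what lets the homeowner trust an automated garage-door opening.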
I know it sounds somewhat mundane, but these folks really seem to have developed a cool solution. It beats bothering the neighbors.
Every CES has a few fun products that you just want to call attention to without snark or anything, just because we all know product development is not a science and one has to try a lot of things to get to the right product.
Power Pole. This is my contribution to selfies. This one even has its own power source.
Emergency jump starter/laptop charger/power source. This was a perfectly fine product. The fun part was seeing the exact same product with different logos in 5 different booths. Amazing placement by the ODM.
USB Charger. This is the best non-commercial USB charger I’ve seen. It even includes an out-of-spec “high voltage” port.
Fake TV. This is a home security system that flashes multi-colored LED lights that trick a burglar into thinking you are home watching TV. Best part about it was that when I took the picture the person staffing the booth said “Don’t worry the Wi-Fi Drone version is coming in late 2015”. Gotta love that!!
Surface influence. And finally, I’ve been known to be a fan of Microsoft Surface but I guess I was not alone. The Typo keyboard attempts to bring a Microsoft TypeCover to the iPad and the Remix Ultra-Tablet is a rather uncanny resemblance to Surface 2 running an Android skin (developed by several former Google employees).
Phew. That’s CES 2015 in a nutshell.
CES is an incredibly exciting and energizing show to attend. Sometimes, if you track some of the real-time coverage you might get a sense of disappointment at the lack of breakthrough products or the seemingly endless repetition from many companies making the same thing. There’s a good reason for all this repetition and it is how CES represents our healthy industry working well.
CES is best viewed not as a display of new products to run out and buy but as a display of ingredients for future products. It is great to go to CES and see the latest TVs, displays, or in-car systems. By and large there is little news in these in-market products and categories. It is also great to see the forward-looking vision presentations from the big companies. Similarly, these are good directionally but often don’t represent what you can act on reliably.
Taking an ingredients view, one (along with 140,000 others) can look across the more than 2 million square feet of 3,600 exhibitors for where things are heading (CES is one of the top trade shows globally, with CeBIT, Photokina, and Computex all vying for top ranking depending on how you count).
If you take a product view, CES can get repetitive or boring rather quickly. I probably saw a dozen selfie-sticks. After a while, every curved 4K TV looks the same. And certainly, there’s a limit to how many IP cameras the market can support. After a few decades you learn to quickly spot the me-too and not dwell on the repetition.
It is worth a brief description of why CES is filled with so many me-too (and often poorly executed) products.
Consider the trio of partners it takes to bring a product to market:
- Component suppliers. These are the companies that make a specific sensor, memory, screen, chipset, CCD, radio, etc.
- Manufacturers. These are the companies that pull together all the components and packaging needed to make a product. These are OEMs or ODMs in the consumer electronics industry.
- Brands and Channels. These are the consumer-visible manifestation of products and can be the chain of retailers or a retail brand.
At any one time, a new component, an innovation or invention, is close to ready to be in the market. An example might be a new heart rate sensor. In order to get the cost of the component low enough for consumer products, the component supplier searches out for a manufacturer to make a device.
While every supplier dreams of landing a major company making millions of units as a first customer, that never happens. Instead, there’s a whole industry of companies that will take a component and build what you might think of as a product with a 1:1 mapping to that new component. So a low-cost CCD gets turned into a webcam with simple Wi-Fi integration (and often some commodity-level software). The companies that make these are constantly looking to make new products and will gladly make a limited production run and sell at a relatively low margin for a short time. These initial orders help the component makers scale up manufacturing and improve the component through iteration.
At the same time there are retailers and brand names that are always looking to leverage their brand with additional products. These brand names often take the complete product from the manufacturer with some limited amount of branding and customization. This is why you can often see almost identical products with different names. Many know that a few vendors make most LED displays, yet the number of TV brands is quite high. There’s a small amount of customization that takes place in this step. These companies also work off relatively low margins and expect to invest in a limited way. For new categories, while the component companies get to scale out parts, the brands and channels get a sense of the next big thing with limited investments.
So while CES might have a ton of non-differentiated “products”, what you are really seeing is the supply chain at work. In fact it is working extremely well as this whole process is getting more and more optimized. The component manufacturers are now making proof of concepts that almost encroach onto the manufacturers and some brands are going straight to component makers. For the tech enthusiast these might be undifferentiated or even poor products, but for many they serve the purpose at least in the short-term.
Today, some things we take for granted that at one time seemed to swarm the CES show floor with dozens of low quality builds and me-too products include: cameras, flash memory, media playback devices, webcams, Wi-Fi routers, hard drive cages, even tablets and PCs. I recall one CES where I literally thought the entire industry had shifted to making USB memory sticks as there must have been 100 booths showing the latest in 128MB sticks. Walking away, the only thing I could conclude was just how cheap and available these were going to be soon. Without the massive wave of consumer me-too digital cameras that once ruled the show floor, we would not have today’s GoPro and Dropcam.
An astute observer can pick out the me-too products and get a sense for what ingredients will be available and where they are on the price / maturity curve. One can also gauge the suppliers who are doing the most innovative integrations and manufacturing.
Sometimes the whole industry gets it wrong. The most recent example of this would be 3D TV, which just doesn’t seem to be catching on.
Other times the whole industry gets excited about something, and then others take that direction and pivot it into much more interesting and innovative products. An example of this would be the run of “media boxes” that attach to your TV, which went from playing content stored on your home network and local hard drives to stateless, streaming players like Google Chromecast, Amazon Fire, and Apple TV. Without those first media boxes, it isn’t clear we would have seen the next generation, which took that technology and rethought it in the context of the internet and cloud.
Finally, the reality is that most manufacturers tend to take a new component and build a purpose-built device around it. So they might take a camera sensor, add a camera body, and just make a point-and-shoot. They might take new flash storage and turn it into portable storage. They might take a new display and just make a complete monitor. Rarely will the first generation of devices attempt to do multiple things or take a multi-year approach to integrated product development—not on those margins and timelines.
Some technologies this year that reflect first generation products and are likely to be brought to scale or further integrated with other components include: curved displays, high resolution/high DPI displays, human and environmental sensors, and HDR imaging. Sensors will be the most interesting as they will clearly be drawn into the SoC and/or integrated with each other. Obviously, everyone can expect Wi-Fi and broadband connectivity to continue to get smaller and easier and of course CPUs will continue to shrink, draw less power, and get faster.
So when you read the stories about CES saying there are too many junky products or so many of the exact same thing, don’t think of that as a negative. Instead, think about how that might be the next low-price, high-scale ingredient that will be integrated into your product or another product.
I have spent a lot of time trying to manage work so it is successful outside of a single location. I’ve had mixed results and have found only three patterns which are described below. Before that, two quick points.
First, this topic has come up this time related to the Paul Graham post on the other 95% of developers and then Matt Mullenweg’s thoughtful critique of that (also discussed on Hacker News). I think the idea of remote work is related to, but not central to, immigration reform and any position one might have on that. In fact, 15 years ago, when immigration reform was all but hopeless, many companies (including where I worked) spent countless dollars and hours trying to “offshore” work to India and China, with decidedly poor results. I even went and lived in China for a while to see how to make this work. The patterns/lessons below subsume this past experience.
Second, I would just say this is business, and business is a social science, so there are no rules or laws of nature. Anything that works in one situation might fail to work in another. Something that failed to work for you might be the perfect solution elsewhere. That said, it is always worth sharing experiences in the hopes of pattern matching.
The first pattern is good to know, just not scalable or readily reproducible. It is when you have a co-located, functioning team and members need to move away for some reason; remote work can then continue pretty much as it has before. This assumes that the nature of the work, the code, and the project all continue on a pretty similar path. Any major disruption—such as more scale, a change in tools, a change in product architecture, a change in what is sold, etc.—and things quickly gravitate to the less functional “norm”. The reality in this case is that these success stories are often individuals and small teams that come to the project with a fixed notion of how to work.
The second pattern that works is when a project is based on externally defined architectural boundaries. In this case little knowledge is required that spans the seam between components. What I mean by externally defined is that the API between the major pieces, separated by geography, is immutable and not defined by the team. It is critical that the API not be under the control of the team, because if it is then this case is really the next pattern. An example of this might be a team that is responsible for implementing industry-standard components that plug in via industry-standard APIs. It might be the team that delivers a large code base from an open source project that is included in the company’s product. This works fine. The general challenge is that this remote work is often not particularly rewarding over time. Historically, for me, this is what ended up being delivered via remote “outsourced” efforts.
The third pattern that works is when those working remotely have projects with essentially no short-term or long-term connection to each other. This is pretty counter-intuitive. It is also why startups are often the first places to find remote work challenging, simply because most startups only work on things that are connected. So it is no surprise that, for the most part, startups tend to want to work together in one location.
In larger companies it is not uncommon for totally unrelated projects to be in different locations. They might as well be at separate companies.
The challenge there is that there are often corporate strategies that become critical to a broad set of products. So very quickly things turn into a need for collaboration. Since most large, existing products tend to naturally resist corporate mandates, the need for high-bandwidth collaboration increases. In fact, unlike a voluntary pull from a repository, a corporate strategy is almost always much harder and much more of a negotiation through a design process than it is code reuse. That further requires very high bandwidth.
It is also not uncommon for what was once a single product to get rolled into an existing product. So while something might be separate for a while, it later becomes part of some larger whole. This is very common in big companies because what is a “product” often gets defined not by code base or architecture but by what is being sold. A great example for me is how PowerPoint was once a totally separate product until one day it was really only part of a suite of products, Office. From that decision forward we had a “remote” team for a major leg of our product (and one born out of an acquisition at that).
That leaves trying to figure out how a single product can be split across multiple geographies. The funny thing is that you can see this challenge even in medium-sized, single-product companies when the building space occupied spans floors. Amazingly enough, even a single staircase or elevator ride has the equivalent impact of a freeway commute. So the idea of working across geographies is far more common than people think.
Overall, the big challenge in geography is communication. There just can’t be enough of it at the right bandwidth at the right time. I love all the tools we have; they work miracles. But as many comments from personal experience on the HN thread have noted, they don’t quite replace what is needed. This post isn’t about that debate—I’m optimistic that these tools will continue to improve dramatically. One shouldn’t underestimate the impact of time zones either. Even just coast to coast in the US can dramatically alter things.
The core challenge with remote work is not how it is defined right here and now. In fact that is often very easy. It usually only takes a single in person meeting to define how things should be split up. Then the collaboration tools can help to nurture the work and project. It is often the case that this work is very successful for the initial run of the project. The challenge is not the short term, but what happens next.
This makes geography a bit more of a big company thing (where often there are resources to work on multiple products or to fund multiple locations for work). The startup or single product small company has elements of each of these of course.
It is worth considering typical ways of dividing up the work:
- Alignment by date. The most brute force way of dividing work is that each set of remote people work on different schedules. We all know that once people have different delivery dates it becomes highly likely that the need (or ability) to coordinate on a routine basis is reduced. This type of work can go on until there are surprises or there is a challenge in delivering something that turns out to be connected or the same and should have been on the same schedule to begin with.
- Alignment by API. One of the most common places that remote work can be divided is to say that locations communicate by APIs. This works up until the API either isn’t right or needs to be reworked. The challenge here is that as a product you’re betting that your API design is robust enough that groups can remotely work at their own pace or velocity. The core question is why would you want to constrain yourself in this way? The second question is how to balance resources on each side of the API. If one side is stretched for resources and the other side isn’t (or both sides are) then geography prevents you from load balancing. Once you start having people in one geography on each side of the API you end up breaking your own remote work algorithm and you need to figure out the way to get the equivalent of in-person communication.
- Alignment by architecture. While closely related to API, there is also a case where remote work is layered in the same way the architecture is. Again, this works well at the start of a project. Over time this tends to decay. As we all know, as projects progress the architecture will change and be refactored or just redone (especially at both early stages and later in life). If the geography is then wrong, figuring out how to properly architect the code while also overlaying geography, and thus skillsets and code knowledge, becomes extremely difficult. A very common approach to geography and architecture is to have the app in one geo and the service in another. This just forces a lot of dialog at the app/service seam, which I think most people agree is also where much of the innovation and customer experience resides (as well as performance efforts).
- Alignment by code. Another way to align is at the lowest level which is basically at the code or module level (or language or tool). Basically geography defines who owns what code based on the modules that a given location creates or maintains. This has a great deal of appeal to programmers. It also is the approach that requires the highest bandwidth communication since modules communicate across non-public APIs and often are not architectural boundaries (the first cases). This again can work in the short term but probably collapses the most in short order. You can often see first signs of this failing when given files become exceedingly large or code is obviously in the wrong place, simply because of module ownership.
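To make the “alignment by API” case above concrete, here is a minimal sketch of what such a seam looks like in code. Everything in it is hypothetical (the `StorageAPI` contract and both teams are invented for illustration): one geography owns the implementation behind the interface, the other codes only against the interface. The trouble described above begins the day the contract itself has to change, because that change cannot be made by either side alone.

```python
from abc import ABC, abstractmethod

class StorageAPI(ABC):
    """The seam between two geographies: a fixed, shared contract.
    Team A (geo 1) implements it; Team B (geo 2) only consumes it."""

    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

# Team A (geo 1): the implementation behind the seam.
class InMemoryStorage(StorageAPI):
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}

    def put(self, key: str, value: bytes) -> None:
        self._data[key] = value

    def get(self, key: str) -> bytes:
        return self._data[key]

# Team B (geo 2): codes only against StorageAPI, never the concrete class.
def cache_greeting(store: StorageAPI) -> bytes:
    store.put("greeting", b"hello")
    return store.get("greeting")
```

As long as the contract holds, each side can ship at its own pace. But adding, say, a `delete` method to `StorageAPI` is exactly the cross-geography negotiation the bullet above warns about.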
If I had to sum up all of these in one challenge, it is that however you find you can divide the work across geography at a point in time, it simply isn’t sustainable. The very models you use to keep work geographically efficient are globally sub-optimal for the evolution of your code. It is a constraint that creates unnecessary tradeoffs.
On big projects over time, what you really want is to create centers of excellence in a technology, where those centers are also geographies. This always sounds very appealing (IBM created this notion in their Labs). As we all know, however, the definition of which technologies are used where is always changing. A great example: consider how your 2015 projects would work if you could tap into a center of excellence in machine learning, only to quickly realize that machine learning is going to be the core of your new product. Do you disband the machine learning team? Does the machine learning team now work on every new product in the company? Does the company just move all new products to the machine learning team? How do you geo-scale that sort of effort? That’s why the time element is tricky. Ultimately a center of excellence is how you can brand a location and keep people broadly aware of the work going on. It is easier said than done though. The IME at Microsoft was such a project.
Many say that agility can address this: you simply rethink the boundaries and ownership at points in time. The challenge is that in a constant shipping mode you don’t have that luxury. Engineers are not fully fungible, and careers and the human desire for ownership and a sense of completion certainly are not either. It is easy to imagine, and hard to implement, agility of work ownership over time.
This has been a post on what is hard about remote work, at least based on my experience. Of course, if you have no option (for whatever reason), then this post can help you look at what can be done over time to address the challenges that will arise.