Posts Tagged ‘disruption’
With the latest pivot for BlackBerry, much has been said about disruption and what it can do to companies. The story, Inside the fall of BlackBerry: How the smartphone inventor failed to adapt, by Sean Silcoff, Jacquie McNish and Steve Ladurantaye in The Globe and Mail is a wonderful account.
Disruption has a couple of characteristics that make it fun to talk about. While it is happening, even with a chorus of people claiming it is happening, it is actually very difficult to see. After it has happened, the chorus of “told you so” grows even louder and more matter-of-fact, and everyone has a view of what could have been done to “prevent” disruption. Finally, descriptions of disruption tend to lose all of the details leading up to the failure, as things get characterized at the broad company level or reduced to a single attribute (keyboard v. touch) when the situation is far more complex. Those nuances are what product folks deal with day to day and where all the learning can be found.
Like many challenges in business, there’s no easy solution and no pattern to follow. The decision moments, technology changes, and business realities are all happening to people who have the same skills and backgrounds as the chorus, but who face the real-world constraints of actually doing something about them.
The case of BlackBerry is interesting because the breadth of disruptive forces is so great. It is not likely that a case like this will be seen again for a while: a case where a company built such an incredible position of strength in technology and business over a relatively short time, and then saw it essentially erased just as quickly.
I loved my BlackBerry. The first time I used one was before they were released (because there was integration with Outlook, I was lucky enough to be using one some time in 1998; I even read the entire DOJ filing against Microsoft on one while stopped on the tarmac at JFK). Using the original 850 was a moment when you immediately felt propelled into the future. Using one felt like the first time I saw a graphical interface (the Alto) or a GPS. Upon using one you just knew our technology lives would be different.
What went wrong is almost exactly the opposite of what went right, and that is what makes this such an interesting story and an unbelievably difficult challenge for those involved. Even today I look at what went on and think of how galactic the challenges were for that amazing group of people who transported us all to the future with one product.
When you build a product you make a lot of assumptions about the state of the art of technology, the best business practices, and potential customer usage/behavior. Any new product that is even a little bit revolutionary makes these choices at an instinctual level. No matter what news stories you read about research or surveys, I think we all know that a certain gut feeling comes into play.
This is especially the case for products that change our collective world view.
Whether made deliberately or not, these assumptions play a crucial role in how a product evolves over time. I’ve never seen a new product developed where the folks wrote down a long list of assumptions. I wouldn’t even know where to start; so many of them are not thought through at all and simply represent an engineer’s or product manager’s “state of the art”, “best practice”, or “this is what I know”.
It turns out these assumptions, implicit or explicit, become your competitive advantage and allow you to take the market by storm.
But then along come technology advances, business model changes, or new customer behaviors and seemingly overnight your assumptions are invalidated.
In a relatively simple product (note, no product is simple to the folks making it) these assumptions might all be within one domain. Christensen famously studied the early days of the disk drive industry. To many of us those assumptions are all contained within one system or component, and it is hard to see how disruption could take hold. Fast forward and we just assume solid-state storage, yet even this transition, as obvious as it is to us, requires a whole new world view for people who engineer spinning disks.
In a complex product like the entirety of the BlackBerry experience, there are assumptions that cross hardware, software, communications networks, channel relationships, business models, and more. When you bring all these together into a single picture, one realizes the enormity of what was accomplished.
It is instructive to consider the many assumptions or ingredients of BlackBerry success that go beyond the popular “keyboard v. touch”. Thinking about my own experience with the product, the following list covers just a few things that were essentially revisited by the iPhone, from the perspective of the BlackBerry device/team:
- Keyboard to touch. This change is the most visible difference and the most easily debated. From crackberry thumbs to contests over who could type faster, your keyboard was clearly a major innovation. The move to touch would challenge you in technology, behavior, and more.
- Small (b&w) screens to large color. Closely connected with the shift to touch was a change in perspective that consuming information on a bigger screen would trump the use of the real estate for (arguably) more efficient input. Your whole notion of industrial design, supply chain, OS, and more would be challenged. As an aside, the power consumption of large screens immediately seemed like a non-starter to a team insanely focused on battery life.
- GPRS to 3G then LTE. Your heritage in radios, starting with the pager network, placed a premium on using the lowest power/bandwidth radio and focusing on efficiency therein. The iPhone, while 2G early on, quickly turned around a game-changing 3G device. You almost had to be dragged into using the newer, higher-powered radios because your focus had been to treat radio usage as a premium resource.
- Minimize bandwidth to assume bandwidth is free. Your focus on reducing bytes over the wire was met with a device that just assumed bytes would be “free” or at least easily purchased. Many of the early comments on the iPhone focused on this, but few anticipated the way the communications companies would respond to an appetite for bandwidth. Imagine thinking how sloppy the iPhone was with bandwidth usage and how fast the battery would drain. Assuming a specific resource is high cost is often a path to disruption when someone makes a different assumption.
- No general web support v. general web support. Despite demand, the BlackBerry avoided offering generalized web browsing support. The partnership with carriers also precluded this, given their concern about network responsiveness and capacity. Again, few would have assumed a network buildout that would support mobile browsing the way it does today. The disruptor had the advantage of growing (relatively) slowly compared to flipping a switch on a giant installed base.
- WiFi as “present” to nearly ubiquitous. The physics of WiFi coverage (along with power consumption, chip surface area and more) suggested WiFi would be expensive and hard to find. Even with whole-city WiFi projects in the early 2000s, people didn’t see WiFi as a big part of the solution. Few thought about the presence of WiFi at home and the new usage scenarios it enabled, or that every urban setting, hotel, airport, and more would have WiFi. Even the carriers built out WiFi to offload traffic and included it for free in their plans. The elegant and seamless integration of WiFi on the iPhone became a quick advantage.
- Device update/management by tethering to over the air. BlackBerry required tethering for some routine operations, and for many the only way to integrate corporate mail was to keep a PC running all the time. The PC was an integral part of the BlackBerry experience for many. While the iPhone was tethered for music and videos, the presence of WiFi and the march toward PC-free experiences was an early assumption in the architecture that just took time to play out.
- Business to consumer. Your BlackBerry was clearly a business device. Through much of the period of high success, consumers flocked to devices like the Sidekick. While there was some consumer success, you anchored in business scenarios, from Exchange and Notes integration to network security. The iPhone comes along and out of the gate is aimed at consumers, with a camera, MMS, and more. This disruption hits at the hardware, the software, the service integration, and even how the device is sold at carriers.
- Data center based service to broad set of cloud-based services. Your connection to the enterprise was anchored in a server that businesses operated. This was a significant business upside as well as a key part of the value proposition for business. This server became a source of valuable business information propagated to the BlackBerry (rather than using the web). The absence of an equivalent iPhone server seemed like a huge opportunity, yet in fact it turned into an asset in spreading the device: the iPhone simply relied on the web (and subsequently apps) to deliver services rather than programmed and curated ones.
- Deep channel partnership/revenue sharing to somewhat tense relationship. By most accounts, your BlackBerry business was an incredible win-win with telcos around the world. Story after story talked of the amazing partnerships between carriers and BlackBerry. At the same time, the stories (and blame games) between Apple and AT&T in the US became somewhat legendary. Yet even with this tension, the iPhone was bringing very valuable customers to AT&T and unseating BlackBerry customers.
- Ubiquitous channel presence to exclusives. Your global partnership strength was unmatched, and yet it was disrupted. The iPhone launched with single carriers in limited markets, on purpose. Many viewed that as a liability, including BlackBerry. Yet in hindsight this only increased the value to the selected partners and created demand from other potential partners (even with the tension).
- Revenue sharing to data plan. One of the main assets, mostly invisible to consumers, was the revenue paid to BlackBerry for each device on the network, owing to BlackBerry running a secure email service as a major anchor of the offering. Most thought no one was going to give up this revenue, including the carriers’ ability to up-charge for your BlackBerry. Few saw the transition coming to a heavily subsidized business model with high-priced data plans purchased by consumers.
These are just a few, and any one of them is probably debatable. The point is really the breadth of changes the iPhone introduced relative to the BlackBerry offering and roadmap. Some of these are assumptions about technology, some about the business model, some about the ecosystem, and some even about physics!
Imagine you’ve just changed the world and everything you did to change the world–your entire world view–has been upended by a new product. Now imagine that the new product is not universally applauded, and many folks say not only that your product is better and more useful, but that the new product is simply inferior.
Put yourself in those shoes…
Disruption happens when a new product comes along and changes the underlying assumptions of the incumbent, as we all know.
Incumbent products and businesses often respond by downplaying the impact of a particular feature or offering. And more often than folks might notice, disruption doesn’t happen so easily. In practice, established businesses and products can withstand a few perturbations to their offering. Products can be rearchitected. Prices can be changed. Features can be added.
What happens, though, when nearly every assumption is challenged? What you see is a complete redefinition of your entire company. This is both hard to see happening in real time and even harder to acknowledge. Even in the case of BlackBerry there was a time window of perhaps two years in which to respond. Is that really enough time to re-engineer everything about your product, company, and business?
One way to look at this case is that disruption rarely happens from a single vector or attribute, even though the chorus might claim X disrupts Y because of price or a single feature, for example. We can see this in the case of something like desktop Linux: being lower priced and open source are interesting attributes, but it is fair to say that disruption never really happened to the degree claimed early on.
However, look at Linux in the data center: using Linux for proprietary data center architectures and services, combined with the benefit of open source and low price, brought a much more powerful disruptive capability.
One might take away from this case and other examples that the disruption to watch out for most is the one that combines multiple elements of the traditional marketing mix of product, price, place, and promotion. When considering these dimensions, it is also worth understanding the full breadth of assumptions, both implicit and explicit, in your product and business when defending against disruption. Likewise, if you’re intending to disrupt, you want to consider the multiple dimensions of your approach in order to bypass the intrinsic defenses of incumbents.
It is not difficult to talk about disruption in our industry. As product and business leaders it is instructive to dive into a case of disruption and consider not just all the factors that contributed, but how you would respond personally. Could you really lead a team through the process of creating a product that inverted almost every business and technology assumption behind $80B or so of market cap built over a 10-year period?
In The Sun Also Rises, Hemingway wrote:
“How did you go bankrupt?” “Two ways. Gradually and then suddenly.”
That is how disruption happens.
What happens when the tools and technologies we use every day become mainstream parts of the business world? What happens when we stop leading separate “consumer” and “professional” lives when it comes to technology stacks? The result is a dramatic change in the products we use at work and, with it, an upending of the canon of management practices that define how work is done.
This paper argues that business must embrace the consumer world and see it not as different, less functional, or less enterprise-worthy, but as the new path forward for how people will use technology platforms, how businesses will organize and execute work, and how the roles of software and hardware will evolve in business. Our industry speaks volumes about the consumerization of IT, but maybe that is not going far enough given the incredible pace of innovation and depth of usage in the consumer software world. New tools are appearing that radically alter the traditional definitions of productivity and work. Businesses failing to embrace these changes will find their employees simply working around IT at levels we have not seen even during the earliest days of the PC. Too many enterprises are either flat-out resisting these shifts or hoping for a “transition”—disruption is taking place, not only to every business, but within every business.
Continuous productivity is an era of seamless integration between consumer and business platforms. Today, tools and platforms used broadly for our non-work activities are often used for work, but under the radar. The cloud-powered smartphone and tablet, as productivity tools, are transforming the world around us, along with the implied changes in how we work to be more mobile and more social. We are in a new era, a paradigm shift: a step-function break from the past. This constantly connected, social, and mobile generational shift is ushering in a period on par with the industrial production or the information society of the 20th century. Together our industry is shaping a new way to learn, work, and live with the power of software and mobile computing—an era of continuous productivity.
Continuous productivity manifests itself as an environment where the evolving tools and culture make it possible to innovate more and faster than ever, with significantly improved execution. Continuous productivity shifts our efforts from the start/stop world of episodic work and work products to one that builds on the technologies that start to answer what happens when:
- A generation of new employees has access to the collective knowledge of an entire profession and experts are easy to find and connect with.
- Collaboration takes place across organization and company boundaries with everyone connected by a social fiber that rises above the boundaries of institutions.
- Data, knowledge, analysis, and opinion are equally available to every member of a team in formats that are digital, sharable, and structured.
- People have the ability to time slice, context switch, and proactively deal with situations as they arise, shifting from a world of start/stop productivity and decision-making to one that is continuous.
Today our tools force us to hurry up and wait, then react at all hours to that email or notification of newly available data. Continuous productivity offers the chance of a more balanced view of time management, because we operate in a rhythm with tools that support that rhythm. Rather than feeling on call all the time, waiting for progress or waiting on some person or event, you can simply be more effective as an individual, team, and organization because new tools and platforms enable a new level of sanity.
Some might say this is predicting the present and that the world has already made this shift. In reality, the vast majority of organizations are facing challenges or even struggling right now with how the changes in the technology landscape will impact their efforts. What is going on is nothing short of a broad disruption—even winning organizations face an innovator’s dilemma in how to develop new products and services, organize their efforts, and communicate with customers, partners, and even within their own organizations. This disruption is driven by technology, and is not just about the products a company makes or services offered, but also about the very nature of companies.
The starting point for this revolution in the workplace is the socialplace we all experience each and every day.
We carry out our non-work (digital) lives on our mobile devices. We use global services like Facebook, Twitter, Gmail, and others to communicate. In many places in the world, local services such as Weibo, Mxit, mail.ru, and dozens of others are used routinely, by well over a billion people collectively. Entertainment services from YouTube and Netflix to Spotify and Pandora dominate non-TV entertainment, and increasingly the Internet itself. Relatively new services such as Pinterest or Instagram enter the scene and are used deeply by tens of millions within a relatively short time.
While almost all of these services are available on traditional laptop and desktop PCs, the incredible growth in usage from smartphones and tablets has come to represent not just the leading edge of the scenario, but the expected norm. Product design is done for these experiences first, if not exclusively. Most would say that designing for a modern OS first or exclusively is the expected way to start on a new software experience. The browser experience (on a small screen or desktop device) is the backup to a richer, more integrated, more fluid app experience.
In short, the socialplace we are all familiar with is part of the fabric of life in much of the world and only growing in importance. The generation growing up today will of course only know this world and what follows. Around the world, the economies undergoing their first information revolutions will do so with these technologies as the baseline.
Briefly, it is worth reflecting on and broadly characterizing some of the history of the workplace, to help place the dramatic changes into historical context.
The industrial revolution that defined the first half of the 20th century marked the start of modern business, typified by high-volume, large-scale organizations. Mechanization created a culture of business derived from the capabilities and needs of the time. The essence of mechanization was the factory, which focused on ever-improving and repeatable output. Factories were owned by those infusing capital into the system, and the culture of owner, management, and labor grew out of this reality. Management itself was very much about hierarchy, with a clear separation between labor and management, and management primarily answerable to owners and ownership.
The information available to management was limited. Supply chains and even assembly lines themselves were operated with little telemetry or understanding of the flow of raw materials through to sales of products. Even great companies ultimately fell because they lacked the ability to gather insights across this full spectrum of work.
The problems created by the success of mechanized production were met with a solution: the introduction of the computer and the start of the information revolution. The mid-20th century would kick off a revolution in business marked by global and connected organizations. Knowledge created a new culture of business derived from the information gathering and analysis capabilities of first the mainframe and then the PC.
The essence of knowledge was the people-centric office, which focused on ever-improving analysis and decision-making to allocate capital, develop products and services, and coordinate work across the globe. The modern organization model of a board of directors, executives, middle management, and employees grew out of these new capabilities. Management of these knowledge-centric organizations happened through an ever-increasing network of middle managers. The definition of work changed: most employees were no longer directly involved in making things, but in analyzing, coordinating, or servicing the products and services a company delivered.
The information available to management grew exponentially. Middle management grew to spend their time researching, tabulating, reporting, and reconciling the available information sources. Information spanned from quantitative to qualitative, and the successful leaders were expert, or at least well versed, not just in navigating and validating information, but in using it to effectively influence the organization as a whole. Knowledge was power in this environment. Management took over the role of resource allocation from owners and focused on decision-making as the primary effort, using knowledge and the skills of middle management to inform those choices.
A symbol of knowledge productivity might be the meeting. Meetings came to dominate the culture of organizations: meetings to decide what to meet about, meetings to confirm that people were on the same page, meetings to follow up from other meetings, and so on. Management became very good at justifying meetings and the work that went into preparing for, having, and following up from them. Power derived from holding meetings, creating follow-up items, and more. The work products of meetings—the pre-reading memos, the presentations, the supporting analytics—began to take on epic proportions. Staff organizations developed that shadowed the whole process.
The essence of these meetings was to execute on a strategy: a multi-year commitment to create value and defend against competition. Much of the headquarters mindset of this era was devoted to strategic analysis and planning.
The very best companies became differentiated by their use of information technologies in now legendary ways such as to manage supply chain or deliver services to customers. Companies like Wal-Mart pioneered the use of technology to bring lower prices and better inventory management. Companies like the old MCI developed whole new products based entirely on the ability to write software to provide new ways of offering existing services.
Even with the broad availability of knowledge and information, companies still became trapped in the old ways of doing things, unable to adapt and change. The role of disruption as a function not just of technology development but of management decision-making showed the intricate relationship between the two. With this era of information technology came the notion of companies too big and too slow to react to changes in the marketplace, even with the information right there in front of their collective eyes.
The impact of software, as we finished the first decade of the 21st century, is more profound than even the most optimistic software people would have predicted. As the entrepreneur and venture capitalist Marc Andreessen wrote two years ago, “software is eating the world”. Software is no longer just about the internal workings of business or a way to analyze information and execute more efficiently; it has come to define what products a business develops, offers, and serves. Software is now the product, from cars to planes to entertainment to banking and more. Every product not only has a major software component, but is also viewed and evaluated through the lens of software. Software is ultimately the product, or at least a substantial part of the differentiation, for every product and service.
Today’s workplace: Continuous Productivity
Today’s workplace is as different from the office as the office was from the factory.
Today’s organizations are either themselves mobile or serving customers that are mobile, or likely both. Mobility is everywhere we look, from consumer apps to salespeople in stores to cash registers and plane tickets. With mobility comes an unprecedented degree of freedom and flexibility—freedom from locality, limited information, and the desktop computer.
The knowledge-based organization spent much energy connecting the dots between qualitative sampling and data sourced from what could be measured. Much effort went into getting more sources of data and seeking the exact right answer to important management decisions. Today’s workplace has access to more data than ever before, but along with that came the understanding that data is not right just because it came from a computer. Data today is telemetry based on usage from all aspects of the system, going beyond sampling and surveys. The use of data today replaces algorithms seeking exact answers with heuristics informed by data, guessing the best answer using a moment’s worth of statistics. Today’s answers change over time as more usage generates more data. We no longer spend countless hours debating causality because what is happening is right there before our eyes.
We see this all the time in the promotion of goods on commerce sites, in keyword search and SEO, in the way search corrects spellings, and in the way maps use a vast array of data to narrow a potentially very large set of results from queries. Technologies like speech or vision have gone from trying to compute the exact answer to using real-time data to provide contextually relevant, and even more accurate, guesses.
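To make the spell-correction example concrete, here is a minimal sketch of the data-informed approach, assuming Python and a tiny hand-coded usage table standing in for real telemetry (an illustration of the general idea, not any particular search engine’s algorithm). Rather than computing one exact answer, it generates candidate words one edit away and lets accumulated usage data pick the likeliest guess:

```python
from collections import Counter

# Hypothetical usage log: in practice this would be telemetry gathered
# from millions of real queries, not a handful of hard-coded words.
usage = Counter({"meeting": 900, "marketing": 450, "mating": 12})

LETTERS = "abcdefghijklmnopqrstuvwxyz"

def edits1(word):
    """All strings one simple edit (delete, swap, replace, insert) away."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    swaps = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    replaces = [a + c + b[1:] for a, b in splits if b for c in LETTERS]
    inserts = [a + c + b for a, b in splits for c in LETTERS]
    return set(deletes + swaps + replaces + inserts)

def correct(word):
    """Guess the intended word: prefer the statistically likeliest
    candidate seen in usage data rather than a single 'exact' answer."""
    if word in usage:
        return word
    candidates = edits1(word) & usage.keys()
    return max(candidates, key=usage.get) if candidates else word

print(correct("meetng"))  # -> "meeting", the most frequent close match
```

The posture is the point: the “right” answer is simply whatever the accumulated usage data currently suggests, and it improves as more data arrives.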
The availability of these information sources is moving from the hierarchical access model of the past to a much more collaborative, sharing-first approach. Every member of an organization should have access to the raw “feeds” that could be material to their role. Teams become the focus of collaborative work, empowered by data to inform their decisions. We see the increasing use of “crowds” and of product usage telemetry to guide improved services and products, based not on qualitative sampling plus “judgment” but on what amounts to a census of real-world usage.
Information technology is at the heart of all of these changes, just as it was in the knowledge era, but the technologies are vastly different. The mainframe was about centralized information and control. The PC era empowered people first to take mainframe data and make better use of it, and later to create new, but inherently local or workgroup-specific, information sources. Today’s cloud-based services serve entire organizations easily and can span the globe, organizations, and devices. This is such a fundamental shift in the availability of information that it changes everything about how information is collected, shared, and put to use. It changes everything about the tools used to create, analyze, synthesize, and share information.
Management using yesterday’s techniques can’t seem to keep up with this world. People in organizations are overwhelmed by the power of their customers armed with all this information (such as when social networks create a backlash about an important decision, or when we visit a car dealer armed with local pricing information). Within organizations, managers are constantly trying to stay ahead of the curve. The “young” employees seem to know more about what is going on because of Twitter and Facebook, or just from being constantly connected. Even information about the company is no longer the sole domain of management: the press can uncover, or at least speculate about, the workings of a company, and employees see this speculation long before management communicates with them. Where people used to sit in important meetings and listen to important people guess about information, people now get real data from real sources in real time while the meeting is taking place, or even before.
This symbol of the knowledge era, the meeting, is under pressure because of its inefficiency compared to learning and communicating via today’s technology tools. Why wait for a meeting when everyone has the information required to move forward available on their smartphones? Why put all that work into preparing a perfect pitch for a meeting when the data is changing and is a guess anyway, likely to be further informed as the work progresses? Why slow down when competitors are speeding up?
There’s a new role for management that builds on this new level of information and on employees skilled in using it. Much like those who grew up with the PC “natively” were quick to assume its usage in the workplace (some might remember the novelty of managers first beginning to answer their own email), those who grow up with the socialplace are using it to do work, much to the chagrin of management.
Management must assume a new type of leadership that is focused on framing the outcome, the characteristics of decisions, and the culture of the organization, and much less on specific decision-making or reviewing work. The role of workplace technology has evolved significantly from theory to practice as a result of these tools. The following table contrasts historic norms of work with continuous productivity.
| Then | Now, Continuous Productivity |
| --- | --- |
| Hierarchy, top down or middle out | Network, bottom up |
| Internal committees | Internal and external teams, crowds |
| Presenting packaged and produced ideas, documents | Sharing ideas and perspectives continuously, service |
| Data based on snapshots at intervals, viewed statically | Data always real-time, viewed dynamically |
| Exact answers | Approximation and iteration |
| More users | More usage |
Today’s workplace technology, theory
Modern IT departments, fresh off the wave of PC standardization and broad homogenization of the IT infrastructure, developed the tools and techniques to maintain, nay contain, that infrastructure.
A significant part of the effort involved managing the devices that access the network, primarily the PC. Management efforts ran the gamut from logon scripts, drive scanning, and anti-virus software to standard (or only) software loads, imaging, two-factor authentication, and more. Motivating this were the longstanding reliability and security problems of the connected laptop—the architecture’s openness, so responsible for the rise of the device, also created this fragility. We can see this expressed in two symbols of the challenges faced by IT: the corporate firewall and collaboration. Both offer good theory but somewhat backfire in practice in today’s context.
With the rise of the Internet, the corporate firewall occupied a significant amount of IT effort. It also came to symbolize the barrier between employees and information resources. At the extremes, companies would routinely block known “time wasters” such as social networks and free email. Then over time, as the popularity of some services grew, the firewall would be selectively opened for business purposes. YouTube and other streaming services are examples of consumer services that transitioned to an approved part of enterprise infrastructure, given the value of the information available. While many companies might view Twitter as a time-wasting service, PR departments routinely use it to track news, and customer service might use it to understand problems with products, so it too becomes an expected part of the infrastructure. Thus the “cracks” in the notion of enterprise v. consumer software started to appear.
Traditionally the meeting came to symbolize collaboration. The business meeting which occupied so much of the knowledge era has taken on new proportions with the spread of today’s technologies. Businesses have gone to great lengths to automate meetings and enhance them with services. In theory this works well and enables remote work and virtual teams across locations. In practical use, for many users the implementation was burdensome and did not support the wide variety of devices or cross-organization scenarios required. The merger of meetings with the traditional tools of meetings (slides, analysis, memos) was cumbersome as well, since sharing these across the spectrum of devices and tools was awkward. We are all familiar with the first 10 minutes of every meeting now turning into a technology time sink where people get connected in a variety of ways and then sync up with the “old tools” of meetings while using new tools in the background.
Today’s workplace technology, practice
In practice, the ideal view that IT worked to achieve has been rapidly circumvented by the low-friction, high availability of a wide variety of faster, easier, more flexible, and very low-cost tools that address problems in need of solutions. Even though this is somewhat of a repeat of the introduction of PCs in the early 1990s, this time around securing or locking down the usage of these services is far more challenging than preventing network access and isolating a device. The Internet works to make this so, by definition.
Today’s organizations face an onslaught of personally acquired tablets and smartphones that are becoming, or already are, the preferred devices for accessing information and communication tools. As anyone who uses a smartphone knows, accessing your inbox from your phone quickly becomes the preferred way to deal with the bulk of email. How often do people use their phones to quickly check mail even while sitting in front of a PC that is on and ready? How much faster is it to triage email on a phone than on your PC?
These personal devices are seen in airports, hotels, and business centers around the world. The long battery life, fast startup time, (relatively) maintenance-free operation, and of course the wide selection of new apps for a wide array of services make them very attractive.
There is an ongoing debate about “productivity” on tablets. In nearly all ways this debate was never a debate, but just a matter of time. While many look at existing scenarios to be replicated on a tablet as a measure of success of tablets at achieving “professional productivity”, another measure is how many professionals use their tablets for their jobs and leave their laptops at home or work. By that measure, most are quick to admit that tablets (and smartphones) are a smashing success. The idea that tablets are used only for web browsing and light email seems as quaint as claiming PCs cannot do the work of mainframes—a common refrain in the 1980s. In practice, far too many laptops have become literally desktops or hometops.
While tools such as AutoCAD, Creative Suite, or enterprise line-of-business applications will require PCs for many years to come, the definition of professional productivity will come to include all the tasks that can be accomplished on smartphones and tablets. The nature of work is changing, and so the reality of the tools in use is changing as well.
Perhaps the most pervasive services for work use are cloud-based storage products such as Dropbox, Hightail (formerly YouSendIt), or Box. These products are acquired easily by consumers, have straightforward browser-based interfaces and apps on all devices, and most importantly solve real problems of modern information sharing. The basic scenario of sharing large files with customers or partners (or even fellow employees) across heterogeneous devices and networks is easily addressed by these tools. As a result, expensive and elaborate (and often much richer) enterprise infrastructure goes unused for this most basic of business needs—sharing files. Even the ubiquitous USB memory stick is used to get around the limitations of enterprise storage products, much to the chagrin of IT departments.
Tools beyond those approved for communication are routinely used by employees on their personal devices (except, of course, in regulated industries). Tools such as WhatsApp or WeChat have hundreds of millions of users. A quick look at Facebook or Twitter shows that for many of those actively engaged, sharing work information, especially news about products and companies, is a very real effort that goes beyond “the eggs I had for breakfast”, as social networks have sometimes been characterized. LinkedIn has become the go-to place for sales people learning about customers and partners and for recruiters seeking to hire (or headhunt), and it is increasingly a primary source of editorial content about work and the workplace. Leading strategists are routinely read by hundreds of thousands of people on LinkedIn, and their views are shared across the networks employees maintain with fellow employees. It has become challenging for management to “compete” with the level and volume of discourse among employees.
The list of devices and services routinely used by workers at every level is endless. The reality appears to be that for many employees the number of hours spent in front of approved enterprise apps on managed enterprise devices is on the decline, unless new tablets and phones have been approved. The consumerization of IT appears to be very real, judging just by anecdotal observation of the devices in use on public transportation, in airports, and in hotels. Certainly the conversation among people in suits over what to bring on trips is real and rapidly tilting towards “tablet for trips”, if not already there.
The frustration people have with IT over delivering or approving the use of services is readily apparent, just as is the frustration IT has with people pushing to use insecure, unapproved, and hard-to-manage tools and devices. Whenever IT puts in a barrier, it is just a big rock in the information river that is an organization, and information just flows around it. Forward-looking IT is working diligently to get ahead of this challenge, but the models used to rein in control of PCs and servers on corporate premises will prove of limited utility.
A new approach is needed to deal with this reality.
Transition versus disruption
The biggest risk organizations face is in thinking the transition to a new way of working will be just that, a transition, rather than a disruption. While individuals within an organization, particularly those in senior management, will seek to smoothly transition from one style of work to another, the bulk of employees will switch quickly. Interns, new hires, and employees looking for an edge see these changes as the new normal, or the only normal they have ever experienced. Our own experience with PCs is proof of how quickly change can take place.
In Only the Paranoid Survive, Andy Grove described breaking the news of a new strategy to employees at Intel, only to find out that employees had long ago concluded the change was needed—much to the surprise of management. The nature of a disruptive change in management is one in which management believes it is planning a smooth transition to new methods or technologies, only to find that employees have already adopted them.
Today’s technology landscape is undergoing a disruptive change in the enterprise: the shift to cloud-based services, social interaction, and mobility. There is no smooth transition that will take place. Businesses that believe people will gradually move from yesterday’s modalities of work to these new ways will be surprised to learn that people are already working in these new ways. Technologists seeking solutions that “combine the best of both worlds” or offer a “technology bridge” will only find themselves comfortably dipping a toe in the water, further solidifying an old approach while competitors race past them. The nature of disruptive technologies is the relentless all-or-nothing they impose as they charge forward.
While some might believe that continuing to focus on “the desktop” will enable a smoother transition to mobile (or consumer) while the rough edges are worked out or capabilities catch up to what we already have, this is precisely the innovator’s dilemma: hunkering down and hoping things will not take place as quickly as they seem to be. In fact, to solidify this point of view, many will point to the lack of a precipitous decline or to the mission-critical nature of traditional ways of working. The tail is very long, but innovation and competitive edge will not come from the tail. Too much focus on the tail risks being left behind, or at the very least distracts from where things are rapidly heading. Compatibility with existing systems has significant value, but is unlikely to bring about more competitive offerings, better products, or step-function improvements in execution.
Culture of continuous productivity
The culture of continuous productivity enabled by new tools amounts to a rewrite of the past 30 years of management doctrine. Hierarchy, top-down decision making, strategic plans, static competitors, single-sided markets, and more are almost quaint views in a world flattened by the presence of connectivity, mobility, and data. The impact of continuous productivity can be viewed through the organization, individuals and teams, and the role of data.
The social and mobile aspects of work finally gain the support of digital tools, and with those tools comes the realization of just how much of nearly every work process is intrinsically social. The existence and paramount importance of “document creation tools” as the definition of work appear, in hindsight, to have been a slight detour of our collective focus. Tools can now work more the way we like to work, rather than forcing us to structure our work to suit the tools. Every new generation of tools comes with promises of improvements, but we have already seen how the newest styles of work lead to improvements in our lives outside of work. Where it used to be novel for the person with a PC to use those tools to organize a sports team or school function, now we see the reverse: the tools for the rest of life are being used to improve our work.
This existence proof makes this revolution different. We already experience the dramatic improvements in our social and non-work “processes”. With the support and adoption of new tools, just as our non-work lives saw improvements we will see improvements in work.
The cultural changes encouraged or enabled by continuous productivity include:
- Innovate more and faster. The bottom line is that by compressing the time between meaningful interactions between members of a team, we will go from problem to solution faster. Whether solving a problem with an existing product or service or thinking up a new one, the continuous nature of communication speeds up the velocity and quality of work. We all experience the pace at which changes outside work take place compared to the slow pace of change within our workplaces.
- Flatten hierarchy. The difficulty of broad communication, the formality of digital tools, and restrictions on the flow of information all fit perfectly with a strict hierarchical model of teams. Managers “knew” more than others. Information flowed down. Management informed employees. Equal access to tools and information, a continuous multi-way dialog, and the ease of bringing together relevant parties regardless of place in the organization flatten the hierarchy. But more than that, they shine a light on the ineffectiveness and irrelevancy of a hierarchy as a command structure.
- Improve execution. Execution improves because members of teams have access to the interactions and data in real time. Gone are the days of the “game of telephone”, where information needed to “cascade” through an organization only to be reinterpreted or even filtered at each level.
- Respond to changes using telemetry/data. With the advent of continuous real-world usage telemetry, the debate and dialog move from deciding what the problems to be solved might be to actually solving them. You don’t spend energy arguing over the problem, but debating the merits of various solutions.
- Strengthen organization and partnerships. Organizations that communicate openly and transparently leave much less room for politics and hidden agendas. The transparency afforded by tools might introduce some rough and tumble in the early days as new “norms” are created, but over time the ability to collaborate will only improve, given the shared context and information base everyone works from.
- Focus on the destination, not the journey. The real-time sharing of information forces organizations to operate in real-time. Problems are in the here and now and demand solutions in the present. The benefit of this “pressure” is that a focus on the internal systems, the steps along the way, or intermediate results is, out of necessity, de-emphasized.
Organization culture change
Continuously productive organizations look and feel different from traditional organizations. As a comparison, consider how different a reunion (college, family, etc.) is in the era of Facebook usage. When everyone gets together there is so much more that is known—the reunion starts from shared context and “intimacy”. Organizations should be just as effective, no matter how big or how geographically dispersed.
Effective organizations were previously defined by rhythms of weekly, monthly, and quarterly updates. These “episodic” connection points had high production values (and costs) and, ironically, relatively low retention and usage. Management liked this approach because it placed a high value on, and required, active management as distinct from the work itself. Tools were designed to run these meetings or email blasts, but over time these were far too often over-produced and tended to be used more for backward-looking pseudo-accountability.
Looking ahead, continuously productive organizations will be characterized by the following:
- Execution-centric focus. Rather than indexing on the process of getting work done, the focus will shift dramatically to execution. The management doctrine of the late 20th century was about strategy. For decades we all knew that strategy took a short time to craft, yet in practice the strategy process almost took on a life of its own. This often led to an ever-widening gap between strategy and execution, with execution left to those of less seniority. When everyone has the ability to know what can be known (which isn’t everything) and to know what needs to be done, execution reigns supreme. The opportunity to improve or invent will be everywhere, and even with finite resources available, the biggest failure of an organization will be a failure to act.
- Management framing context with teams deciding. Because information required discovery and flowed (deliberately) inefficiently, management tasked itself with deciding “things”. The entire process of meetings degenerated into a ritualized exercise to inform management deciding amongst options, while outside the meeting “everyone” always seemed to know what to do. The new role of management is to provide decision-making frameworks, not decisions. Decisions need to be made where there is the most information. Framing the problem to be solved out of the myriad of problems, and communicating that efficiently, is the new role of management.
- Outside is your friend. Previously the prevailing view was that there was more information inside companies than outside, and the outside was often viewed as poorly informed or incomplete. The debate over just how much wisdom resides in the crowd will continue, and what distinguishes companies with competitive products will certainly be how they navigate the crowd and simultaneously serve both articulated and unarticulated needs. For certain, the idea that the outside is an asset to the creation of value, not just the destination of value, is enabled by the tools and continuous flow of information.
- Employees see management participate and learn; everyone has the tools of management. It took practically 10 years from the introduction of the PC until management embraced it as a tool for everyday use. The revolution of social tools is totally different, because today management already uses the socialplace tools outside of work. Using Twitter for work is little different from using Facebook for family. Employees expect management to participate directly and personally, whether the tool is a public cloud service or a private/controlled service. The idea of having an assistant participate on behalf of a manager with a social tool is as archaic as printing out email and typing in handwritten replies. Management no longer has separate tools or a different (more complete) set of books for the business; rather, information about projects and teams becomes readily accessible.
- Individuals own devices, organizations develop and manage IP. PCs were first acquired by individual tech enthusiasts or leading-edge managers, and later by organizations. Over time PCs became physical assets of organizations. As organizations focused more on locking down and managing those assets, and as individuals more broadly had their own PCs, there was a decided shift to being able to just “use a computer” when needed. The ubiquity of mobile devices, almost from the arrival of smartphones and certainly of tablets, has placed these devices squarely in the hands of individuals. The tablet is mine. And because it is so convenient for the rest of my life, and I value doing a good job at work, I’m more than happy to do work on it “for free”. In exchange, organizations are rapidly moving to tools and processes that identify the work products, not the devices, as organization IP. Cloud-based services become the repositories of IP, and devices access that IP through managed credentials.
Individuals and teams work differently
The new tools and techniques come together to improve upon the way individuals and teams interact. Just as the first communication tools transformed business, the tools of mobile and continuous productivity change the way interactions happen between individuals and teams.
- Sense and respond. Organizations through the PC era were focused on planning and reacting cycles. The long lead time to plan, combined with the time to plan a reaction to events that were themselves often measured with delay, characterized “normal”. New tools are much more real-time, and the information presented represents the whole of the information at work, not just samples and surveys. Work will focus much more on everyone acting as sensors for what is going on and responding in real time. Think of the difference between calling for a car or hailing a cab and using Uber or Lyft (from either the consumer perspective or the business perspective of load-balancing cars and knowing the assets at hand) as representative of sensing and responding rather than planning.
- Bottom up and network centric. The idea of management hierarchy or middle management as gatekeepers is being broken down by the presence of information and connectivity. The modern organization working to be the most productive will foster an environment of bottom up: people closest to the work are empowered with information and tools to respond to changes in the environment. These “bottoms” of the organization will be highly networked with each other and connected to customers, partners, and even competitors. The “bandwidth” of this network is seemingly instant, facilitated by information sharing tools.
- Team and crowd spanning the internal and external. The barriers of an organization will carry less and less meaning when it comes to the networks created by employees. Nearly all businesses at scale are highly virtualized across vendors, partners, and customers. Collaboration on product development, implementation, and support takes place across information networks as well as human networks. The “crowd” is no longer a mob characterized by comments on a blog post or web site, but can be structured and systematically tapped, with rich demographic information, to inform decisions and choices.
- Unstructured work rhythm. The highly structured approach to work that characterized the 20th century was created out of a necessity for gathering, analyzing, and presenting information for “costly” gatherings of time-constrained people and expensive computing. With the pace of business and product change enabled by software, there is far less structure required in the overall work process. The rhythm of work is much more like routine social interaction and much less like daily, weekly, and monthly staff meetings. Industries like news gathering have already seen these radical transformations, as one example.
Data becomes pervasive (and big)
With software capabilities come ever-increasing data and information. While the 20th century enabled the collection of data and, to a large degree, the analysis of data to yield ever-improving decisions in business, the prevalence of continuous data again transforms business.
- Sharing data continuously. First and foremost, data will now be shared continuously and broadly within organizations. The days when reports were something for management, and management waited until the end of the week or month to disseminate filtered information, are over. Even though financial data has been relatively available, we are now able to see how products are used, troubleshoot problems customers might be having, understand the impact of small changes, and try out alternative approaches. Modern organizations will provide tools that enable the continuous sharing of data through mobile-first apps that don’t require connectivity to corporate networks or systems chained to desktop resources.
- Always up to date. The implication of continuously shared information is that everyone is always up to date. When having a discussion or meeting, the real-world numbers can be pulled up right then and there in the hallway or meeting room. Members of teams don’t spend time figuring out whether they agree on the numbers, where they came from, or when they were “pulled”. Rather, the tools define the numbers people are looking at, and the data in those tools is the one true set of facts.
- Yielding the best statistical approach informed by telemetry (induction). The notion that there is a single “right” answer is as antiquated as the printed report. We can now all admit that going to a meeting with a printed copy of “the numbers” is not worth the debate over the validity or timeframe of those numbers (“the meeting was rescheduled, now we have to reprint the slides”). Meetings now are informed by live data using tools such as Mixpanel or live reporting from Workday, Salesforce, and others. We all know now that “right” is the enemy of “close enough”, given that the datasets we work with are truly based on a census and not surveys. This telemetry facilitates an inductive approach to decision-making; a minimal sketch of the idea follows this list.
- Valuing more usage. Because of the ability to truly understand the usage of products—movies watched, bank accounts used, limousines taken, rooms booked, products browsed and more—the value of having more people using products and services increases dramatically. Share matters more in this world because with share comes the best understanding of potential growth areas and opportunities to develop for new scenarios and new business approaches.
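To illustrate the census-not-survey point in code, here is a minimal sketch, again in Python; the LiveMetric class and the hard-coded event feed are hypothetical stand-ins for a real telemetry pipeline such as the live reporting tools named above. Every usage event updates the metric, so the answer is always current and always approximate:

```python
from dataclasses import dataclass

@dataclass
class LiveMetric:
    """A continuously updated metric: every telemetry event refines
    the estimate, so the answer is always current, always approximate."""
    events: int = 0
    successes: int = 0

    def record(self, success: bool) -> None:
        # Each real-world usage event is part of the census itself,
        # not a sample drawn for a periodic report.
        self.events += 1
        self.successes += int(success)

    @property
    def rate(self) -> float:
        # Best current estimate; it shifts as more usage generates more data.
        return self.successes / self.events if self.events else 0.0

# Hypothetical event feed standing in for live checkout telemetry.
checkout = LiveMetric()
for ok in [True, True, False, True, False, True, True]:
    checkout.record(ok)
    print(f"after {checkout.events} events: conversion ~ {checkout.rate:.0%}")
```

The design point is induction: rather than waiting for a “final” number, everyone reads the same live estimate, and the estimate simply improves as usage accumulates.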
New generation of productivity tools, examples and checklist
Bringing together new technologies and new methods for management has implications that go beyond the obvious and immediate. We will all certainly be bringing our own devices to work, accessing and contributing to work from a variety of platforms, and seeing our work take place across organization boundaries with greater ease. We can look very specifically at how things will change across the tools we use, the way we communicate, how success is measured, and the structure of teams.
Tools will be quite different from those that grew up through the desktop PC era. At the highest level the implications about how tools are used are profound. New tools are being developed today—these are not “ports” of existing tools for mobile platforms, but new interpretations of tools or new combinations of technologies. In the classic definition of the innovator’s dilemma, these new tools are less functional than the current state-of-the-art desktop tools. These new tools have features and capabilities that are either unavailable or suboptimal at an architectural level in today’s ubiquitous tools. It will be some time, if ever, before new tools have all the capabilities of existing tools. By now, this pattern of disruptive technologies is familiar (digital cameras, online reading, online videos, digital music, and so on).
The user experience of this new generation of productivity tools takes on a number of attributes that contrast with existing tools, including:
- Continuous v. episodic. Historically work took place in peaks and valleys. Rough drafts created, then circulated, then distributed after much fanfare (and often watering down). The inability to stay in contact led to a rhythm that was based on high-cost meetings taking place at infrequent times, often requiring significant devotion of time to catching up. Continuously productive tools keep teams connected through the whole process of creation and sharing. This is not just the use of adjunct tools like email (and endless attachments) or change tracking used by a small number of specialists, but deep and instant collaboration, real-time editing, and a view that information is never perfect or done being assembled.
- Online and shared information. The old world of creating information was based on deliberate sharing at points in time. Heavyweight sharing of attachments led to a world where each of us became “merge points” for work. We worked independently in silos, hoping not to step on each other, never sure where the true document of record might be or even who had permission to see a document. New tools are online all the time and shared by default, so everyone is up to date all the time.
- Capture and continue. The episodic nature of work products, along with the general pace of organizations, created an environment where the “final” output carried with it significant meaning (to some). Yet how often do meetings take place where the presenter apologizes for data that is out of date relative to the image of a spreadsheet or org chart embedded in a presentation or memo? Working continuously means capturing information quickly and in real time, then moving on. There are very few end points or final documents. Working with customers and partners is a continuous process, and the information is continuous as well.
- Low startup costs. Implementing a new system used to be a time-consuming and elaborate process viewed as a multi-year investment and deployment project. Tools came to define the work process and, more critically, made it impossibly difficult to change the work process. New tools are experienced the same way we experience everything on the Internet—we visit a site or download an app and give it a try. The cost of starting up is a low-cost subscription or even a trial. Over time more features can be purchased (more controls, more depth), but the key is the very low cost of beginning to try out a new way to work. Work needs change as market dynamics change, and the era of tools preventing change is over.
- Sharing inside and outside. We are all familiar with the challenges of sharing information beyond corporate boundaries. Management and IT are, rightfully, protective of assets. Individuals struggle with the basics of getting files through firewalls and email guards. The result is a set of solutions today that few are happy with. Tools are rapidly evolving to use real identities to enable sharing when needed and cross-organization connections as desired. If IT fails to adopt these approaches, it will be left watching assets leak out and workarounds continue unabated.
- Measured enterprise integration. The PC era came to be defined at first by empowerment, as leading-edge technology adopters brought PCs to the workplace. The mayhem this created was then controlled by IT, which became responsible for keeping PCs running, keeping information and networks secure, and enforcing consistency in organizations for the sake of sharing and collaboration. Many might (perhaps wrongly) conclude that the consumerization wave defined here means IT has no role in these tasks. Rather, the new era is defined by a measured approach to IT control and integration. Tools for identity and device management will come to define how IT integrates and controls—customization or picking and choosing code is neither likely nor scalable across the plethora of devices and platforms that people will use to participate in work processes. The net is to control enterprise information flow, not enterprise information endpoints.
- Mobile first. As an example of the transition between old and new, many see the ability to view email attachments on mobile devices as a way forward. New tools, however, show this to be merely a bridge solution, as mobility will come to trump most everything for a broad set of people. Deep design for architects, spreadsheets for analysts, or computation for engineers are examples that will likely remain stationary, or at least require unique computing capabilities, for some time. We will all likely be surprised by the pace at which even these “power” scenarios transition, at least in part, to mobile. The value of being able to make progress while close to the site, the client, or the problem will become a huge asset for those that approach their professions that way.
- Devices in many sizes. Until there is a radical transformation of user-machine interaction (input, display), it is likely almost all of us will continue to routinely use devices of several sizes, and those sizes will tend to gravitate towards different scenarios (see http://blog.flurry.com/bid/99859/The-Who-What-and-When-of-iPhone-and-iPad-Usage), though commonality in the platforms will allow for overlap. This overlap will continue to be debated as “compromise” by some. It is certain we will all have a device that we carry and use almost all the time, the “phone”. A larger screen device will continue to better serve many scenarios or just provide a larger screen area upon which to operate. Some will find a small tablet meets their needs almost all of the time. Others will prefer a larger tablet, perhaps with a keyboard. It is likely we will see somewhat larger tablets arise as people look to use modern operating systems as full-time replacements for existing computing devices. The implication is that tools will be designed for different device sizes and input modalities.
It is worth considering a few examples of these tools. As an illustration, the following lists tools in a few generalized categories of work processes. New tools are appearing almost every week, as the opportunity for innovation in the productivity space is at a unique inflection point. These examples are just a few tools that I’ve personally had a chance to experience—I suspect (and hope) that many will want to expand these categories and suggest additional tools (or use this as a springboard for a dialog!).
- Creation. Quip, Evernote, Paper, Haiku Deck, Lucidchart
- Storage and Sharing. Box, Dropbox, Hightail
- Reporting. Mixpanel, Quantifind
- Communications. WhatsApp, Anchor, Voxer
- Tracking. Asana, Todoist, Relaborate
- Training. Udacity, Thinkful, Codecademy
The architecture and implementation of continuous productivity tools will also be quite different from the architecture of existing tools. This starts by targeting a new generation of platforms, sealed-case platforms.
The PC era was defined by a level of openness in architecture that created the opportunity for innovation and creativity that led to the amazing revolution we all benefit from today. An unintended side-effect of that openness was the inherent unreliability over time, the security challenges, and the general futzing that have come to define the experience many lament. The new generation of sealed-case platforms (that is, hardware, software, and services with different points of openness relative to previous norms in computing) provides for an experience that is more reliable over time, more secure and predictable, and less time-consuming to own and use. The tradeoff seems dramatic (or draconian) to those versed in old platforms where tweaking and customizing came to dominate. In practice, the movement of the platform up the stack, so to speak, will free up enormous amounts of IT budget and resources to allow a much broader focus on the business. In addition, choice, flexibility, simplicity in use, and ease of using multiple devices, along with a relative lack of futzing, will come to define this new computing experience for individuals.
The sealed-case platforms include iOS, Android, Chromebooks, Windows RT, and others. These platforms are defined by characteristics such as minimizing APIs that manipulate the OS itself, APIs that enforce lower power utilization (defined background execution), cross-application security (sandboxing), relative assurances that apps do what they say they will do (permissions, app stores), defined semantics for exchanging data between applications, and enforced controls on access to both user data and app state data. These platforms are all relatively new, and the “rules” for just how sealed a platform might be, and how this level of control will evolve, are still being written by vendors. In addition, devices themselves demonstrate the ideals of the sealed case by restricting the attachment of peripherals and reducing the reliance on kernel-mode software written outside the OS itself. For many this evolution is as controversial as the transition automobiles made from “user-serviceable” to electronically controlled engines, but the benefits to the humans using the devices are clear.
Building on the sealed case platform, a new generation of applications will exhibit a significant number of the following attributes at the architecture and implementation level. As with all transitions, debates will rage over the relative strength or priority of one or more attributes for an app or scenario (“is something truly cloud” or historically “is this a native GUI”). Over time, if history is any guide, the preferred tools will exhibit these and other attributes as a first or native priority, and de-prioritize the checklists that characterized the “best of” apps for the previous era.
The following is a checklist of attributes of tools for continuous productivity:
- Mobile first. Information will be accessed and actions will be performed mobile first for a vast majority of both employees and customers. Mobile first is about native apps, which is likely to create a set of choices for developers as they balance different platforms and different form factors.
- Cloud first. Information we create will be stored first in the cloud, and when needed (or possible) will sync back to devices. The days of all of us focusing on the tasks of file management and thinking about physical storage have been replaced by essentially unlimited cloud storage. With cloud storage comes multi-device access and instant collaboration that spans networks. Search becomes an integral part of the user experience, along with labels and metadata, rather than a physical hierarchy presenting only a single dimension. Export to broadly used interchange formats and printing remain as critical and archival steps, but not the primary way we share and collaborate.
- User experience is platform native or browser exploitive. Supporting mobile apps is a decision to fully use and integrate with a mobile platform. While using a browser can and will be a choice for some, even then it will become increasingly important to exploit the features unique to a browser. In all cases, the usage within a customer’s chosen environment encourages the full range of support for that platform environment.
- Service is the product, product is the service. Whether an internal IT or a consumer facing offering, there is no distinction where a product ends and a continuously operated and improving service begins. This means that the operational view of a product is of paramount importance to the product itself and it means that almost every physical product can be improved by a software service element.
- Tools are discrete, loosely coupled, limited surface area. The tools used will span platforms and form factors. When used this way, monolithic tools that require complex interactions will fall out of favor relative to tools more focused in their functionality. Doing a smaller set of things with focus and alacrity will provide more utility, especially when these tools can be easily connected through standard data types or intermediate services such as sharing, storage, and identity.
- Data contributed is data extractable. Data that you add to a service as an end-user is easily extracted for further use and sharing. A corollary to this is that data will be used more if it can also be extracted and shared. Putting barriers in the way of sharing data will drive usage of the data (and the tool) lower.
- Metadata is as important as data. In mobile scenarios, the need to search and isolate information with a smaller user-interface surface area and fewer “keystrokes” means that tools for organization become even more important. Metadata implicit in the data (location, author, information extracted from a directory of people) will become increasingly important to mobile usage scenarios (a minimal search sketch follows this checklist).
- Files move from something you manage to something you use when needed. Files (and by corollary mailboxes) will simply become tools and not obsessions. We’re all seeing the advances in unlimited storage along with accurate search change the way we use mailboxes. The same will happen with files. In addition, the isolation and contract-based sharing that defines sealed platforms will alter the semantic level at which we deal with information. The days of spending countless hours creating and managing hierarchies and physical storage structures are over—unlimited storage, device replication, and search make for far better alternatives.
- Identity is a choice. Use of services, particularly consumer facing services, requires flexibility in identity. Being able to use company credentials and/or company sign-on should be a choice but not a requirement. This is especially true when considering use of tools that enable cross-organization collaboration. Inviting people to participate in the process should be as simple as sending them mail today.
- User experience has a memory and is aware and predictive. People expect their interactions with services to be smart—to remember choices, learn preferences, and predict what comes next. As an example, location-based services are not restricted to just maps or specific services, but broadly to all mobile interactions where the value of location can improve the overall experience.
- Telemetry is essential / privacy redefined. Usage is what drives incremental product improvements along with the ability to deliver a continuously improving product/service. This usage will be measured by anonymous, private, opt-in telemetry. In addition, all of our experiences will improve because the experience will be tailored to our usage. This implies a new level of trust with regard to the vendors we all use. Privacy will no doubt undergo (or already has undergone) definitional changes as we become either comfortable or informed with respect to the opportunities for better products.
- Participation is a feature. Nearly every service benefits from participation by those relevant to the work at hand. New tools will not just enable, but encourage, collaboration and communication in real time, connected to the work products. Working in one place (a document editor) and participating in another (an email inbox) has generally been suboptimal, and now we have alternatives. Participation is a feature of creating a work product, and ideally a seamless one.
- Business communication becomes indistinguishable from social. The history of business communication having a distinct protocol from social goes back at least to learning the difference between a business letter and a friendly letter in typing class. Today we use casual tools like SMS for business communication and while we will certainly be more respectful and clear with customers, clients, and superiors, the reality is the immediacy of tools that enable continuous productivity will also create a new set of norms for business communication. We will also see the ability to do business communication from any device at any time and social/personal communication on that same device drive a convergence of communication styles.
- Enterprise usage and control does not make things worse. To protect the intellectual property that defines the enterprise, and the contributions employees make to that IP, data will need to be managed. This is distinctly different from managing tools—the days of trying to prevent or manage information leaks by controlling the tools themselves are likely behind us. People have too many choices and will simply choose tools (often against policy and budgets) that provide for frictionless work with coworkers, partners, customers, and vendors. The new generation of tools will enable the protection and management of information without making the tools worse to use or causing people to seek available alternatives. The best tools will seamlessly integrate with enterprise identity while maintaining the consumerization attributes we all love.
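As a small illustration of the metadata point above, here is a sketch of retrieval as a query over metadata rather than a walk through a folder hierarchy. The document fields and values are hypothetical, invented only for the example.

```python
from dataclasses import dataclass, field

# Hypothetical document records: metadata travels with the data itself,
# so retrieval is a query, not a walk through folders.
@dataclass
class Doc:
    title: str
    author: str
    location: str
    tags: set = field(default_factory=set)

docs = [
    Doc("Q3 forecast", "alice", "Seattle", {"finance", "draft"}),
    Doc("Launch plan", "bob", "Austin", {"marketing"}),
    Doc("Usage review", "alice", "Austin", {"finance", "telemetry"}),
]

def search(docs, **criteria):
    """Return docs whose metadata matches every given criterion."""
    def matches(doc):
        for key, wanted in criteria.items():
            value = getattr(doc, key)
            if isinstance(value, set):
                if wanted not in value:
                    return False
            elif value != wanted:
                return False
        return True
    return [d for d in docs if matches(d)]

# One dataset, many dimensions of access, no single hierarchy to maintain.
print(search(docs, author="alice", tags="finance"))
```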
What comes next?
Over the coming months and years, debates will continue over whether the new platforms and newly created tools will replace, augment, or see only occasional use relative to the tools with which we are all familiar. Changes as significant as those we are experiencing right now happen two ways: at first gradually, and then quickly, to paraphrase Hemingway. Some might find little need or incentive to change. Others have already embraced the changes. Perhaps those right now on the cusp realize that the benefits of their new device and new apps are gradually taking over their most important work and information needs. All of these will happen. This makes for a healthy dialog.
It also makes for an amazing opportunity to transform how organizations make products, serve customers, and do the work of corporations. We’re on the verge of seeing an entire rewrite of the management canon of the 20th century. New ways of organizing, managing, working, and collaborating are being enabled by the tools of the continuous productivity paradigm shift.
Above all, it makes for an incredible opportunity for developers and those creating new products and services. We will all benefit from the innovations in technology that we will experience much sooner than we think.
Targeting multiple operating systems has been an industry goal or non-goal, depending on your perspective, since some of the earliest days of computing. For both app developers and platform builders, the evolution of their work follows typical patterns—patterns where their goals might be aligned or manageable in the short term but become increasingly divergent over time.
While history does not always repeat itself, the ingredients for a repeat of cross-platform woes currently exist in the domain of mobile apps (mobile meaning apps developed for modern sealed-case platforms such as iOS, Android, Windows RT, Windows Phone, Blackberry, etc.). The network effects of platforms, and the “winner take all” state that many believe is reached (or is perhaps desirable), influence the behavior and outcome of cross-platform app development as well as platform development.
Today app developers generally write apps targeting several of the mobile platforms. If you look at the number of “sockets” over the past couple of years, there was an early dominance of iOS followed by large growth in Android. Several other platforms currently compete for the next round of attention. Based on apps in their respective app stores, these are the two leaders among the new platforms. App developers today seeking the greatest number of client sockets target at least iOS and Android, often simultaneously. It is too early to pick a winner.
Some would say that the role of cloud services or the browser makes app development less about the “client” socket. The data, however, suggests that customers prefer the interaction approach and integration capability of apps, and platform builders touting the size of their app stores certainly further evidences that perspective. Even the smallest amount of “dependency” (for customer or technical reasons) on the client’s unique capabilities can provide benefits or dramatically improve the quality of the overall experience.
In discussions I have had with entrepreneurs, it is clear the approach to cross-platform is shifting from “obviously we will do multiple platforms” to thinking about which platform comes first, second, or third, and how many to do at all. Chris Dixon recently had some thoughts about this in the context of modern app development in general (tablets and/or phones). I would agree that tablets drive a different type of app over time, simply because the scenarios can be quite different even with identically capable devices under the hood. The cross-platform question only gets more difficult if apps take on unique capabilities or user experiences for different sized screens, which is almost certainly the case.
The history of cross-platform development is fairly well-known by app developers.
The goal of an app developer is to acquire as many customers as possible or to have the highest quality engagement with a specific set of customers. In an environment where customers are all using one platform (by platform we mean the set of APIs, tools, and languages used to build an app) the choice for a developer is simple: target the platform APIs in a fully exploitive manner.
The goal of being the “best” app for the platform is a goal shared by both app and platform developers. The reason for this is that nearly any app will have app competitors and one approach to differentiation will be to be the app that is best on the platform—at the very least this will garner the attention of the platform builder and result in amplification of the marketing and outreach of a given app (for example, given n different banking apps, the one that is used in demonstrations or platform evangelism will be the one that touts the platform advantages).
Once developers are faced with two or more platforms to target, the discussion typically starts with attempting to measure the size of the customer base for each platform (hence the debate today about whether market share or revenue defines a more successful platform). New apps (at startups or established companies) will start with a dialog that, depending on time or resources, jumps through incredible hoops to attempt to model the platform dynamics: which customers use which platforms, the velocity of platform adoption, installed base, the likelihood of reaching different customers on each platform, geography of usage, and pretty much every variable imaginable. The goal is to attempt to define the market impact of either supporting multiple platforms or betting on one platform. Of course none of these can be known. Observer bias is inherent in the process, if only because this is all about forecasting a dynamic system based on the behavior of people. But basing a product plan on a rapidly evolving and hard to define “market share” metric is fraught with problems.
During this market-sizing debate, the development team is also looking at how challenging cross-platform support can be. While mostly objective, just as with the market-sizing studies, bias can sneak in. For example, if the developers’ skills align with one platform, or a platform makes certain architectural assumptions that the team views favorably, then different platform choices come to look easy or difficult.
Developers that are fluent in HTML might suggest that things be done in a browser or use a mobile browser solution. Even the business might like this approach because it leverages an existing effort or business model (serving ads for example). Some view the choices Facebook made for their mobile apps as being influenced by these variables.
As the dialog continues, developers will start to see the inherent engineering costs in trying to do a great job across multiple platforms. They will start to think about how hard it is to keep code bases in sync, where features will be easier or harder or appear first, or even just sim-shipping across platforms. Very quickly, developers will generally start to feel pulled in an impossible direction by having to be across multiple platforms, and to conclude that a long-term cross-platform strategy is just not viable.
The business view will generally continue to drive a view that the more sockets there are the better. Some apps are inherently going to drive the desire or need for cross-platform support. Anything that is about communications for example will generally argue for “going where the people are” or “our users don’t know the OS of their connected endpoints” and thus push for supporting multiple platforms. Apps that are offered as free front ends for services (online banking, buying tickets, or signing up for yoga class) will also feel pressures to be where the customers are and to be device agnostic. As you keep going through scenarios the business folks will become convinced that the only viable approach is to be on all the popular platforms.
That puts everyone in a very tense situation—everyone is feeling stressed about achieving success. Something has to give though.
We’ve all been there.
The industry has seen this cross-platform movie several times. It might not always be the same and each generation brings with it new challenges, new technologies, and potentially different approaches that could lead to different outcomes. Knowing the past is important.
Today’s cross-platform challenge can be viewed differently, primarily because of a few factors apparent when looking at the challenge from the perspective of an app developer / ISV:
- App Services. Much of the functionality of today’s apps resides on software-as-a-service infrastructure. The apps themselves might be viewed as fairly lightweight front ends to these services, at least for some classes of apps or some approaches to app building. This is especially true today, as most apps are still fairly “first generation”.
- Languages and tools. Today’s platforms are more self-contained in that the languages and tools are also part of the platform. In previous generations there were languages that could be used across different platforms (COBOL, C, C++) and standards for those languages, even if there were platform-specific language extensions. While there are ample opportunities for shared libraries of “engine” code on many of today’s platforms (see the sketch after this list), most modern platforms are designed around a heavy tilt in favor of one language, and those languages differ across platforms. Given the first point, it is fair to say that the bulk of the code (at least initially) on the device will be platform specific anyway.
- Integration. Much of what goes on in apps today is about integration with the platform. Integration has been increasing in each generation of platform shifts. For example, in the earliest days there was no cross-application sharing, then came the basics through files, then came clipboard sharing. Today sharing is implicit in nearly every app in some way.
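A sketch of the shared-“engine” idea from the languages-and-tools point: domain logic with no platform dependencies can be isolated in one module (or hosted behind a web service, per the app-services point above), while everything around it is rebuilt natively per platform. The function and figures here are illustrative assumptions, not a real app.

```python
# engine.py -- platform-agnostic domain logic, shareable as a library
# or hosted behind a service so every client hits the same code.

def monthly_payment(principal: float, annual_rate: float, months: int) -> float:
    """Domain calculation with no platform dependencies."""
    r = annual_rate / 12.0
    return principal * r / (1.0 - (1.0 + r) ** -months)

# Each platform front end wraps the same engine differently: the iOS
# app calls it from Swift, the Android app from Kotlin or a service,
# and the notifications, sharing, and settings integration around it
# is rewritten natively for each platform.
if __name__ == "__main__":
    print(f"{monthly_payment(250_000, 0.04, 360):.2f}")
```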
Even allowing for this new context, there is a cycle at work in how multiple, competing platforms evolve.
This is a cycle so you need to start somewhere.
Initially there is a critical mass around one platform. As far as modern platforms go when iOS was introduced it was (and remains) unique in platform and device attributes so mobile apps had one place to go and all the new activity was on that platform. This is a typical first-mover scenario in a new market.
Over time new platforms emerge (with slightly different characteristics) creating a period of time where cross-platform work is the norm. This period is supported by the fact that platforms are relatively new and are each building out the base infrastructure which tends to look similar across the new platforms.
There are solid technical reasons why cross-platform development seems to work in the early days of platform proliferation. When new platforms begin to emerge they are often taking similar approaches to “reinventing” what it means to be a platform. For example, when GUI interfaces first came about the basics of controls, menus, and windows were close enough that knowledge of one platform readily translated to other platforms. It was technically not too difficult to create mapping layers that allowed the same code to be used to target different platforms.
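Here is a minimal sketch of such a mapping layer, with two imaginary platform APIs behind one “meta API”. All names are hypothetical. The shim works while the platforms expose near-identical primitives; it is exactly what breaks down once they diverge.

```python
# Hypothetical mapping layer over two imaginary platform GUI APIs.
# Early on, both platforms expose near-identical primitives, so a
# thin shim suffices.

class PlatformA:
    def create_button(self, label):      # imaginary platform-A call
        print(f"[A] button: {label}")

class PlatformB:
    def make_push_button(self, text):    # imaginary platform-B call
        print(f"[B] push-button: {text}")

class Button:
    """The 'meta API' that app code targets on either platform."""
    def __init__(self, backend, label):
        if isinstance(backend, PlatformA):
            backend.create_button(label)
        else:
            backend.make_push_button(label)

# The same app code targets either platform -- until the APIs diverge
# and one platform grows capabilities with no counterpart on the other.
Button(PlatformA(), "Save")
Button(PlatformB(), "Save")
```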
During this phase of platform evolution the platforms are all relatively immature compared to each other. Each is focused on putting in place the plumbing that approaches platform design in this new shared view. In essence, the emerging platforms tend to look more similar than different. The early days of web browsers (which many believed were themselves platforms) followed this pattern. There was a degree of HTML that was readily shared and consistent across platforms. At least this was the case for a while.
During this time there is often a lot of re-learning that takes place. The problems solved in the previous generation of platforms become new again. New solutions to old problems arise, sometimes frustrating developers. But this “new growth” also brings with it a chance to revisit old assumptions and innovate in new ways, even if the problem space is the same.
Even with this early commonality, things can be a real challenge. For example, there is a real desire for applications to look and feel “native”. Sometimes this is about the placement of functionality, such as where settings are located. It could be about the look or style of graphical elements, or the way visual aspects of the platform are reflected in your app. It could also be about highly marketed features and how well your app integrates with them as evidence of supporting the platform.
Along the way things begin to change and the platforms diverge, for two reasons. First, once the plumbing common to multiple early platforms is in place, platform builders begin to express their unique points of view on how platform services and experiences should evolve. For example, Android is likely to focus on unique services and how the client interacts with and uses those services, while to most observers iOS has shown substantially more client-side innovation and first-party experiences. The resulting APIs exposed to developers start to diverge in capabilities, and new API surface areas no longer seem so common between platforms.
Second, competition begins to drive how innovation progresses. While the first mover might have one point of view, the second (or third) mover might take the same idea of a service or API but evolve it slightly differently. It might integrate with backends differently or it might have a very different architecture. Voice input/recognition, maps, and cloud storage are examples of APIs that are appearing on platforms, but the expression of those APIs and the capabilities they support are evolving in ways so different that there are no obvious mappings between them.
As the platforms diverge, developers start to make choices about which APIs to support on each platform, or even which platforms to target. With these choices come a few well-known challenges.
- Tools and languages. Initially the tools and languages might be different, but things seem manageable. In particular, developers look to put as much code as possible in common languages (“platform-agnostic code”) or implement code as a web service (independent of the client device). This is a great strategy, but it does not allow for the reality that a good deal of code (and differentiation) will serve as platform-specific user experience or integration functionality. Early on, tools are relatively immature and maybe even rudimentary, which makes it easier to build infrastructure around managing a cross-platform project. Over time the tools themselves will become more sophisticated and diverge in capabilities. New IDEs or tools will be required for the platforms in order to be competitive, and developers will gravitate to one toolset, leaving developers themselves less able to context-switch between platforms. At the very least, managing two diverging code bases using different tools becomes highly challenging, even if right now some devs think they have a handle on the situation.
- User interaction and design (assets). Early in platform evolution the basics of human interaction tend to be common, and the approaches to digital assets can be fairly straightforward. As device capabilities diverge (DPI, sensors, screen sizes), the ability for the user interaction to remain common also diverges. What works on one platform doesn’t seem right on another. Tablet-sized screens introduce a whole other level of divergence to this challenge. Alternate input mechanisms can really divide platform elements (voice, vision, sensors, touch metaphors).
- Platform integration. Integrating with a platform early on is usually fairly “trivial”. Perhaps there are a few places you put preferences or settings, or connect to some platform services such as internationalization or accessibility. As platforms evolve, where and how to integrate poses challenges for app developers. Notifications, settings, printing, storage, and more are all places where finding what is “common” between platforms will become increasingly difficult to impossible. The platform services for identity, payment, and even integration with third party services will become increasingly part of the platform API as well. When those APIs are used other benefits will accrue to developers and/or end-users of apps—and these APIs will be substantially different across platforms.
- More code in the service. The new platforms definitely emphasize code in services as a way to be insulated from platform changes. This is a perfect way to implement as much of your own domain as you can (see the storage sketch after this list). Keep in mind that the platforms themselves are evolving and growing, so you can expect services provided by the platform to become part of the core app API as well. Storage is a great example of this challenge. You might choose to implement storage on your own to avoid a platform dependency. Such an approach puts you in the storage business, though, and probably not very competitively from a feature or cost perspective. Using a third-party API can pose the same challenge as any cross-platform library. At the same time, the platforms evolve and will likely implement storage APIs, and those APIs will be rich and integrate with other services as well.
- Cross-platform libraries. One of the most common approaches developers attempt (and often provided by third parties as well) is to develop or use a library that abstracts away platform differences or claims to map a unique “meta API” to multiple platforms. These cross-platform libraries are conceptually attractive but practically unworkable over time. Again, early on this can work. Over time the platform divergence is real. There’s nothing you can do to make services that don’t exist on one platform magically appear on another, or to make APIs that are architecturally very different morph into similar architectures. Worse, as an app developer you end up relying on essentially a “shadow” OS provided by a team that has a fraction of the resources for updates, tooling, documentation, and so on, even if this team is your own dev team. As a counter-example, games commonly use engines across platforms, but they rely on a very narrow set of platform APIs and little integration. Nevertheless, there are those that believe this can be a path (as it is for games). It is important to keep in mind that the platforms are evolving rapidly, and so is the customer desire for well-integrated apps (not just apps that run).
- Multiple teams. Absent the ability to share app client code (because of differing languages), keeping multiple teams in sync on the same app is extremely challenging. Equally challenging is having one team time-slice: not only is that mentally inefficient, but maintaining up-to-date skills and knowledge for multiple platforms is challenging too. Even beyond the basics of keeping the feature set the same, there are problems to overcome. One example is just the timing of releases. It might be hard enough to keep features in sync and sim-ship, but imagine the demand spike for a new release of your app when a platform changes (and maybe even requires a change to your app). You are then in a position of needing a release for one platform. But if you are halfway done with new features for your app, you have a very tricky disclosure and/or code management challenge. These challenges are compounded non-linearly as the number of platforms increases.
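To illustrate the storage point above, here is a hedged sketch of hiding storage behind an interface. All class names are hypothetical; the tradeoff it shows is owning the storage business yourself versus waiting for the richer, integrated API the platform will eventually ship.

```python
from abc import ABC, abstractmethod

class Store(ABC):
    """The seam your app codes against, per platform."""
    @abstractmethod
    def put(self, key: str, blob: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class RollYourOwnStore(Store):
    """Your own service-backed storage: platform-independent, but now
    you own durability, cost, and every feature yourself."""
    def __init__(self):
        self._data = {}               # stands in for your backend service
    def put(self, key, blob):
        self._data[key] = blob
    def get(self, key):
        return self._data[key]

class PlatformStore(Store):
    """Wrapper for a (hypothetical) native platform storage API that
    arrives later, richer and integrated with identity and sync."""
    def put(self, key, blob):
        raise NotImplementedError("bind to the platform API here")
    def get(self, key):
        raise NotImplementedError("bind to the platform API here")

store: Store = RollYourOwnStore()     # swap implementations per platform
store.put("doc-1", b"draft")
print(store.get("doc-1"))
```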
These are a few potential challenges. Not every app will run into them, and some might not even be real challenges for a particular app domain. By and large, though, these are the sorts of things that have dogged developers working on cross-platform approaches across clients, servers, and more, over many generations.
The obvious question will continue to be debated: will there be a single platform winner or not? Will developers be able to pick a platform and reach their own business and product goals by focusing on one platform as a way of avoiding the issues associated with supporting multiple platforms?
The only thing we know for sure is that the APIs, tools, and approaches of different platforms will continue to evolve and diverge. Working across platforms will only get more difficult, not easier.
The new platforms moved “up the stack” and make it more difficult for developers to isolate themselves from platform changes. In the old days, developers could re-implement parts of the platform within the app and use that across platforms or even multiple apps. Developers could hook the system and customize its behavior as they saw fit. The more sealed nature of platforms (which delivers amazing benefits for end-users) makes it harder for developers to create their own experience and transport it across platforms. This isn’t new. In the DOS era, apps implemented their own printing subsystems and character-based user models, all of which got replaced by GUI APIs, to the advantage of developers willing to embrace the richer platforms and of customers who gained a new level of capabilities across apps.
The role of app stores and approval processes, the instant ability for the community to review apps, and the need to break through in the store will continue to drive the need for apps to be great on their chosen platforms.
Some will undoubtedly call for standards or some homogenization of platforms. POSIX in the OS world, Motif in the GUI world, and even HTML for browsers have all been attempts at this. It is a reasonable goal, given we all want our software investments to be on as many devices as possible (and this desire is nothing new). But is it reasonable to expect vendors to pour billions into R&D to support an intentional strategy of commoditization or a committee design? Vendors believe we’re just getting started in delivering innovation, and so slowing things down this way seems counter-intuitive at best.
Ultimately, the best apps are going to break through, and I think the best apps are going to be the ones that work with the platform, not in spite of it: apps that don’t duplicate code in the platform but work with the platform.
It means there are some interesting choices ahead for many players in these ecosystems.
# # # # #
In the software industry, legacy code is a phrase often used as a negative by engineers and pundits alike to describe the anchor around our collective necks that prevents software from moving forward in innovative ways. Perhaps the correlation between legacy and stagnation is not so obvious—consider that all code is legacy code as soon as it is used by customers and clouds alike.
Legacy code is everywhere. Every bit of software we use, whether in an app on a phone, in the cloud, or installed on our PC is legacy code. Every bit of that code is being managed by a team of people who need to do something with it: improve it, maintain it, age it out. The process of evolving code over time is much more challenging than it appears on the face of it. Much like urban planning, it is easy to declare there should be mass transit, a new bridge, or a new exit, but figuring out how to design and engineer a solution free of disruptions or worse is extremely challenging. While one might think software is not concrete and steel, it has a structural integrity well beyond the obvious.
One of the more interesting aspects of Lean Startup for me is the notion of building products quickly and then reworking/pivoting/redoing them as you learn more from early adopters. This works extremely well for small code and customer bases. Once you have a larger code base or paying customers, there are limits to the ability to rewrite code or change your product, unless the number of new target customers greatly exceeds the number of existing customers. There is a potential to slow or constrain innovation, or to reduce the ability to serve as a platform for innovation. So while being free of any code certainly removes any engineering constraint, few projects are free of existing code for very long.
We tend to think of legacy code in the context of large commercial systems with support lifecycles and compatibility. In practice, lifting the hood of any software project in use by customers will have engineers talking about parts of the system that are a combination of mission critical and very hard to work near. Every project has code that might be deemed too hot to handle, or even radioactive. That’s legacy code.
This post looks at why code becomes legacy so quickly, and at some patterns in how it does. There’s no simple choice as to how to move forward, but being deliberate and complete in how you do so turns out to be the most helpful. Like so many things, this product development challenge is highly dependent on context and goals. Regardless, the topic of legacy is far more complex and nuanced than it might appear.
One person’s trash is another’s treasure
Whether legacy code is part of our rich heritage to be brought forward or part of historical anomalies to be erased from usage is often in the eye of the beholder. The newer or more broadly used some software is the more likely we are to see a representation of all views. The rapid pace of change across the marketplace, tools and techniques (computer science), and customer usage/needs only increases the velocity code moves to achieve legacy status.
In today’s environment, it is routine to talk about how business software is where the bulk of legacy code exists because businesses are slow to change. The inability to change quickly might not reflect a lack of desire, but merely prudence. A desire to improve upon existing investments rather than start over might be viewed as appropriately conservative as much as it might be stubborn and sticking to the past.
Business software systems are the heart and soul of what differentiates one company’s offering from another. These are the treasures of a company. Think about the difference between airlines or banks as you experience them. Different companies can have substantially different software experiences, and yet all of them need to connect to enormously complex infrastructures. This infrastructure is a huge asset for the company and yet is also where changes need to happen. These systems were all created long before there was an idea of consumers directly accessing every aspect of the service. And yet with that access has come an increasing demand for even more features and more detailed access to the data and services we all know are there. We’re all quick to think of the software systems as trash when we can’t get the answer or service we want, when we want it, even though we know it is in there somewhere.
Businesses also run systems that are essential but don’t necessarily differentiate one business from another, or are just not customer-facing. Running systems internally for a company to create and share information, communicate, or just run the “plumbing” of a company (accounting, payroll) is an essential part of what makes a company a company. Defining, implementing, and maintaining these is exactly the same amount of work as for the customer-facing systems. These systems come with all the same burdens of security, operations, management, and more.
Only today, many of these seem to have off-the-shelf or cloud alternatives. Thus the choices made by a company to define the infrastructure of the company quickly become legacy when there appear to be so many alternatives entering the marketplace. To the company with a secure and manageable environment these systems are assets or even treasures. To the folks in a company “stuck” using something that seems more difficult or worse than something they can use on the web, these seem like crazy legacy systems, or maybe trash.
Companies, just as cities, need to adapt and change and move forward. There’s not an option to just keep running things as they are—you can’t grow or retain customers if your service doesn’t change but all the competitors around you do. So your treasure is also your legacy—everything that got you to where you are is also part of what needs to change.
Thinking about the systems consumers use quickly shows how much of the consumer world is burdened by existing software that fits this same mold—is the existing system trash or treasure? The answer is both and it just depends on who you ask or even how you ask.
Consumer systems today are primarily service-based. As such, the pace of change is substantially different from that of the old packaged-software world, since changes need only take place at the service end, without action by consumers. This rapid pace of change is almost always viewed as a positive, unless it isn’t.
The services we all use are amazing treasures once they become integral to our lives. Mail, social networking, and entertainment, as well as our banking and travel tools, are all treasures. They can make our lives easier and more fun. They are all amazing and complex software systems running at massive scale. To the companies that build and run these systems, they are the company treasures. They are the roads and infrastructure of a city.
If you want to start an uproar with a consumer service, just change the user interface a bit. One day your customers (users, people) sign on and there’s a who-moved-my-cheese moment. Unlike the packaged-software world, no choice was made and no time was set aside; rather, just when you needed to check your mail, update status, or read some news, everything is different. Generally, the more acute your experience is, the more wound up you get about the change. Unlike adding an extra button to an already crowded toolbar, a menu command at the end of a long menu, or just a new set of optional customizations, this in-your-face change is very rarely well-received.
Sometimes you don’t even need to change your service, but just say you’re going to shut it down and no longer offer it. Even if the service hasn’t changed in a long time or usage has not increased, all of a sudden that legacy system shows up as someone’s treasure. City planners trying to find new uses for a barely used public facility or to rezone a parking lot often face incredible resistance from a small but stable customer population, even if the resources could be better used for more people. That old abandoned building is declared an historic landmark, even if it goes unused. No matter how low the cost or how rich the provider, resources are finite.
The uproar that comes from changing consumer software represents customers clamoring for maintaining the legacy. When faced with a change, it is not uncommon to see legacy viewed as a heritage, and not the negatives usually associated with software legacy.
Often those most vocal about the topic have polarizing views on changes. Platforms might be fragmented and the desire is expressed to get everyone else to change their (browser, runtime, OS) to keep things modern and up to date—and this is expressed with extreme zest for change regardless of the cost to others. At the same time, the things that impact a group of influentials or early adopters are most assailed when they do change in ways that run counter to conventional wisdom.
Somewhere in this world where change and new are so highly valued, and same represents old and legacy, is a real product development challenge. There are choices to be made in product development about the acceptance and tolerance of change, the need to change, and the ability to change. These are questions without obvious answers. While “one person’s trash is another’s treasure” makes sense in the abstract, what are we to do when it comes to moving systems forward?
Let’s assume it is impossible to really say whether code is legacy to be replaced or rewritten or legacy to be preserved and cherished. We should stipulate this because it doesn’t really matter for two reasons:
- Assuming we’re not going to just shut down the system, it will change. Some people will like the change and others will not. One person’s treasure is another’s trash.
- Software engineering is a young and evolving field. Low-level architecture, user interaction, core technologies, tools, techniques, and even tastes will change, and change dramatically. What was once a treasured way to implement something will eventually become obsolete or plain dumb.
These two points define the notion that all existing code is legacy code. The job of product development is to figure out which existing code is a treasure and which is trash.
It is worth having a decision framework for what constitutes trash for your project. Part of every planning process should include a deliberate notion of what code is being treated as trash and what code is a treasure. The bigger the system, the more important it is to make sure everyone is on the same page in this regard. Inconsistencies in how change is handled can lead to frustrated or confused customers down the road.
Written with different assumptions
When a system is created, it is created with a whole host of assumptions. In fact, a huge base of assumptions is not even chosen deliberately at the start of a project. The programming language, the platform, and the basic architecture are all chosen rather quickly at the start of a project. It turns out these choices put the system on a trajectory that will consistently reinforce the assumptions behind them.
We’ve seen detailed write-ups of the iOS platform and the evolution of apps relative to screen attributes. On the one hand developers coding to iOS know the specifics of the platform and can “lock” that assumption—a treasure for everyone. Then characteristics of screens potentially change (ppi, aspect ratio, size) and the question becomes whether preserving the fixed point is “supporting legacy” or “holding back innovation”.
While that is a specific example, consider broader assumptions such as bandwidth, CPU v. GPU capability, or even memory. A historic example is how, for the first ten years of PC software, there was an extreme focus on reducing the amount of memory or disk storage used by software. Y2K itself was often blamed on people trying to save a few bits in memory or on disk. Structures were packed. Overlays were used. Data was stored in binary on disk.
Then one day 32 bits, virtual memory, and fast gigabyte disks became normal. For a short time there was a debate about sloppy software development (“why use 32 bits to represent 0-255?”), but by and large software developers were making different assumptions about what was the right starting point. Teams went through code systematically widening words, removing the complexity of the 16-bit address space, and so on.
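For illustration, Python’s struct module can show the kind of packing assumption involved: a record squeezed into single bytes (including a two-digit year, the classic Y2K assumption) versus the widened 32-bit layout that later became normal. The record layout itself is invented for the example.

```python
import struct

# 1980s-style packing: two-digit year in one byte, count in one byte.
# Saving bytes like this is precisely the assumption behind Y2K.
packed = struct.pack("BB", 99, 255)       # year '99, count 255
print(len(packed))                        # 2 bytes on disk

# The widened layout after the transition: full 32-bit fields.
widened = struct.pack("II", 1999, 255)    # "why use 32 bits for 0-255?"
print(len(widened))                       # 8 bytes, and room to grow

year, count = struct.unpack("II", widened)
assert year == 1999 and count == 255
```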
These changes came with a cost—it took time and effort to update applications for a new screen or revisit code for bit-packing assumptions. These seem easy and right in hindsight—these happen to be transparent to end-users. But to a broad audience these changes were work and the assumptions built into the code so innocently just became legacy.
It is easy for us to visualize changes in hardware driving these altered assumptions, but assumptions in the software environment are just as pervasive. Consider changes in interaction widgets (commands to toolbars to context-sensitive), in metaphors (desktop or panels), or even in assumptions about expected behavior (spell checking). The latter is interesting because the assumption that a local dictionary, improving over time and supporting local custom dictionaries, was state of the art has given way to the expectation that a web service is the best way to know how to spell something. That’s because you can assume connectivity and assume a rich backend.
When you start a new project, you might even take a step back and try to list all of the assumptions you’re making. Are you assuming screen size or aspect ratio, keyboard or touch, unlimited bandwidth, background processing, single user, credit cards, left-to-right typing, or more? It is worth noting that in the current climate of cross-platform development, the assumptions made on target platforms can differ quite a bit—what is easy or cheap on one platform might be impossible or costly on another. So your assumptions might be inherited from a target platform. The list of things one might assume at the start of a project is rather incredibly long, and each of those assumptions translates into a potential roadblock to evolving your system.
Evolved views of well-architected
Software engineering is one of the youngest engineering disciplines. The whole of the discipline spans barely a generation, particularly if you consider the microprocessor-based view of the field. As defined by platforms, the notion of what constitutes a well-architected system changes over time. This type of legacy challenge is one that influences engineers in terms of how they think about a project—this is the sort of evolution that makes it easy or difficult to deliver new features, but might not be visible to those using the system.
As an example, the evolution of where code should be executed in a system parallels the evolution of software engineering. From thin-client mainframes to rich-client tightly-coupled client/server to service-oriented architecture we see very different views of the most fundamental choice about where to put code. From modular to structured to object-oriented programming and more we see fundamentally different choices about how to structure code. From a focus on power, cores, and compute cycles to graphics, mobility, and battery life we see dramatic changes in what it means to be modern and well-architected.
The underlying architecture of a system affords developers a (far too) easy way to declare something legacy code to be reworked. We all know a system written in COBOL is legacy. We all know that a system requiring the installation of a stateful client application in order to be used needs to be replaced.
When and how to make these choices is much more complex. These systems are usually critical to the operations of a business and it is often entirely possible (or even easier) to continue to deliver functionality on the existing system rather than attempt to replace the system entirely.
One of the most eye-opening examples of this for me is the description of the software developed for the Space Shuttle, a long-term project with complexity beyond what could even be recreated (see Architecture of the space shuttle primary avionics software system). The state of the art in software had moved very far, but the risks, or impossibility, of a modern and current architecture outweighed the benefits. We love to say that not every project is the space shuttle, but if you’re building the accounts system for a bank, then that software is as critical to the bank as avionics are to the shuttle. Mission critical is not only an absolute (“lives at stake”) but also relative in terms of importance to the organization.
A very smart manager of mine once said, “given a choice, developers will always choose to rewrite the code that is there to make it better.” What he meant was that, taken from a pure engineering approach, developers would gladly rewrite a body of code to bring it up to modern levels. But the downside is multi-faceted. There’s an opportunity cost. There’s often an inability to fully understand the scope of the existing system. And software engineering experience says that some meaningful fraction of code changes, often cited around 10%, will yield regressions. Simply reworking code because the definition of well-architected changed is not always prudent. The flip side of being modern is sometimes second-system syndrome.
Changed notion of extensibility
All software systems with staying power have some notion of extensibility or a platform. While this could be as obvious as an API for system services, it could also be an add-in model, a wire protocol, or even file formats. Once your system introduces extensibility it becomes a platform. Someone, internal or external, will take advantage of your extensibility in ways you probably didn’t envision. You’ve got an instant legacy, but this legacy is now a dependency to external partners critical to your success.
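As a sketch of how quickly extensibility hardens into a contract, consider a minimal hypothetical add-in model; the moment one external add-in implements the interface, the method name and the shape of the data it receives are effectively frozen.

```python
# A minimal, hypothetical add-in model. The moment someone external
# implements AddIn, both the method names and the shape of 'item'
# become a contract you cannot easily change.

class AddIn:
    def on_item_added(self, item):
        """Called by the host whenever a new item appears."""
        raise NotImplementedError

class Host:
    def __init__(self):
        self.addins = []

    def register(self, addin):
        self.addins.append(addin)

    def add_item(self, item):
        # 'item' is a dict here; even its keys are now part of the platform.
        for addin in self.addins:
            addin.on_item_added(item)

# An 'external partner' extension, built in ways you may not have envisioned.
class AuditLogger(AddIn):
    def on_item_added(self, item):
        print("audit:", item["id"], item.get("author", "unknown"))

host = Host()
host.register(AuditLogger())
host.add_item({"id": 42, "author": "pat"})
```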
In fact, your efforts at delivering goodness are quickly transformed into someone else’s efforts. What was a feature to you can become a mission-critical dependency for your customer. This is almost always viewed as a big win; who doesn’t want people depending on their software in this way? In fact, getting people to bet their efforts on your extensibility was probably the goal. Success.
Until you want to change it. Then your attempts to move the platform forward are constrained by what you put in place in the first version. And often your first version was truly a first version: all the understanding you had of what people wanted to do, and what they would actually do, is now informed by real experience. While you can do tons of early testing and pre-release work, a true platform takes a long time before it becomes clear where efforts at tapping extensibility will be focused.
During this time you might even find that the availability of one bit of extensibility caused customers to look at other parts of your system and invent their own extensibility or even exploit the extensibility you provided in ways you did not intend.
In fact whole industries can spring up based on pushing the limits of your extensibility: browser toolbars, social network games, startup programs.
Elements of your software system that are “undocumented implementation” get used by many for legitimate purposes. Reverse-engineered file formats, wire protocols, or just hooking things at a low level all provide valuable functionality for data transfer, management, or even making systems accessible to users with special needs.
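Here is a hypothetical illustration of how an undocumented detail becomes load-bearing; the private field is invented, but the pattern of external code reaching past the public API is the same one behind reverse-engineered formats and low-level hooks.

```python
# Hypothetical example of an undocumented detail becoming a dependency.

class Document:
    def __init__(self, text):
        self._raw_buffer = text.encode("utf-8")   # private, undocumented

    def text(self):
        return self._raw_buffer.decode("utf-8")   # the supported API

# A third-party tool that bypasses the API, perhaps for a legitimate
# reason such as data transfer or an accessibility utility.
def export_bytes(doc):
    return doc._raw_buffer    # works today; breaks if storage changes

doc = Document("hello")
print(export_bytes(doc))
# If _raw_buffer later becomes compressed or remote, this 'good use'
# of the implementation breaks, and the backlash lands on you.
```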
Taking it a step further, extensibility itself (documented or implied) becomes the surface area to exploit for those wishing to do evil things to your system or to use your system as a vector for evil.
What was once a beautiful and useful treasure can quickly turn into trash or worse. Of course, if bad things are happening you can seek to remove the surface area your system exposes, and even then you can be surprised at the backlash that comes. A really interesting example is the “Melissa” virus back in 1999, which exploited the automation in Outlook. The reaction was to disable the automation, which broke a broad class of add-ins and called into question the very notion of extensibility and automation in email. We’ve seen similar dynamics with viral gaming in social networks, where the benefits are clear but once exploited the extensibility can quickly become a liability. Melissa was not considered a security hole at the time, but the notion of extensibility has since been redefined, and systems built on such extensibility now get viewed as legacy systems that need to be rethought.
Changed usage scenarios
While a system is being developed, scenarios and workflows define the overall experience. Even with the best possible foresight, it is well established that designers err at a high rate in predicting how a system will be used in the real world. Some of these errors are fairly gross, but many are more nuanced and depend on the context of usage. The more general-purpose a system is, the more likely its real-world usage will differ substantially from what it was designed to do. Conversely, the more task-oriented a system is, the more quickly the mistakes or sub-optimal choices become visible.
Usage quickly gets to the assumptions built into the system. List boxes designed to hold 100 names work well until everyone has 1,000 names in their lists. Systems designed for high-latency networks behave differently when everyone has broadband. And while your web site might be great on a 15” laptop, one day you might find more people accessing it from a mobile browser with touch. These represent the rug being pulled out from under your usage assumptions. Your implementation became legacy while people were still happily using it, simply because they used it differently than you assumed.
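The constant below is hypothetical, but a limit like it is buried somewhere in most systems; this is what an invalidated usage assumption looks like in code.

```python
# A hypothetical list box sized for the usage its designers expected.

MAX_VISIBLE_NAMES = 100   # fine when contact lists held dozens of names

def visible_names(names):
    # The original design never expected truncation to matter.
    return names[:MAX_VISIBLE_NAMES]

# Works as designed for the usage the system was built around...
print(len(visible_names([f"name{i}" for i in range(50)])))    # 50
# ...and silently drops 900 entries once everyone has 1000 names.
print(len(visible_names([f"name{i}" for i in range(1000)])))  # 100
```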
At the same time, your views evolve on where you might want to take the system or experience. You might see new input methods based on innovative technologies, new ways of organizing the functionality based on usage or an increase in feature scope, or whole new features that change the flow of the system. These step-function changes arise from your role as designer of a system, evolving it to new usage scenarios.
Your view, when designing the changes, is that you’re moving away from the legacy system. Your customers think of the existing system as treasure. You view your changes as the new treasure. Will your customers see them as treasure or trash?
In these cases the legacy is visible, and the changes immediately run the risk of alienating those using your system. Changes will be dissected and debated among the core users (even for an internal system; ask the finance team how they like the new invoicing system, for example). Among breadth users the change will be just that, a change. Is the change a lot better or just a lot different? Better in your eyes or in your customers’ eyes? And are all customers the same?
We’re all familiar with the uproar that happens when a user interface changes. From the version upgrades of DOS classics like dBase or 1-2-3 through the most recent changes to web-based email, search, or social networking, changing the user experience of an existing system to reflect new capabilities or usage is easily the most complex transformation that existing, aka legacy, code must endure.
If you waded through the above examples of what might turn existing code into legacy code, you might be wondering what in the world you can do. As you’ve come to expect from this blog, there’s no easy answer, because the dynamics of product development are complex and the choices depend on more variables than you can “compute”. Product development is a system of linear equations with more variables than equations.
The most courageous efforts of software professionals involve moving systems forward. While starting with a clean slate is often viewed as brave and creative, the reality is that it takes a ton of bravery and creativity to decide how to evolve a system. Even the newest web service quickly becomes an enormous challenge to change—the combination of engineering complexities and potential for choosing “wrong” are enough to overwhelm any engineer. Anyone can just keep something running, but keeping something running while moving it to new and broader uses defines the excitement of product development.
Once you have a software system in place with customers or users and you want to change some existing functionality, there are a few options to choose from.
- Remove code. Sometimes the legacy code can simply be removed because it represents functionality that should no longer be part of your system. Keeping in mind that almost no system has features that are totally unused, you’re going to run into speed bumps and resistance. While it is often easy to think of removing a feature, chances are a large system has architectural dependencies not just on the feature but on how it is implemented. Often the real cost of keeping an implementation around is much lower than the perceived benefit of removing it; make sure the local desire to have fewer old lines of code to worry about does not trump the global desire to maintain stability in the overall development process. On the other hand, keeping the old code around can carry a high cost, or be outright impossible: the code might not meet modern standards for privacy or security, and even if it is never executed it exposes surface area that could be, for example.
- Run side by side. The most common refrain for any user-interface change to existing code is to leave both implementations running and allow a compatibility mode or switch to return to the old way. Because leaving code around does not seem very costly, those on the outside of a project often view keeping the old code paths as relatively cheap. As easy as this sounds, the old code path still has operational complexities (in the case of a service) and/or test-matrix complexities that carry real costs, even if there is no runtime cost to those not using it (code not run doesn’t take up memory or drain power). The desire most web developers have to stop supporting older browsers is essentially this argument: keeping the existing code around is more trouble than it is worth. Side by side is almost never a practical engineering alternative. From a customer point of view it seems attractive, except the question inevitably becomes “how long can I keep running things the old way?” Something claimed to be a transition quickly turns into a permanent fixture; sometimes that temporary ramp the urban planners put in becomes pretty popular. There’s a fun Harvard Business School case on the design of the Office Ribbon ($) that folks might enjoy, since it tees up this very question. A minimal sketch of this compatibility-switch pattern follows this list.
- Rewrite underneath. When architectural assumptions change, one approach is to just replumb the system. Developers love this approach. It is also enormously difficult. Implicit in it is the belief that the rest of the system “above” will function properly in the face of a changed implementation underneath, or that there is an obvious match from one generation of plumbing to the next. While we all know good systems have abstractions and well-designed interfaces, those abstractions still depend on characteristics of the underlying architecture. An example is what happens when you take advantage of a great architecture like file I/O and then dramatically change the characteristics of the system by moving to SSDs. While you want everything to just be faster, the whole system in fact depended on the latency and responsiveness of storage that operated an order of magnitude slower. It just isn’t as simple as rewriting; the changes ripple throughout the system.
- Stage introduction. Given the complexities of both engineering a change and rolling it out to customers, a favored approach is the staged rollout, in which the changes are integrated over time through a series of more palatable steps. Perhaps architectural changes are done first, or perhaps some amount of existing functionality is maintained initially. Ironically, this brings us back to the implication that businesses are the ones slow to change and carrying the most legacy; in fact, businesses most often employ the staged rollout of system changes. This tends to be the most practical path: it has neither the drama of a disruptive change nor the apparent smoothness of a compatibility mode, and it does take longer.
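To make the side-by-side and staged options concrete, here is a minimal sketch of the compatibility-switch pattern, with hypothetical flag and function names; both the appeal and the long tail of cost are visible in a few lines.

```python
# A hypothetical compatibility switch. The appeal is obvious; the cost
# is that both code paths must now be tested, operated, and secured,
# and 'compatibility_mode' tends to outlive every transition plan.

def render_invoice_legacy(invoice):
    return f"INVOICE {invoice['id']}: {invoice['total']}"

def render_invoice_new(invoice):
    return f"Invoice #{invoice['id']}, total {invoice['total']:.2f}"

def render_invoice(invoice, compatibility_mode=False):
    # Staged rollout: flip the default per customer cohort over time,
    # rather than switching everyone at once.
    if compatibility_mode:
        return render_invoice_legacy(invoice)
    return render_invoice_new(invoice)

print(render_invoice({"id": 7, "total": 12.5}))
print(render_invoice({"id": 7, "total": 12.5}, compatibility_mode=True))
```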
Taking these as potential paths to manage transitions of existing code, one might get discouraged. It might even be that it seems like the only answer is to start over. When thinking through all the complexities of evolving a system, starting over, or rebooting, becomes appealing very quickly.
Dilemma of rebooting
Rebooting a system has a great appeal when faced with a complex system that is hard to manage, was architected for a different era, and is loaded with dated assumptions.
This is even more appealing when you consider that the disruption going on in the marketplace that is driving the need for a whole new approach is likely being led by a new competitor that has no existing customers or legacy. This challenge gets to the very heart of the innovator’s dilemma (or disruptive technologies). How can you respond when you’ve got a boat anchor of code?
Sometimes you can call this a treasure or an asset. Often you call them customers.
It is very easy to say you want to rewrite a system. The biggest challenge is figuring out whether you mean literally rewrite it or simply recast it. A rewrite implies that you will carry forward everything you previously had, somehow improved along the dimension driving the need to rework the system. This is impossibly hard; in fact, it is almost impossible to name a total rewrite that worked without major disruption, a big bet, and some sort of transition plan that was itself a major effort.
The dilemma in rewriting a system is the amount of work that goes into the transition. Most systems are not documented or characterized well enough to even know whether you have completely and satisfactorily rewritten them. The implications of releasing a system you believe is functionally equivalent, but which turns out not to be, are significant in terms of mismatched customer expectations. Even small parts of a system can be enormously complex to rewrite in the sense of bringing forward all existing functionality.
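One common hedge here, sketched below with hypothetical stand-in functions, is a characterization (sometimes “golden master”) test: record what the old system actually does across a sweep of inputs, and hold the rewrite to that record rather than to what anyone remembers the system doing.

```python
# A sketch of a characterization test for a rewrite. 'legacy_price' and
# 'rewritten_price' are hypothetical stand-ins for old and new systems.

def legacy_price(qty, unit):
    total = qty * unit
    if qty >= 10:
        total *= 0.9          # a discount nobody documented
    return round(total, 2)

def rewritten_price(qty, unit):
    return round(qty * unit * (0.9 if qty >= 10 else 1.0), 2)

# Record the legacy behavior across a broad sweep of inputs, then
# require the rewrite to match it exactly, surprises included.
cases = [(q, u) for q in range(1, 25) for u in (0.99, 10.0, 19.95)]
golden = {(q, u): legacy_price(q, u) for q, u in cases}

mismatches = [(q, u) for (q, u), want in golden.items()
              if rewritten_price(q, u) != want]
print("mismatches:", mismatches)   # empty means 'functionally equivalent'
```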
On the other hand, if you have a new product that recasts the old one, but along the lines of different assumptions or different characteristics then it is possible to set expectations correctly while you have time to complete the equivalent of a rewrite or while customers get used to what is missing. There are many challenges that come from implementing this approach as it is effectively a side-by-side implementation but for the entire product, not just part of the code.
Of course, an alternative is an entirely new product positioned to do different things well, even if it overlaps some of what the existing product does. Again, this simply restates the innovator’s dilemma argument; the only difference is that you employ it against your own system.
The biggest frustration software folks have with the “build a new system that doesn’t quite do everything the old one did” approach is the immediate realization of what is missing. From mail clients to word processors to development tools and more, anything that comes along entirely new and modern is immediately compared to the status quo. This is enormously frustrating because, as software people, we are of course familiar with what is missing, just as we’re familiar with finite time and resources. It is even more interesting when the comparison is made to a competitor who only does new things in a modern way. Solid state storage is fast, reliable, and more; yet how often was it described as merely expensive and low-capacity relative to 1TB spindle drives? Which storage are we using today on our phones, tablets, PCs, and even in the cloud? Cost came down and capacities increased.
It is also just as likely that features deemed missing in some comparison to the existing technology leader will prove less interesting as time goes by. Early laptops that lacked wired networking or RGB ports were viewed quite negatively. Today those just aren’t critical. It isn’t that networking or projection stopped being critical; they were recast in terms of implementation. Today we think of Wi-Fi or 4G, along with technologies for wireless screen sharing, rather than wires for connectivity. The underlying scenario didn’t change, just the radical transformation of how it gets done.
This leads to the reality that systems will converge. While you might think “oh we’ll never need that again” there’s a good chance that even a newly recast, or reimagined, view of a system will quickly need to pick up features and capabilities previously developed.
One person’s treasure is another’s trash.
# # # # #
A cool new app or service releases and immediately the debates begin over the magnitude of the offering. Is it a product, feature, or company? Within a short time, pundits declare something to be too small, too easy to copy, or too inconsequential to merit the startup label. It will be swallowed by one of the bigger platform players. This strikes me as missing some salient elements of product design, even if it is true sometimes. Context matters.
A new mail client for iOS was released this week. The first wave of buzz was pretty high and the reviews were very solid in our circles. Then a writer declared it would be over for them when “Google writes the few lines of code necessary to provide that functionality to my existing proprietary Gmail app”.
The debate over whether something is an app, a feature, or even a whole company goes back as far as most can remember. I remember having to buy a spelling checker separate from my word processor around the same time I had to buy graphing software separate from my spreadsheet. There were many examples before that and certainly many after.
The richer our software becomes, the higher the expectations for the set of features a product must have when first released, and for the depth of offerings a company must have behind its first product. These expectations rest on some pretty solid consumer notions of switching costs, barriers to entry, and so on. In other words, a new product should probably do more than an old product if it is to replace what is currently in use. Similarly, a new business should have to put in some effort to displace an existing product, if what is being replaced is worthwhile at all.
New products can be incremental improvements over existing products. When released, such a product battles it out within the context of the existing market (one the company is already competing within or maybe a new company with a clear idea of how to improve). A new product can also take an entirely new, revolutionary, view of a problem space and change the evolutionary path we all thought we were on. Most new products get started because folks believe that there is a new approach. New products can also be new companies or come from innovations within existing companies.
A new approach has many potential elements – new user experience, new factoring of the code (app v. browser), new view of scenarios. New products can also be additive—they can do things that existing products do, but add a new twist. This new twist can be an intellectually deep approach to solving a problem or it can be a clever solution. In all cases, a new product is almost always a combination of many previously existing technologies and approaches, brought together in a unique way.
Before writing off a product as a feature or a non-company, we should consider the reality that all new products are mostly new combinations of old features, and that most companies release a first product that bears some resemblance to something already in the market. The current market climate, the changing dynamics of consumers (or businesses), and ever-so-slightly-new approaches can make a huge difference.
What are some things worth considering before being so flip as to write something off the week it is released?
Forward-looking / disruptive. Many times a new product is ahead of its time. When released it is missing features relative to entrenched products, so to fully “see” the product you need to allow some time for it to mature. Our landscape is littered with products that came out deficient in remarkable ways, but whose feature combination plus trajectory more than made up for those deficiencies. This is the very definition of the disruptive innovation so many people are so quick to invoke.
Focus. Often new products (and companies) arise because the entrenched products are not as interested in or focused on a scenario, even if they do some work on it. Quite often an existing business has an agenda in which a given product is not the true focus, but some minimal needs drive offering something. Many people are quick to say that photo editing that removes red-eye, crops, and auto-adjusts is all most people need, and so it is natural for the device to offer this “90% case” of features. Yet app stores are filled with photo-editing products, and reviewers debate the merits of the filters, sharing, and advanced editing features in each. These whole companies are by definition building features that could easily be subsumed, yet they continue to thrive.
Price, place, promotion. It is natural for product reviews to ignore the reality of running a business and focus simply on the features. In practice, starting a company is about the full spectrum of offering a product: yes, the product itself, but also the price, place, and promotion of the traditional marketing mix. Innovation can take place in any of these, and that innovation could prove highly valued by consumers or create the basis for a partnership between a new company and an entrenched, larger one. How many companies distribute news, weather, or sports information, but do so uniquely because of business models, partnerships, or other non-product features? Incremental products often innovate in these elements just as much as in product features.
Moat. Warren Buffett has famously said, “I look for economic castles protected by unbreachable ‘moats’.” The importance of moats is not changed by app stores, mobile, or social, but these do offer the potential to create new kinds of moats, beyond the relatively simple view of product features or of large companies adding more code to their existing offerings. A new product might look a whole lot like an existing product, yet create a vibrant new community of consumers or attract developers to extend it through a creative platform, to name just two possible moats.
All of these and more can change the context in which a new product is released. What’s super interesting is how the context–consumer needs/desires, platforms, and so on–can change the view of the importance of any of these attributes over time. What seems important today might not be tomorrow, and vice versa. The elements that create moats or determine the viability of a product change over time, even if the features look pretty close to the same.
There is no doubt that the app that prompted this post has caught the interest of a lot of folks as hundreds of thousands are using it and at least that many are waiting. There’s also no doubt that building a new mail program is a challenge.
At the same time, the very existence of this effort, whether it is labeled a company, product, or feature, shows someone has the passion and desire to improve the state of the art. Someone (and their backers) saw an opportunity to take on the establishment, betting that the envelope was not being pushed and that the entrenched folks were simply not making the most of the opportunity they have.
It is more difficult than ever to release a new product to market. Expectations are high. Word spreads fast. Large players are numerous. Let’s all keep in mind that every product we currently use was once a new product that probably failed more than a few first looks.