Smartphones and tablets, along with apps connected to new cloud-computing platforms, are revolutionizing the workplace. We’re still early in this workplace transformation, and the tools so familiar to us will be around for quite some time. But the leaders, managers, and organizations that adopt new tools sooner will quickly see how tools can drive cultural change — developing products faster, with less bureaucracy and more focus on what’s important to the business.
If you’re trying to change how work is done, changing the tools and processes can be an eye-opening first step.
Many of the companies I work with are creating new productivity tools, and nearly every company starting out today adopts them as a matter of course. These companies run their businesses on new software-as-a-service tools. Their basic email and calendaring infrastructure is built on the tools of the consumerization of IT. Communication and work products flow between team members and partners through new tools designed from the ground up for sharing, collaboration, and mobility.
Some of the exciting new tools for productivity that you can use today include: Quip, Evernote, Box and Box Notes, Dropbox, Slack, Hackpad, Asana, Pixxa Perspective, Haiku Deck, and more below. This list is by no means exhaustive, and new tools are showing up all the time. Some tools take familiar paradigms and pivot them for touch and mobile. Others are hybrids of existing tools that take a new view on how things can be more efficient, streamlined, or attuned to modern scenarios. All are easily used via trials for small groups and teams, even within large companies.
Tools drive cultural change
Tools have a critical yet subtle impact on how work gets done. Tools can come to define the work, as much as make it more efficient. Early in the use of new tools there’s a huge spike in benefit, along with a temporary dip in productivity. Even with all the improvements, any tool can over time become a drag on productivity as the tool becomes the end, rather than the means to an end. This is just a natural evolution of systems and processes in organizations, and productivity tools are no exception. It is something to watch for as a team.
The spike comes from the new ways information is acquired, shared, created, analyzed and more. Back when the PC first entered the workplace, it was astounding to see the rapid improvements in basic things like preparing memos, making “slides,” or the ability to share information via email.
There’s a temporary dip in productivity as new individual and organizational muscles are formed and old tools and processes are replaced across the whole team. Everyone individually — and the team as a whole — feels a bit disrupted during this time. Things rapidly return to a “new normal,” and with well-chosen tools and thoughtfully designed processes, this is an improvement.
As processes mature or age, it is not uncommon for those very gains to become burdensome. When a new lane opens on a highway, traffic moves faster for a while, until more people discover the faster route, and then it feels like things are back where they started. Today’s most common tools and processes have reached a point where the productivity increases they once brought feel less like improvements and more like extra work that isn’t needed. All too often, the goals have long been lost, and the use of tools is on autopilot, with the reason behind the work simply “because we always did it that way.”
New tools are appearing that offer new ways to work. These new ways are not just different — this is not about fancier reports, doing the old stuff marginally faster, or bigger spreadsheets. Rather, these new tools are designed to solve problems faced by today’s mobile and continuous organization. These tools take advantage of paradigms native to phones and tablets. Data is stored in the cloud. Collaboration takes place in real time. Coordination of work is baked into the tools. Work can be accessed from a broad range of computing devices of all types. These tools all build on the modern SaaS model, so they are easy to get, work outside your firewall and come with the safety and security of cloud-native companies.
The cultural changes enabled by these tools are significant. While it is possible to use these tools “the same old way,” you’re likely to be disappointed. If you expect a tool built for collaborating on short-lived documents to have feature parity with a tool for crafting printed books, then you’re likely to feel like things are missing. If you’re looking to improve your organization’s effectiveness at communication, collaboration and information sharing, then you’re also going to want to change some of the assumptions about how your organization works. The fact that the new tools do some things worse and other things differently points to the disruptive innovation these products have the potential to bring — as the “Innovator’s Dilemma” famously describes, disruptive products often feel inferior when judged against entrenched products by existing criteria.
Overcoming traps and pitfalls
Based on seeing these tools in action and noticing how organizations can re-form around new ways of working, the following list compiles some of the most common pitfalls addressed by new tools. In other words, if you find yourself doing these things, it’s time to reconsider the tools and processes on your team, and try something new.
Some of these will seem outlandish when viewed through today’s context. As a person who worked on productivity tools for much of my career, I think back to the time when it was crazy to use a word processor for a college paper; or when I first got a job, and typing was something done by the “secretarial pool.” Even the use of email in the enterprise was at first ridiculed, and many managers had assistants who would print out email and then type dictated replies (no, really!). Things change slowly, and then all of a sudden there are new norms.
In our Harvard Business School class, “Digital Innovation,” we crafted a notion of “doing it wrong,” and spent a session looking at disruption in the tools of the workplace. In that spirit, you’re “doing it wrong” if you:
- Spend more time summarizing or formatting a document than worrying about the actual content. Time and time again, people over-invest in the production qualities of a work product, only to realize that all that work was wasted, as most people consume it on a phone or look for the summary. This might not be new, but it is fair to say that the feature sets and implementations of existing tools (both right for when they were created, I believe) definitely emphasize this type of activity.
- Aim to “complete” a document, and think your work is done when a document is done. The modern world of business and product development knows that you’re never done with a product, and that is certainly the case for documents that are steps along the way. Modern tools assume that documents continue to exist but fade in activity — the value is in getting the work out there to the cloud, and knowing that the document itself is rarely the end goal.
- Figure out something important with a long email thread, where the context can’t be shared and the backstory is lost. If you’re collaborating via email, you’re almost certainly losing important context, and not all the right folks are involved. A modern collaboration tool like Slack keeps everything relevant in the tool, accessible by everyone on the team from everywhere at any time, but with a full history and search.
- Delay doing things until someone can get on your calendar, or you’re stuck waiting on someone else’s calendar. The existence of shared calendaring created a world of matching free/busy time, which is great until two people agree to solve an important problem — two weeks from now. Modern communication tools allow for notifications, fast-paced exchange of ideas and an ability to keep things moving. Culturally, if you let a calendar become a bottleneck, you’re creating an opening for a competitor, or an opportunity for a customer or partner to remain unhappy. Don’t let calendaring become a work-prevention tool.
- Believe that important choices can be distilled down into a one-hour meeting. If there’s something important to keep moving on, then scheduling a meeting to “bring everyone together” is almost certainly going to result in more delays (in addition to the time to get the meeting going in the first place). The one-hour meeting for a challenging issue almost never results in a resolution, but always pushes out the solution. If you’re sharing information all along, and the right people know all that needs to be known, then the modern resolution is right there in front of you. Speaking as a person who almost always shunned meetings to avoid being a bottleneck, I think it’s worth considering that the age-old technique of having short and daily sync meetings doesn’t really address this challenge. Meetings themselves, one might argue, are increasingly questionable in a world of continuously connected teams.
- Bring dead trees and static numbers to the table, rather than live, onscreen data. Live data analysis was invented 20 years ago, but too many people still bring snapshots of old data to meetings, which then too often digress into analyzing the validity of the numbers or debating the slice/view of the data, further delaying action until there’s an update. Modern tools like Tidemark and Apptio provide real-time and mobile access to information. Meetings should use live data, and more importantly, the team should share access to live data so everyone is making choices with all the available information.
- Use the first 30 minutes of a meeting recreating and debating the prior context that got you to a meeting in the first place. All too often, when a meeting is scheduled far in advance, things change so much that by the time everyone is in the room, the first half of the hour (after connecting projectors, going through an enterprise log-on, etc.) is spent with everyone reminding each other and attempting to agree on the context and purpose of the gathering. Why not write out a list of issues in a collaborative document like Quip, and have folks share thoughts and data in real time to first understand the issue?
- Track what work needs to happen for a project using analog tools. Far too many projects are still tracked via paper and pen which aren’t shared, or on whiteboards with too little information, or in a spreadsheet mailed around over and over again. Asana is a simple example of an easy-to-use and modern tool that decreases (to zero) email flow, allows for everyone to contribute and align on what needs to be done, and to have a global view of what is left to do.
- Need to think about which computer or device your work is “on.” Cloud storage from Box, Dropbox, OneDrive and others makes it easy (and essential) to keep your documents in the cloud. You can edit, share, comment and track your documents from any device at any time. There’s no excuse for having a document stuck on a single computer, and certainly no excuse for risking the use of USB storage for important work.
- Use different tools to collaborate with partners than you use with fellow employees. Today’s teams are made up of vendors, contractors, partners and customers all working together. Cloud-based tools solve the problem of access and security in modern ways that treat everyone as equals in the collaboration process. There’s a huge opportunity to increase the effectiveness of work across the team by using one set of tools across organizational boundaries.
Many of these might seem far-fetched, and even heretical to some. From laptops to color printing to projectors in conference rooms to wireless networking to the Internet itself, each of those tools was introduced to skeptics who said the tools currently in use were “good enough,” and the new tools were slower, less efficient, more expensive, or just superfluous.
The teams that adopt new tools and adapt their way of working will be the most competitive and productive teams in an organization. Not every tool will work, and some will even fail. The best news is that today’s approach to consumerization makes trial easier and cheaper than at any other time.
If you’re caught in a rut, doing things the old way, the tools are out there to work in new ways and start to change the culture of your team.
–Steven Sinofsky @stevesi
This article originally appeared on <re/code>.
For me, 1984 was the year of Van Halen’s wonderful album 1984 and my second semester of college. It would also prove to be a time of enlightenment for me and computing. On this 30th anniversary of the Apple Macintosh, introduced on January 24, and the Super Bowl commercial that aired on January 22, I wanted to share my own story of the way the introduction of the Macintosh profoundly changed my path in life.
Perhaps a bit indulgent, but it seemed worth a little backstory. I think everyone from back then is feeling a bit of nostalgia over the anniversary of the commercial, the product, and what was created.
High School, pre-Macintosh
Like many Dungeons and Dragons players my age, my first exposure to post-Pong computing was an Atari 800 that my best friend was lucky enough to have (our high school was not one to have an Apple ][, which hadn’t really made it to suburban Orlando). While my friends were busy listening to the Talking Heads, Police, and B-52s, I was busy teaching myself to program on the Atari. Even though it had the 8K BASIC cartridge, it lacked tape storage, so every time I went over to use the computer I had to start over. Thinking about business at an early age (I suppose), I would continue to code and refine what I thought would be a useful program for our family business: the ability to compute sales tax on purchases from different states. Enter the total sale, then compute the sales tax by looking up the state’s rate in a table.
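The logic of that little program fits in a few lines of almost any language, which is part of why it was such an approachable first project. Here is a hypothetical modern sketch of the idea (the state abbreviations and rates are illustrative, not the actual rates from back then):

```python
# Look up a state's sales-tax rate in a table and apply it to the sale total.
# The rates below are made up for illustration.
RATES = {"FL": 0.05, "NY": 0.0425, "CA": 0.06}

def sales_tax(total, state):
    """Return the sales tax owed on `total` for `state`, rounded to cents."""
    return round(total * RATES[state], 2)

print(sales_tax(200.00, "FL"))  # prints 10.0
```

On the Atari, of course, the same table lived in BASIC `DATA` statements, and with no tape storage it had to be retyped every session.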
My father, an entrepreneur but hardly a technologist, was looking to buy a computer to “automate” our family business. In 1981, he characteristically dove head first into computing and bought an Osborne I. For a significant amount of money ($1,795, or about $4,600 today) we owned an 8-bit CPU, two 90K floppy drives, and all (five) of the business programs one could ever need.
I started to write a whole business suite (inventory, customers, orders) in BASIC, which is what my father had hoped I would conjure up (in between SATs and college prep). Well, that was a lot harder than I thought it would be (so were the SATs). Then I discovered dBase II and something called a “database,” which made little sense to me in the abstract (and would only come to mean something much later in my education). In a short time I was able to create a character-based system that would be used to run the family business.
To go to college I took a matching Osborne I with a 300-baud modem so I could do updates and bug fixes (darn that shipping company: they changed the rate on COD shipments, which I had hard-coded, right during midterms!).
College Fall Semester
I loaded up the Osborne I and my Royal typewriter/daisy wheel/parallel port “letter quality” printer and was off to sunny Ithaca.
Computer-savvy Cornell issued us our “BITNET electronic mail accounts”; mine was TGUJ@CORNELLA.EDU. Equal parts friendly, memorable, and useful, and no one knew what to do with them. The best part was that the email ID came printed on a punch card. As a user of an elite Osborne, I felt I had gone back in time when I had to log on to the mainframe from a VT100 terminal. The only time I ever really used TGUJ was to apply for a job with Computer Services.
I got a job working for the computer services group as a Student Terminal Operator (STO). I had two 4-hour shifts. One was in the main computer science “terminal room” in Upson Hall, featuring dozens of VT100 terminals. The other shift was Friday night (yes, you read that correctly) at the advanced “lab” featuring SGI graphics workstations, IBM PC XTs, an Apple Lisa, peripherals like punch card machines, and a 5-foot-tall high-speed printer. For the latter, I was responsible for changing the ribbon, a task that required me to put on a mask and plastic arm-length gloves.
It turned out that Friday night was all about people coming in to write papers on the few IBM/MS-DOS PCs using WordPerfect. These were among the few PCs available for general-purpose use. I spent most of the time dealing with graduate students writing dissertations. My primary job was keeping track of the keyboard templates that were absolutely required to use WordPerfect. This experience would later make me appreciate the Mac that much more.
In the computer science department I had a chance to work on a Xerox Star and Alto (see below), along with Sun workstations, a MicroVAX mini, and so on. The resources available were an incredible blessing to the curious. The computing world was a cacophony of tools and platforms, with the vast majority of campus not yet tapping into the power of computing, and those that did were using whatever was most readily accessible. Cornell was awash in a sea of different computing platforms, and to me that just seemed normal, like there being a lot of different types of cars. This was especially apparent from my vantage point in the computer facilities.
One experience with a new, top-secret, computer was about to change all that.
I ended up getting to use a new computer from an unidentified company. One night after my shift, a fellow STO dragged me back to Upson Hall and took me into a locked room in the basement. There I was able to see and use a new computer. It was a wooden box attached to a wall with an actual chain. It had a mouse, like the ones I had used on the Xerox and Sun workstations. It had a bitmap screen like a workstation. It had an “interface” like the Xerox. There was a menu bar across the top and a desktop of files and folders. It seemed small and much quieter than the dorm-refrigerator-sized units I was used to hearing.
What was really magical about it was that it had a really easy-to-use painting program that we all just loved. It had a “word processor.” It was much easier to use than the Xerox, which had special keys and a somewhat overloaded desktop metaphor. It crashed a lot, even in the short time we used it. It also started up pretty quickly. Most everything we did with it felt new and different compared to all the other computers we used.
The end of the semester and exams approached. The few times, and the couple of hours, I had to play with this computer were exciting. In the sea of computing options, it was definitely the most exciting thing I had experienced. Perhaps being chained to the wall added to the excitement, but there was something that really resonated with us. When I try to remember the specifics, I mostly recall an emotional buzz.
My computing world was filled with diversity and complexity, which left me unprepared for the way the world was going to change in just the next six weeks.
To think about Apple’s commercial, one really has to think about the context of the start of 1984. The Orwellian dialog was omnipresent. Of course, as freshmen in college we had just finished our obligatory compare-and-contrast of the dystopian messages in Animal Farm, Brave New World, and 1984, not to mention the Cold War as front-and-center dialog at every turn. The country emerging from recession gave us all a contrasting optimism.
At the same time, IBM was omnipresent. IBM was synonymous with computing. Sure, the Charlie Chaplin ads were great, but the image of computing for almost everyone was the IBM mainframe (CORNELLA was located out by the Ithaca airport). While IBM was almost literally the pillar of innovation (just a couple of years later, IBM scientists would spell “IBM” with xenon atoms), there was also a great deal of distrust given the tenor of the time. The thought of a globally dominant company, a computer company, was uncomfortable to those familiar with fellow Cornellian Kurt Vonnegut’s omnipresent RAMJAC.
Then the Apple commercial ran. It was truly mesmerizing (far more so to me than the Super Bowl itself). It took me about one second to stitch together all that was going on right before my eyes.
Apple was introducing a new computer.
It was going to be a lot different from the IBM PC.
The world was not going to be like 1984.
And most importantly, the computer I had just been playing with weeks earlier was, in fact, the Apple Macintosh.
I was so excited to head back to the terminal rooms and talk about this with my fellow STOs and to use the new Apple Macintosh.
Upon returning to the terminal room in Upson, I found that Macs had already started to replace the VT100s. First just a couple, and then over time terminal access moved to an emulation program on Macs (rumor had it that the Macs were actually cheaper than terminals!).
My Friday night shift was transformed. Several Macs were added to the lab. I had to institute a waiting list. Soon only the stalwarts were using the PCs. I started to see a whole new crowd on those lonely computer nights.
I saw seniors in Arts & Sciences preparing resumes and printing them on the ImageWriter (whose ribbon, I should note, was significantly easier to change, which I had to do quite often every night). Those in the Greek system came by for help making signs for parties. Students discovered their talent with MacPaint pixel art and FatBits. All over campus, signs changed overnight from misaligned stencils to ImageWriter printouts testing the limits of font faces per page.
I have to admit, however, I spent an inordinate amount of time attempting to recover documents that were lost to memory-corruption bugs in the original MacWrite. The STOs all developed a great troubleshooting script, and signs were posted with all sorts of guesses (no more than 4 fonts per document, keep documents under 5 pages, don’t use too many carriage returns). We anxiously awaited updates, and students would often wait in line to update their “MacWrite disks” when word spread of an update (hey, there was no Internet download).
In short order, Macintosh swept across campus. Cornell along with many schools was part of Apple’s genius campaign on campuses. While I still had my Osborne, I was using Macintosh more often than not.
The next couple of years saw an explosion of use of Macintosh across campus. The next incoming class saw many students purchasing a Mac at the start of college. Research funds were buying Macs. Everywhere you looked they were popping up on desks. There was even a dedicated store just off campus that sold and serviced Macs. People were changing their office furniture and layout to support using a mouse. Computer labs were being rearranged to support local printers and mice. The campus store started stocking floppy disks, which became a requirement for most every class.
Document creation had moved from typewriters and limited use of WordPerfect to near-ubiquitous use of MacWrite practically by final exams that spring. Later, Microsoft’s Mac Word, which proved far more robust, became the standard.
The Hotel School’s business students were using Microsoft Mac Excel almost immediately.
The Chemistry department made a wholesale switch to Macintosh. The software was a huge driver of this. It is hard to explain how difficult it was to prepare a chemistry journal article before Macintosh (the department employed a full-time molecular draftsman to prepare manuscripts). The introduction of ChemDraw was a turning point for publishing chemists (half my major was chemistry).
It was in the Chemistry department where I found a home for my fondness of Macintosh and an incredibly supportive faculty (especially Jon Clardy). The research group had a little of everything, including MS-DOS PCs with mice which were quite a novelty. There were also Macs with external hard drives.
I also had access to MacApp and the tools (LightSpeed Pascal) to write my own Mac software. Until then all my programming had been on PCs (and mainframes, and Unix). I had spent two summers as an intern (at Martin Marietta, the same company where dBase programmer Wayne Ratliff had worked!) hacking around MS-DOS, writing utilities to do things that were as easy as drag and drop on a Mac or that just worked with MacWrite and Mac Excel. As fun as learning K&R C and INT 21h was, the Macintosh was calling.
My first project was porting a giant Fortran program (Molecular Mechanics) to the Mac. Surprisingly, it worked (perhaps equally surprising today is that a Fortran compiler for the Mac existed). It cemented the lab’s view that the Macs could also be for work, not just document creation. Next up, I just started exploring the visualizations available on the Mac. Programming graphics was all new to me. Programming an object-oriented event loop seemed mysterious and indirect compared to INT 21h or stdio.
But within a few hacking sessions (fairly novel to the chemistry department) the whole thing came together. Unlike all of the previous systems I had used, the elegance of the Mac was special. I felt like the more I used it, the more it all made sense. When I would bury myself in Unix systems programming, it seemed more like a series of tricks you needed to know. Macintosh felt like a system. As I learned more, I felt like I was able to guess how new things would work. I felt like the bugs in my programs were my bugs, and not things I had misunderstood.
The proof of this was that through the spring semester of my senior year I was able to write a program that visualized the periodic table of the elements using dozens of different variables. It was a way to explore the periodicity of the elements. I wrote routines for an X-Y plot, bar charts, and text tables, and the pièce de résistance was a 2.5-dimensional perspective of the periodic table showing a single property (commonly used to illustrate the periodic nature of electron affinity). I had to ask a lot of friends who were taking computer graphics on SGIs for help! Still, not only had I been able to program another new OS (by then my 5th or 6th), but I was able to program a graphical user interface for the first time.
MacMendeleev was born.
The geek in all of us has that special moment when at once you feel empowered and marvel at a system. That day in the spring of 1987 when I rendered a perspective drawing from my own code on a system that I had seen go from a chained down plywood box to ubiquity across campus was magical. Even my final report for the project was, to me, a work of art.
It wasn’t just the programming that was possible. It wasn’t just the elegance and learnability of the system. It wasn’t even the ubiquity that the Macintosh achieved on campus. It was all of those. Most of all it represented a tool that allowed me to realize some of my own potential. I was awful at Chemistry. Yet with Macintosh I was able to contribute to the department and probably showed a professor or two that in spite of my lack of actual chemistry aptitude I could do something (and dang, my lab reports looked amazing!). I was, arguably, able to learn some chemistry.
I achieved with Macintosh what became one of the most important building blocks in my education.
I’m forever thankful for the empowerment that came from using a “bicycle of the mind”.
What came next
Graduate school diverged in terms of computing. We used DEC VMS, though Smalltalk was our research platform. So much of the elegance of the Macintosh OS (and MacApp, and the Lisa before it) became much clearer to me as I studied the nuances of object-oriented programming.
I used my Macintosh II to write papers, make diagrams, and remote into the microVAX at my desk. I also used Macintosh to create a resume for Microsoft with a copy of Microsoft Word I won at an ACM conference for my work on MacMendeleev.
When I made it to Microsoft, I found that a great many people shared the same experience. I met folks who worked on Mac Excel and had also had Macs in boxes chained to tables. I met folks who wrote some of those Macintosh programs I used in college. So many of the folks in the “Apps” team I was hired into that year grew up on that unique mixture of Mac and Unix (Microsoft used Xenix back then). We all became more than converts to MS-DOS and Windows (3.0 was being developed when I landed at Microsoft).
There’s no doubt our collective experiences contribute to the products we each work on. Wikipedia even documents the influence of MacApp on MFC (my first product contribution), which was by design (and also by design was where not to be influenced). It is wonderful to think that through tools like MFC and Visual Basic along with ubiquitous computing, Windows brought to so many young programmers that same feeling of mastery and empowerment that I felt when I first used Macintosh.
Fast-forwarding, I can’t help but think about today’s college students, who grew up hacking the web and have recently been exposed as programmers to mobile platforms. The web to them is like the Atari was to me — programmable, understandable, and fun. The ability to take your ideas, connect them to the Internet, touch your creation, and make your own experience must feel the way building a Macintosh program from scratch felt to me. The unique combination of mastery of the system, elegance of design, and empowerment is what separates a technology from a movement.
For me, Macintosh was an early contributor to my learning, skills, and ultimately my self-confidence. Macintosh certainly changed my professional path in life. For sure, 1984 was not at all like 1984 for me.
Yes, of course I’m a PC (and definitely a Surface). Nothing contributed more to my professional life than the PC!
PS: How far have we come? Check out this Computer Chronicles from 1985 where the new Macintosh is discussed.
I love visiting Tokyo and have been lucky enough to visit dozens of times over many years. The consumer electronics industry has certainly had ups and downs recently, but a constant has been the leading-edge consumer and business adoption of new technologies. From PCs in the workplace to broadband at home to smartphones (a subject of many humorous team meetings back pre-bubble, when I clearly didn’t get it and was content with the magic of my BB 850!), Japan has always had a leading adoption curve, even when not necessarily producing the products used globally.
This visit was about the University of Tokyo and meeting with some entrepreneurs. That, however, doesn’t stop me from spending time observing what CE is being used in the workplace, on the subway, and most importantly what is for sale in the big stores such as Yodobashi, Bic, and Labi, and of course the traditional stalls of Akihabara. The rapid adoption, market size, and proximity to Korea and China often mean many of the products seen are not yet widely available in the US/Europe or are just making their way over. There’s a good chance that what is emphasized in the (really) big retail space is a leading indicator of what will show up at CES in January.
If you’re not familiar with Yodobashi, here’s the flagship store in Akihabara: over 250,000 sq ft and visited by tens of millions of people every year. I was once fortunate enough to visit the underground operations center, and as a kid who grew up in Orlando, it sure felt a lot like the secret underground tunnels of the Magic Kingdom!
With that in mind, here are 10 observations (all on a single page). This is not statistical in any way, just what caught my eye.
- Ishikawa Oku lab. The main focus of the trip was to visit the University of Tokyo. Included in that was a wonderful visit with Professor Ishikawa and his lab, which conducts research exploring parallel, high-speed, and real-time operations for sensory information processing. What is so amazing about this work is that it has been going on for 20 years, starting with very small and very slow digital sensors; now, with Moore’s Law applied to image capture along with parallel processing, amazing things are possible, as can be seen in some of the lab’s YouTube videos (with more than 5 million views): http://www.youtube.com/ishikawalab. More about the lab: http://www.k2.t.u-tokyo.ac.jp/index-e.html.
- 4K displays. Upon stepping off the escalator on the video floor, one is confronted with massive numbers of massive 4K displays. Every manufacturer touts 4K resolution along with its requisite upscaling tricks. The prices are still relatively high, but the selection is much broader than readily seen in the US. Last year 4K was new at CES, and it seems reasonable to suspect that this year's show floor will be all 4K. As a footnote relative to last year, 3D was downplayed significantly. In addition, there are numerous 4K cameras on sale now, more so than in the US.
- Digital still. The Fuji X and Leica rangefinder digital cameras are getting a lot of floorspace, and it was not uncommon to see tourists snapping photos with them (for example, in Meiji Garden). The point-and-shoot displays feature far fewer models, with an emphasis on attributes that differentiate them from phones, such as being waterproof or ruggedized. There's an element of nostalgia, in Japan in particular, driving renewed popularity of this form factor.
- Nikon Df. This is a "new" DSLR with the same sensor as the D800/D4, packaged in a retro form factor. The Nikon Df is definitely one for collectors, but there was a lot of excitement about its availability on November 21. It further emphasizes the nostalgia element of photography now that the form factor has so dramatically shifted to mobile phones.
- Apple presence in store. The Apple presence in the main stores was almost overwhelming. Much of the first floor, including the strategic main entry of Yodobashi, was occupied by the Apple store-within-a-store. There were large crowds, and as you often see with fans of products, people were shopping the very products they already owned and were holding in their hands. There has always been a fairly consistent appreciation of the Apple design aesthetic and overall hardware quality, but the widespread usage did not seem to follow. To be balanced, one would have to note the presence of the Nexus 5 in the stores, which was substantial and well-visited.
- PCs. The size of the PC display area, relative to mobile and iOS accessories, definitely increased over the past 7 months since I last visited. There were quite a large number of All-In-One designs (which have always been popular in Japan, yet somehow could never quite leap across the Pacific until Windows 8). There were a lot of very new Ultrabooks running Haswell chips from all the major vendors in the US, Japan, and China. Surface was prominently displayed.
- iPhone popularity. The ubiquity of the iPhone is new. Android had gained a very strong foothold over the national brands with the transition to nationwide LTE. Last year there was a large Android footprint through Samsung handsets, fairly visible on display and in use. While the Android footprint is clearly still there, the very fast rise of the iPhone, particularly the easily spotted iPhone 5s, was impressive. The vast expanse of iPhone accessories for sale nearly everywhere supports the opportunity. A driver for this is that the leading carrier (DoCoMo) now offers the iPhone. Returning from the trip, I saw this article speaking to the recent rise of iOS in Japan: iPhone 5S/C made up 76% of new smartphone sales in Japan this October.
- Samsung Galaxy J. Aside from the Nexus 5, the Android phone being pushed quite a bit was the Samsung Galaxy J, a model available only in Asia right now. It was quite nice: it sports a more iPhone-like ID (squared edges), is available in 5c-like colors, and has the latest Qualcomm processor, a 5″ HD display, and so on. It is still not running KitKat, of course. In the store, it felt better to me than a Galaxy S. Given the intricacies of the US market, I don't know if we'll see this one any time soon. The Galaxy Note can be seen "in the wild" quite often, and there seems to be quite a lot of interest based on which devices on display people would stop and interact with.
- Tablets. Tablets were omnipresent: signage in stores, menus in restaurants, in use on the subway, and in use wherever people were sitting down and eating, drinking, or talking. While in the US we are used to asking "where are all the Android tablets?", I saw a lot of 7″ Android tablets in use in all of those places. One wouldn't expect the low-priced import models to be visible, but there are many Japanese OEMs selling Android tablets that could be spotted. I also saw quite a few iPad minis in use, particularly among students on the trains.
- Digital video. As with compact digital cameras, there was a rather extreme reduction in the number of dedicated video recorders. That said, GoPro cameras had a lot of retail space, and accessories were well placed. For example, there were GoPros connected to all sorts of gear, showing off all sorts of accessories, at Tokyu Hands (the world's most amazing store, imho). Professional HD and UHD cameras from the likes of Red and Arri are on display in stores, which is cool to see. One of the neatest video products, available stateside but new to me, is the Sony DEV-50 binoculars/camera. It is pricey (US$2,000) but also pretty cool if you've got a need for it. It has reasonable sensors, supports 3D, and more. The only challenge is stability, which makes sense given the equivalent focal length, but image stabilization helps quite a bit in most circumstances.
There were many other exciting and interesting products one could see in this most wired and gadget-friendly city. One is always on the lookout for that unique gift during the holiday season, and I found my stocking-stuffer. Below you can see a very effective EMF-shielding baseball hat (note: only 90% effective). As a backup stocking-stuffer, all gloves purchased in Japan appear to be designed with resistive touch screens in mind :-)
PS: Here’s me with some super fun students in a class on Entrepreneurship and Innovation at the University of Tokyo.
I’ve been surprised at the “feedback” I receive when I talk about products that compete with those made by Microsoft. While I spent a lot of time there, one thing I learned was just how important it is to immerse yourself in competitive products to gain their perspective. It helps in so many ways (see http://blog.learningbyshipping.com/2013/01/14/learning-from-competition/).
Dave Winer (@davewiner) wrote a thoughtful post on How the Times reviews tech today. As I reflected on the post, it seemed worth considering why this challenge might be unique to tech and how it relates to the use of competitive products.
When considering creative works, it takes about two hours to see a film, slightly more for other productions, and perhaps a day or two for a book. After that, you can collect your thoughts and analysis and offer a review. Your collected experience in the art form is relatively easily recalled and put to good use in a thoughtful review.
When talking about technology products, the same approach might hold for casually used services or content-consumption services. In considering tools for "intellectual work", as Winer described (loved that phrase), things start to look significantly different.

Software tools for "intellectual work" are complex because they do complex things. In order to accomplish something, you need to first have something to accomplish, and then accomplish it. It is akin to reviewing the latest cameras for making films or the latest cookware for making food. While you can shoot a few frames or make a single meal, tools like these require many hours and different tasks. You shouldn't "try" them so much as "use" them for something that really matters. Only then can you collect your thoughts and analysis.

Because tools of depth offer many paths and ways to use them, there is an implicit "model" to how they are used. Models take time to adapt to. A cinematographer who uses film shouldn't judge a digital camera after a few test frames, and maybe not even after the first completed work.
The tools for writing, thinking, and creating that exist today present models for usage. Whether it is a smartphone, a tablet, a "word processor", or a photo editor, these devices and their accompanying software define models for usage that are sophisticated in how they are approached, the flow of control, and points of entry. They are hard to use because they do hard things.
The fact that many of those who write reviews rely on an existing set of tools, software, and devices for their intellectual pursuits implies that the conceptual models they know and love are baked into their perspective. It means tools that come along and present a new way of working or of seeing the technology space must first find a way to get a clean perspective.
This of course is not possible. One can't unlearn something. We all know that reviewers are professionals, and just as we expect that a journalist covering national policy debates must not let their bias show, tech reviewers must do the same. This implicit "model bias" is much more difficult to overcome because it simply takes longer to see and use a product than it does to learn about and understand (but not necessarily practice) a point of view in a policy debate. The tell-tale sign of "this review composed on the new…" is great, but we also know that right after the review the writer has the option of returning to their favorite way of working.
As an example, I recall the tremendous difficulty in the early days of graphical word processors. The incumbent, WordPerfect, was a character-based word processor that was the very definition of the category. The one feature we heard about relentlessly was called reveal codes, a way of seeing the formatting of the document as codes surrounding text (today we would think of that as HTML). Word for Windows was a WYSIWYG word processor, so you just formatted things directly: if text was bold on screen, then it was implicitly surrounded by <B> and </B> (not literally, but conceptually those codes).
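The two models can be thought of as two views of the same underlying document: an explicit-codes view (what reveal codes exposed) and a rendered view (what WYSIWYG showed). A toy sketch in Python, using HTML-style tags purely as the analogy above does, not WordPerfect's actual codes:

```python
# Toy illustration: one document, two presentations.
# "Reveal codes" exposes the formatting markup directly;
# WYSIWYG hides the markup and shows only the formatted result.

import re

document = "This is <B>bold</B> and this is not."

def reveal_codes(doc: str) -> str:
    # WordPerfect-style view: the codes themselves are the interface.
    return doc

def wysiwyg(doc: str) -> str:
    # Word-style view: formatting is applied, codes are hidden.
    # (Here we simply strip the tags; a real editor renders them.)
    return re.sub(r"</?B>", "", doc)

print(reveal_codes(document))  # This is <B>bold</B> and this is not.
print(wysiwyg(document))       # This is bold and this is not.
```

The point of the analogy is that neither view is "the document"; each is a model for working with it, and moving between models is what takes adjustment.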
Reviewers (and customers) time and time again felt Word needed reveal codes. That was the model for usage of a “word processor”. It was an uphill battle to move the overall usage of the product to a new level of abstraction. There were things that were more difficult in Word and many things much easier, but reveal codes was simply a model and not the answer to the challenges. The tech world is seeing this again with the rise of new productivity tools such as Quip, Box Notes, Evernote, and more. They don’t do the same things and they do many things differently. They have different models for usage.
At the business level, this is the chasm challenge for new products. At the reviewer level, it is a challenge because it simply takes time to either understand or appreciate a new product. Not every new product, or even most, successfully changes the rules of the predecessor. But some do. The initial reaction to the iPhone's lack of a keyboard, or its de-emphasis of voice calls, shows how quickly everyone jumped to the then-current definition of smartphone as the evaluation criteria.

Unfortunately, all of this is incompatible with the news cycle for the onslaught of new products, or the desire to have a collective judgment by the time the launch event is over (or even before it starts).

This is a difficult proposition. It starts to sound like blaming politicians for not discussing the issues, or blaming the networks for airing too much reality TV. Isn't it just as much about what people will click through as what reviewers would write about? Would anyone be interested in reading a Samsung review, or pulling up another iOS 7 review, after the 8 weeks of usage that the product deserves?
The focus on youth and new users as the baseline for reviews is simply because they do not have the "baggage" or "legacy" when it comes to appreciating a new product. The disconnect we see between excitement and usage exists because new-to-the-category users do not need to spend time mapping their old model; they just dive in and start to use something for what it was supposed to do. Youth simply represents a target audience of early adopters and the fastest path to crossing the chasm.
Here are a few things on my to-do list for how to evaluate a new product. The reason I use things for a long time is that, in a world with so many different models for how products work, it takes time to genuinely adjust to a new one.
- Use defaults. Quite often, when you first approach a product, you want to immediately customize it to make it seem like what you're familiar with. While many products offer customization, stick with the defaults as long as possible. Don't like where the browser button is? Leave it there anyway; there's almost always a reason. I find the changes in the default layout between iOS 6 and 7 interesting enough to live with, to see what the shift in priorities means for how you use the product.
- Don't fight the system. When using a new product, if something that used to seem easy now seems hard, take a deep breath and consider that it probably isn't the way the product was meant to do that thing. It might even mean that the thing you're trying to do isn't something you need to do with the new product at all. In DOS WordPerfect, people would use tables to create columns of text. But Word had a columns feature, and using a table for a newsletter layout was not the best way to do that. Sure, there needed to be Help to discover this, but then again someone had to figure that out in WordPerfect too.
- Don't jump to the complex task you already figured out in the old tool. Often, as a torture test, upon first look at a product you might try to do the thing you know is very difficult–that side-by-side chart, reducing overexposed highlights, or some complex formatting. Your natural tendency will be to use the same model and steps to figure this out. I got used to one complicated way of using levels to fix underexposed faces in photos and completely missed the "fill flash" command in a photo editor.
- Don't do things the way you are used to. Related is the tendency to use a new device the way you used the old one. For example, you might be used to going to the camera app, taking a picture, and then choosing email. But the new phone "prefers" that you start in email and insert an image (new or just taken) into a message. It might seem inconvenient (or even wrong) at first, but over time this difference will go away. This is just like learning a new gear-shift pattern, or perhaps the layout of a new grocery store.
- Don't assume the designers were dumb and missed the obvious. Often connected to trying to do something the way you are used to is the moment when something just seems impossible, and thus the designers obviously missed something, or worse. There is always a (good) chance something is poorly done or missing, but that shouldn't be your first conclusion.
But most of all, give it time. It often takes 4-8 weeks to really adjust to a new system, and the more expert you are, the more time it takes. I've been using Macs on and off since before the product was released to the public, but even today it has taken me the better part of six months to feel "native". It took me about 3 months of Android usage before I stopped thinking like an iPhone user. You might say I am wired too deeply, or you might conclude it really does take a long time to appreciate a design for what it is supposed to do. I chuckle at the things that used to frustrate me and think about how silly my concerns were at day 0, day 7, and even day 30: where the volume button was, the charger orientation, the way the PIN worked, going backwards, and more.
What happens when the tools and technologies we use every day become mainstream parts of the business world? What happens when we stop leading separate “consumer” and “professional” lives when it comes to technology stacks? The result is a dramatic change in the products we use at work and as a result an upending of the canon of management practices that define how work is done.
This paper argues that business must embrace the consumer world and see it not as different, less functional, or less enterprise-worthy, but as the new path forward for how people will use technology platforms, how businesses will organize and execute work, and how the roles of software and hardware will evolve in business. Our industry speaks volumes about the consumerization of IT, but maybe that is not going far enough given the incredible pace of innovation and depth of usage in the consumer software world. New tools are appearing that radically alter the traditional definitions of productivity and work. Businesses failing to embrace these changes will find their employees simply working around IT at levels we have not seen even during the earliest days of the PC. Too many enterprises are either flat-out resisting these shifts or hoping for a "transition"—disruption is taking place, not only to every business, but within every business.
Continuous productivity is an era that fosters a seamless integration between consumer and business platforms. Today, tools and platforms used broadly for our non-work activities are often used for work, but under the radar. The cloud-powered smartphone and tablet, as productivity tools, are transforming the world around us, along with the implied changes in how we work to be more mobile and more social. We are in a new era, a paradigm shift, marked by evolutionary discontinuity, a step-function break from the past. This constantly connected, social, and mobile generational shift is ushering in a time period on par with industrial production or the information society of the 20th century. Together our industry is shaping a new way to learn, work, and live with the power of software and mobile computing—an era of continuous productivity.
Continuous productivity manifests itself as an environment where the evolving tools and culture make it possible to innovate more and faster than ever, with significantly improved execution. Continuous productivity shifts our efforts from the start/stop world of episodic work and work products to one that builds on the technologies that start to answer what happens when:
- A generation of new employees has access to the collective knowledge of an entire profession and experts are easy to find and connect with.
- Collaboration takes place across organization and company boundaries with everyone connected by a social fiber that rises above the boundaries of institutions.
- Data, knowledge, analysis, and opinion are equally available to every member of a team in formats that are digital, sharable, and structured.
- People have the ability to time slice, context switch, and proactively deal with situations as they arise, shifting from a world of start/stop productivity and decision-making to one that is continuous.
Today our tools force us to hurry up and wait, then react at all hours to that email or notification of available data. Continuous productivity provides us a chance at a more balanced view of time management because we operate in a rhythm with tools to support that rhythm. Rather than feeling like you’re on call all the time waiting for progress or waiting on some person or event, you can simply be more effective as an individual, team, and organization because there are new tools and platforms that enable a new level of sanity.
Some might say this is predicting the present and that the world has already made this shift. In reality, the vast majority of organizations are facing challenges or even struggling right now with how the changes in the technology landscape will impact their efforts. What is going on is nothing short of a broad disruption—even winning organizations face an innovator’s dilemma in how to develop new products and services, organize their efforts, and communicate with customers, partners, and even within their own organizations. This disruption is driven by technology, and is not just about the products a company makes or services offered, but also about the very nature of companies.
The starting point for this revolution in the workplace is the socialplace we all experience each and every day.
We carry out our non-work (digital) lives on our mobile devices. We use global services like Facebook, Twitter, Gmail, and others to communicate. In many places in the world, local services such as Weibo, MixIt, mail.ru, and dozens of others are used routinely by well over a billion people collectively. Entertainment services from YouTube, Netflix to Spotify to Pandora and more dominate non-TV entertainment and dominate the Internet itself. Relatively new services such as Pinterest or Instagram enter the scene and are used deeply by tens of millions in relatively short times.
While almost all of these services are available on traditional laptop and desktop PCs, the incredible growth in usage from smartphones and tablets has come to represent not just the leading edge of the scenario, but the expected norm. Product design is done for these experiences first, if not exclusively. Most would say that designing for a modern OS first or exclusively is the expected way to start on a new software experience. The browser experience (on a small screen or desktop device) is the backup to a richer, more integrated, more fluid app experience.
In short, the socialplace we are all familiar with is part of the fabric of life in much of the world and only growing in importance. The generation growing up today will of course only know this world and what follows. Around the world, the economies undergoing their first information revolutions will do so with these technologies as the baseline.
Briefly, it is worth reflecting on and broadly characterizing some of the history of the workplace to help place these dramatic changes in historic context.
The industrial revolution that defined the first half of the 20th century marked the start of modern business, typified by high-volume, large-scale organizations. Mechanization created a culture of business derived from the capabilities and needs of the time. The essence of mechanization was the factory, which focused on ever-improving and repeatable output. Factories were owned by those infusing capital into the system, and the culture of owner, management, and labor grew out of this reality. Management itself was very much about hierarchy. There was a clear separation between labor and management, with management primarily focused on owners/ownership.
The information available to management was limited. Supply chains and even assembly lines themselves were operated with little telemetry or understanding of the flow of raw materials through to sales of products. Even great companies ultimately fell because they lacked the ability to gather insights across this full spectrum of work.
The problems created by the success of mechanized production were met with a solution—the introduction of the computer and the start of the information revolution. The mid-20th century would kick off a revolution in business marked by global and connected organizations. Knowledge created a new culture of business derived from the information-gathering and analysis capabilities of first the mainframe and then the PC.
The essence of knowledge was the people-centric office which focused on ever-improving analysis and decision-making to allocate capital, develop products and services, and coordinate the work across the globe. The modern organization model of a board of directors, executives, middle management, and employees grew out of these new capabilities. Management of these knowledge-centric organizations happened through an ever-increasing network of middle-managers. The definition of work changed and most employees were not directly involved in making things, but in analyzing, coordinating, or servicing the products and services a company delivered.
The information available to management grew exponentially. Middle-management grew to spend their time researching, tabulating, reporting, and reconciling the information sources available. Information spanned from quantitative to qualitative and the successful leaders were expert or well versed in not just navigating or validating information, but in using it to effectively influence the organization as a whole. Knowledge is power in this environment. Management took over the role of resource allocation from owners and focused on decision-making as the primary effort, using knowledge and the skills of middle management to inform those choices.
A symbol of knowledge productivity might be the meeting. Meetings came to dominate the culture of organizations: meetings to decide what to meet about, meetings to confirm that people were on the same page, meetings to follow up from other meetings, and so on. Management became very good at justifying meetings and the work that went into preparing, having, and following up from them. Power derived from holding meetings, creating follow-up items, and more. The work products of meetings—the pre-reading memos, the presentations, the supporting analytics—began to take on epic proportions. Staff organizations developed that shadowed the whole process.
The essence of these meetings was to execute on a strategy—a multi-year commitment to create value, defend against competition, and to execute. Much of the headquarters mindset of this era was devoted to strategic analysis and planning.
The very best companies became differentiated by their use of information technologies in now legendary ways such as to manage supply chain or deliver services to customers. Companies like Wal-Mart pioneered the use of technology to bring lower prices and better inventory management. Companies like the old MCI developed whole new products based entirely on the ability to write software to provide new ways of offering existing services.
Even with the broad availability of knowledge and information, companies still became trapped in the old ways of doing things, unable to adapt and change. The role of disruption as a function not just of technology development but of management decision-making showed the intricate relationship between the two. With this era of information technology came the notion of companies too big and too slow to react to changes in the marketplace, even with information right there in front of collective eyes.
The impact of software, as we finished the first decade of the 21st century, is more profound than even the most optimistic software people would have predicted. As the entrepreneur and venture capitalist Marc Andreessen wrote two years ago, “software is eating the world”. Software is no longer just about the internal workings of business or a way to analyze information and execute more efficiently, but has come to define what products a business develops, offers, and serves. Software is now the product, from cars to planes to entertainment to banking and more. Every product not only has a major software component but it is also viewed and evaluated through the role of software. Software is ultimately the product, or at least a substantial part of differentiation, for every product and service.
Today’s workplace: Continuous Productivity
Today’s workplace is as different as the office was from the factory.
Today’s organizations are either themselves mobile or serving customers that are mobile, or likely both. Mobility is everywhere we look—from apps for consumers to sales people in stores and the cash registers to plane tickets. With mobility comes an unprecedented degree of freedom and flexibility—freedom from locality, limited information, and the desktop computer.
The knowledge-based organization spent much energy connecting the dots between qualitative sampling and data sourced from what could be measured. Much effort went into trying to get more sources of data and to seek the exact right answer to important management decisions. Today's workplace has access to more data than ever before, but along with that came the understanding that data isn't right just because it came from a computer. Data today is telemetry based on usage across all aspects of the system, and it goes beyond sampling and surveys. The use of data today replaces algorithms seeking exact answers with heuristics informed by data, guessing the best answer using a moment's worth of statistical evidence. Today's answers change over time as more usage generates more data. We no longer spend countless hours debating causality because what is happening is right there before our eyes.
We see this all the time in the promotion of goods on commerce sites, in the use of keyword search and SEO, and even in the way search corrects spellings or maps use a vast array of data to narrow a potentially very large set of query results. Technologies like speech or vision have gone from trying to compute the exact answer to using real-time data to provide contextually relevant, and even more accurate, guesses.
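The shift from exact answers to data-informed guesses can be sketched with a toy spelling corrector: instead of rejecting an unknown word, pick the known word that usage data says is most likely. This is a minimal sketch of the idea with made-up counts, not any particular search engine's implementation:

```python
# Toy data-informed spelling suggestion: prefer the candidate that
# observed usage says is most common, rather than computing an
# "exact" answer or simply failing on unknown input.

# Hypothetical usage counts, standing in for real telemetry.
usage_counts = {"tokyo": 5000, "photo": 3000, "tablet": 2000, "phone": 8000}

def edits1(word: str) -> set:
    """All strings one edit (delete, replace, insert) away from word."""
    letters = "abcdefghijklmnopqrstuvwxyz"
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    replaces = {a + c + b[1:] for a, b in splits if b for c in letters}
    inserts = {a + c + b for a, b in splits for c in letters}
    return deletes | replaces | inserts

def suggest(word: str) -> str:
    if word in usage_counts:
        return word
    candidates = edits1(word) & usage_counts.keys()
    # The "best" answer is a guess, weighted by observed usage;
    # it improves as the counts accumulate more real-world data.
    return max(candidates, key=usage_counts.get) if candidates else word

print(suggest("tokio"))  # tokyo
```

The heuristic's answer changes as the counts change, which is exactly the point of the passage above: more usage generates more data, and the data refines the guess.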
The availability of these information sources is moving from a hierarchical access model of the past to a much more collaborative and sharing-first approach. Every member of an organization should have access to the raw “feeds” that could be material to their role. Teams become the focus of collaborative work, empowered by the data to inform their decisions. We see the increasing use of “crowds” and product usage telemetry able to guide improved service and products, based not on qualitative sampling plus “judgment” but on what amounts to a census of real-world usage.
Information technology is at the heart of all of these changes, just as it was in the knowledge era. The technologies are vastly different. The mainframe was about centralized information and control. The PC era empowered people to first take mainframe data and make better use of it and later to create new, but inherently local or workgroup specific information sources. Today’s cloud-based services serve entire organizations easily and can also span the globe, organizations, and devices. This is such a fundamental shift in the availability of information that it changes everything in how information is collected, shared, and put to use. It changes everything about the tools used to create, analyze, synthesize, and share information.
Management using yesterday's techniques can't seem to keep up with this world. Organizations are overwhelmed by the power of their customers armed with all this information (as when social networks create a backlash about an important decision, or when we visit a car dealer armed with local pricing information). Within organizations, managers are constantly trying to stay ahead of the curve. The "young" employees seem to know more about what is going on because of Twitter and Facebook, or just from being constantly connected. Even information about the company is no longer the sole domain of management: the press can uncover, or at least speculate about, the workings of a company, and employees see this speculation long before management communicates with them. Where people used to sit in important meetings and listen to important people guess about information, people now get real data from real sources in real time, while the meeting is taking place or even before.
This symbol of the knowledge era, the meeting, is under pressure because of the inefficiency of a meeting when compared to learning and communicating via the technology tools of today. Why wait for a meeting when everyone has the information required to move forward available on their smartphones? Why put all that work into preparing a perfect pitch for a meeting when the data is changing and is a guess anyway, likely to be further informed as the work progresses? Why slow down when competitors are speeding up?
There's a new role for management that builds on this new level of information and on employees skilled in using it. Much like those who grew up with PCs "natively" were quick to assume their usage in the workplace (some might remember the novelty of when managers first began to answer their own email), those who grow up with the socialplace are using it to do work, much to the chagrin of management.
Management must assume a new type of leadership that is focused on framing the outcome, the characteristics of decisions, and the culture of the organization and much less about specific decision-making or reviewing work. The role of workplace technology has evolved significantly from theory to practice as a result of these tools. The following table contrasts the way we work between the historic norms and continuous productivity.
| Then | Now, Continuous Productivity |
| --- | --- |
| Hierarchy, top down or middle out | Network, bottom up |
| Internal committees | Internal and external teams, crowds |
| Presenting packaged and produced ideas, documents | Sharing ideas and perspectives continuously, service |
| Data based on snapshots at intervals, viewed statically | Data always real-time, viewed dynamically |
| Exact answers | Approximation and iteration |
| More users | More usage |
Today’s workplace technology, theory
Modern IT departments, fresh off the wave of PC standardization and broad homogenization of the IT infrastructure, developed the tools and techniques to maintain, if not contain, the overall IT infrastructure.
A significant part of the effort involved managing the devices that access the network, primarily the PC. Management efforts ran the gamut from logon scripts, drive scanning, anti-virus software, standard (or only) software loads, imaging, two-factor authentication, and more. Motivating this have been the longstanding reliability and security problems of the connected laptop—the architecture’s openness, so responsible for the rise of the device, also created this fragility. We can see this expressed in two symbols of the challenges faced by IT: the corporate firewall and collaboration. Both of these technologies offer good theories but somewhat backfire in practice in today’s context.
With the rise of the Internet, the corporate firewall occupied a significant amount of IT effort. It also came to symbolize the barrier between employees and information resources. At some extremes, companies would routinely block known “time wasters” such as social networks and free email. Then over time, as the popularity of some services grew, the firewall would be selectively opened up for business purposes. YouTube and other streaming services are examples of consumer services that transitioned to an approved part of enterprise infrastructure given the value of the information available. While many companies might view Twitter as a time-wasting service, PR departments routinely use it to track news and customer service might use it to understand problems with products, so it too becomes an expected part of infrastructure. These “cracks” in the notion of enterprise v. consumer software started to appear.
Traditionally the meeting came to symbolize collaboration. The business meeting, which occupied so much of the knowledge era, has taken on new proportions with the spread of today’s technologies. Businesses have gone to great lengths to automate meetings and enhance them with services. In theory this works well and enables remote work and virtual teams across locations to collaborate. In practical use, for many users the implementation was burdensome and did not support the wide variety of devices or cross-organization scenarios required. The merger of meetings with the traditional tools of meetings (slides, analysis, memos) was also cumbersome, as sharing these across the spectrum of devices and tools was awkward. We are all familiar with the first 10 minutes of every meeting now turning into a technology time sink where people get connected in a variety of ways and then sync up with the “old tools” of meetings while using new tools in the background.
Today’s workplace technology, practice
In practice, the ideal view that IT worked to achieve has been rapidly circumvented by the low-friction, high availability of a wide variety of faster-to-use, easier-to-use, more flexible, and very low-cost tools that address problems in need of solutions. Even though this is somewhat of a repeat of the introduction of PCs in the early 1990s, this time around securing or locking down the usage of these services is far more challenging than preventing network access and isolating a device. The Internet works to make this so, by definition.
Today’s organizations face an onslaught of personally acquired tablets and smartphones that are becoming, or already are, the preferred devices for accessing information and communication tools. As anyone who uses a smartphone knows, accessing your inbox from your phone quickly becomes the preferred way to deal with the bulk of email. How often do people use their phones to quickly check mail even while in front of their PC (even when the PC is on and ready to use)? How much faster is it to triage email on a phone than on your PC?
These personal devices are seen in airports, hotels, and business centers around the world. The long battery life, fast startup time, relative freedom from maintenance, and of course the wide selection of new apps for a wide array of services make them very attractive.
There is an ongoing debate about “productivity” on tablets. In nearly all ways this debate was never a debate, but just a matter of time. While many look at existing scenarios to be replicated on a tablet as a measure of success of tablets at achieving “professional productivity”, another measure is how many professionals use their tablets for their jobs and leave their laptops at home or work. By that measure, most are quick to admit that tablets (and smartphones) are a smashing success. The idea that tablets are used only for web browsing and light email seems as quaint as claiming PCs cannot do the work of mainframes—a common refrain in the 1980s. In practice, far too many laptops have become literally desktops or hometops.
While tools such as AutoCAD, Creative Suite, or enterprise line-of-business applications will require PCs for many years to come, the definition of professional productivity will come to include all the tasks that can be accomplished on smartphones and tablets. The nature of work is changing, and so the reality of the tools in use is changing as well.
Perhaps the most pervasive services for work use are cloud-based storage products such as Dropbox, Hightail (YouSendIt), or Box. These products are acquired easily by consumers, have straightforward browser-based interfaces and apps on all devices, and most importantly solve real problems of modern information sharing. The basic scenario of sharing large files with customers or partners (or even fellow employees) across heterogeneous devices and networks is easily addressed by these tools. As a result, expensive and elaborate (and often much richer) enterprise infrastructure goes unused for this most basic of business needs—sharing files. Even the ubiquitous USB memory stick is used to get around the limitations of enterprise storage products, much to the chagrin of IT departments.
Tools beyond those approved for communication are routinely used by employees on their personal devices (except, of course, in regulated industries). Tools such as WhatsApp or WeChat have hundreds of millions of users. A quick look at Facebook or Twitter shows that for many of those actively engaged, the sharing of work information, especially news about products and companies, is a very real effort that goes beyond “the eggs I had for breakfast,” as social networks have sometimes been characterized. LinkedIn has become the go-to place for sales people learning about customers and partners and for recruiters seeking to hire (or headhunt), and it is increasingly becoming a primary source of editorial content about work and the workplace. Leading strategists are routinely read by hundreds of thousands of people on LinkedIn, and their views are shared across the networks employees maintain with fellow employees. It has become challenging for management to “compete” with the level and volume of discourse among employees.
The list of devices and services routinely used by workers at every level is endless. The reality appears to be that for many employees the number of hours of usage in front of approved enterprise apps on managed enterprise devices is on the decline, unless new tablets and phones have been approved. The consumerization of IT appears to be very real, just by anecdotally observing the devices in use on public transportation, airports, and hotels. Certainly the conversation among people in suits over what to bring on trips is real and rapidly tilting towards “tablet for trips”, if not already there.
The frustration people have with IT’s ability to deliver or approve the use of services is readily apparent, just as is the frustration IT has with people pushing to use insecure, unapproved, and hard-to-manage tools and devices. Whenever IT puts in a barrier, it is just a big rock in the information river that is an organization, and information just flows around it. Forward-looking IT is working diligently to get ahead of this challenge, but the models used to rein in control of PCs and servers on corporate premises will prove of limited utility.
A new approach is needed to deal with this reality.
Transition versus disruption
The biggest risk organizations face is thinking the transition to a new way of working will be just that, a transition, rather than a disruption. While individuals within an organization, particularly those in senior management, will seek to smoothly transition from one style of work to another, the bulk of employees will switch quickly. Interns, new hires, and employees looking for an edge see these changes as the new normal, or the only normal they’ve ever experienced. Our own experience with PCs is proof of how quickly change can take place.
In Only the Paranoid Survive, Andy Grove discussed breaking the news to employees of a new strategy at Intel only to find out that employees had long ago concluded the need for change—much to the surprise of management. The nature of a disruptive change in management is one in which management believes they are planning a smooth transition to new methods or technologies only to find out employees have already adopted them.
Today’s technology landscape is one undergoing a disruptive change in the enterprise—the shift to cloud-based services, social interaction, and mobility. There is no smooth transition that will take place. Businesses that believe people will gradually move from yesterday’s modalities of work to these new ways will be surprised to learn that people are already working in these new ways. Technologists seeking solutions that “combine the best of both worlds” or “technology bridge” solutions will only find themselves comfortably dipping their toes in the water, further solidifying an old approach while competitors race past them. The nature of disruptive technologies is the relentless all-or-nothing they impose as they charge forward.
While some might believe that continuing to focus on “the desktop” will enable a smoother transition to mobile (or consumer) while the rough edges are worked out or capabilities catch up to what we already have, this is precisely the innovator’s dilemma: hunkering down and hoping change will not arrive as quickly as it appears to be arriving. In fact, to solidify this point of view, many will point to the lack of a precipitous decline in, or the mission-critical nature of, traditional ways of working. The tail is very long, but innovation and competitive edge will not come from the tail. Too much focus on the tail risks being left behind, or at the very least distracts from where things are rapidly heading. Compatibility with existing systems has significant value, but it is unlikely to bring about more competitive offerings, better products, or step-function improvements in execution.
Culture of continuous productivity
The culture of continuous productivity enabled by new tools is literally a rewrite of the past 30 years of management doctrine. Hierarchy, top-down decision making, strategic plans, static competitors, single-sided markets, and more are almost quaint views in a world literally flattened by the presence of connectivity, mobility, and data. The impact of continuous productivity can be viewed through the organization, individuals and teams, and the role of data.
The social and mobile aspects of work finally gain the support of digital tools, and with those tools comes the realization of just how much of nearly all work is intrinsically social. The existence and paramount importance of “document creation tools” as the nature of work appear, in hindsight, to have been a slight detour of our collective focus. Tools can now work more like we like to work, rather than forcing us to structure our work to suit the tools. Every new generation of tools comes with promises of improvements, but we’ve already seen how the newest styles of work lead to improvements in our lives outside of work. Where it used to be novel for the person with a PC to use those tools to organize a sports team or school function, now we see the reverse: the tools for the rest of life are being used to improve our work.
This existence proof makes this revolution different. We already experience the dramatic improvements in our social and non-work “processes”. With the support and adoption of new tools, just as our non-work lives saw improvements we will see improvements in work.
The cultural changes encouraged or enabled by continuous productivity include:
- Innovate more and faster. The bottom line is that by compressing the time between meaningful interactions between members of a team, we will go from problem to solution faster. Whether solving a problem with an existing product or service or thinking up a new one, the continuous nature of communication speeds up the velocity and quality of work. We all experience the pace at which changes outside work take place compared to the slow pace of change within our workplaces.
- Flatten hierarchy. The difficulty in broad communication, the formality of digital tools, and restrictions on the flow of information all fit perfectly with a strict hierarchical model of teams. Managers “knew” more than others. Information flowed down. Management informed employees. Equal access to tools and information, a continuous multi-way dialog, and the ease of bringing together relevant parties regardless of place in the organization flatten the hierarchy. But more than that, they shine a light on the ineffectiveness and irrelevancy of a hierarchy as a command structure.
- Improve execution. Execution improves because members of teams have access to the interactions and data in real-time. Gone are the days of “game of telephone” where information needed to “cascade” through an organization only to be reinterpreted or even filtered by each level of an organization.
- Respond to changes using telemetry / data. With the advent of continuous real-world usage telemetry, the debate and dialog move from deciding what the problems to be solved might be to solving the problem. You don’t spend energy arguing over the problem, but debating the merits of various solutions.
- Strengthen organization and partnerships. Organizations that communicate openly and transparently leave much less room for politics and hidden agendas. The transparency afforded by tools might introduce some rough and tumble in the early days as new “norms” are created but over time the ability to collaborate will only improve given the shared context and information base everyone works from.
- Focus on the destination, not the journey. The real-time sharing of information forces organizations to operate in real-time. Problems are in the here and now and demand solutions in the present. The benefit of this “pressure” is that a focus on the internal systems, the steps along the way, or intermediate results is, out of necessity, de-emphasized.
Organization culture change
Continuously productive organizations look and feel different from traditional organizations. As a comparison, consider how different a reunion (college, family, etc.) is in the era of Facebook usage. When everyone gets together there is so much more that is known—the reunion starts from shared context and “intimacy”. Organizations should be just as effective, no matter how big or how geographically dispersed.
Effective organizations were previously defined by rhythms of weekly, monthly, and quarterly updates. These “episodic” connection points had high production values (and costs) and, ironically, relatively low retention and usage. Management liked this approach as it placed a high value on, and required, active management distinct from the work itself. Tools were designed to run these meetings or email blasts, but over time these were far too often over-produced and tended to be used more for backward-looking pseudo-accountability.
Looking ahead, continuously productive organizations will be characterized by the following:
- Execution-centric focus. Rather than indexing on the process of getting work done, the focus will shift dramatically to execution. The management doctrine of the late 20th century was about strategy. For decades we all knew that strategy took a short time to craft, but in practice it almost took on a life of its own. This often led to an ever-widening gap between strategy and execution, with execution left to those of less seniority. When everyone has the ability to know what can be known (which isn’t everything) and to know what needs to be done, execution reigns supreme. The opportunity to improve or invent will be everywhere, and even with finite resources available, the biggest failure of an organization will be a failure to act.
- Management framing context with teams deciding. Because information required discovery and flowed (deliberately) inefficiently, management tasked itself with deciding “things”. The entire process of meetings degenerated into a ritualized process to inform management to decide among options, while outside the meeting “everyone” always seemed to know what to do. The new role of management is to provide decision-making frameworks, not decisions. Decisions need to be made where there is the most information. Framing the problem to be solved out of the myriad of problems, and communicating that efficiently, is the new role of management.
- Outside is your friend. Previously the prevailing view was that inside companies there was more information than there was outside and often the outside was viewed as being poorly informed or incomplete. The debate over just how much wisdom resides in the crowd will continue and certainly what distinguishes companies with competitive products will be just how they navigate the crowd and simultaneously serve both articulated and unarticulated needs. For certain, the idea that the outside is an asset to the creation of value, not just the destination of value, is enabled by the tools and continuous flow of information.
- Employees see management participate and learn; everyone has the tools of management. It took practically 10 years from the introduction of the PC until management embraced it as a tool for everyday use. The revolution of social tools is totally different because today management already uses socialplace tools outside of work. Using Twitter for work is little different from using Facebook for family. Employees expect management to participate directly and personally, whether the tool is a public cloud service or a private/controlled service. The idea of having an assistant participate on behalf of a manager with a social tool is as archaic as printing out email and typing in handwritten replies. Management no longer has separate tools or a different (more complete) set of books for the business; rather, information about projects and teams becomes readily accessible.
- Individuals own devices, organizations develop and manage IP. PCs were first acquired by individual tech enthusiasts or leading edge managers and then later by organizations. Over time PCs became physical assets of organizations. As organizations focused more on locking down and managing those assets and as individuals more broadly had their own PCs, there was a decided shift to being able to just “use a computer” when needed. The ubiquity of mobile devices almost from the arrival of smartphones and certainly tablets, has placed these devices squarely in the hands of individuals. The tablet is mine. And because it is so convenient for the rest of my life and I value doing a good job at work, I’m more than happy to do work on it “for free”. In exchange, organizations are rapidly moving to tools and processes that more clearly identify the work products as organization IP not the devices. Cloud-based services become the repositories of IP and devices access that through managed credentials.
Individuals and teams work differently
The new tools and techniques come together to improve upon the way individuals and teams interact. Just as the first communication tools transformed business, the tools of mobile and continuous productivity change the way interactions happen between individuals and teams.
- Sense and respond. Organizations through the PC era were focused on planning and reacting cycles. The long lead time to plan, combined with the time to plan a reaction to events that were often delayed measurements themselves, characterized “normal”. New tools are much more real-time, and the information presented represents the whole of the information at work, not just samples and surveys. The way people work will focus much more on everyone being sensors for what is going on and responding in real time. Think of the difference between calling for a car or hailing a cab and using Uber or Lyft, from either a consumer perspective or from the business perspective of load-balancing cars and awareness of the assets at hand, as representative of sensing and responding rather than planning.
- Bottom up and network centric. The idea of management hierarchy or middle management as gatekeepers is being broken down by the presence of information and connectivity. The modern organization working to be the most productive will foster an environment of bottom up—that is, people closest to the work are empowered with information and tools to respond to changes in the environment. These “bottoms” of the organization will be highly networked with each other and connected to customers, partners, and even competitors. The “bandwidth” of this network is seemingly instant, facilitated by information sharing tools.
- Team and crowd spanning the internal and external. The barriers of an organization will take on less and less meaning when it comes to the networks created by employees. Nearly all businesses at scale are highly virtualized across vendors, partners, and customers. Collaboration on product development, product implementation, and product support take place spanning information networks as well as human networks. The “crowd” is no longer a mob characterized by comments on a blog post or web site, but can be structured and systematically tapped with rich demographic information to inform decisions and choices.
- Unstructured work rhythm. The highly structured approach to work that characterized the 20th century was created out of a necessity for gathering, analyzing, and presenting information for “costly” gatherings of time constrained people and expensive computing. With the pace of business and product change enabled by software, there is far less structure required in the overall work process. The rhythm of work is much more like routine social interactions and much less like daily, weekly, monthly staff meetings. Industries like news gathering have seen these radical transformations, as one example.
Data becomes pervasive (and big)
With software capabilities come ever-increasing data and information. While the 20th century enabled the collection of data and, to a large degree, the analysis of data to yield ever-improving decisions in business, the prevalence of continuous data again transforms business.
- Sharing data continuously. First and foremost, data will now be shared continuously and broadly within organizations. The days when reports were something for management, and management waited until the end of the week or month to disseminate filtered information, are over. Even though financial data has been relatively available, we’re now able to see how products are used, troubleshoot problems customers might be having, understand the impact of small changes, and try out alternative approaches. Modern organizations will provide tools that enable the continuous sharing of data through mobile-first apps that don’t require connectivity to corporate networks or systems chained to desktop resources.
- Always up to date. The implication of continuously sharing information means that everyone is always up to date. When having a discussion or meeting, the real world numbers can be pulled up right then and there in the hallway or meeting room. Members of teams don’t spend time figuring out if they agree on numbers, where they came from or when they were “pulled”. Rather the tools define the numbers people are looking at and the data in those tools is the one true set of facts.
- Yielding best statistical approach informed by telemetry (induction). The notion that there is a “right” answer is as antiquated as the printed report. We can now all admit that going to a meeting with a printed copy of “the numbers” is not worth the debate over the validity or timeframe of those numbers (“the meeting was rescheduled, now we have to reprint the slides”). Meetings now are informed by live data using tools such as Mixpanel or live reporting from Workday, Salesforce, and others. We all know now that “right” is the enemy of “close enough”, given that the datasets we can work with are truly based on a census and not surveys. This telemetry facilitates an inductive approach to decision-making.
- Valuing more usage. Because of the ability to truly understand the usage of products—movies watched, bank accounts used, limousines taken, rooms booked, products browsed and more—the value of having more people using products and services increases dramatically. Share matters more in this world because with share comes the best understanding of potential growth areas and opportunities to develop for new scenarios and new business approaches.
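The census-not-survey point above lends itself to a small sketch. The event names, fields, and helper function below are invented for illustration; the only idea shown is that with full telemetry a metric is computed over every recorded interaction, rather than extrapolated from a sampled report:

```python
from collections import Counter

# Hypothetical telemetry: every product interaction is recorded,
# not a sample. Field names here are made up for illustration.
events = [
    {"user": "u1", "action": "open_doc"},
    {"user": "u2", "action": "share_doc"},
    {"user": "u1", "action": "share_doc"},
    {"user": "u3", "action": "open_doc"},
]

def usage_summary(events):
    """Aggregate the full event stream: a census, not a survey."""
    actions = Counter(e["action"] for e in events)        # count every action
    active_users = len({e["user"] for e in events})       # distinct users seen
    return {"actions": dict(actions), "active_users": active_users}

print(usage_summary(events))
```

Because the aggregate covers the whole stream, anyone in the meeting can pull the same numbers and the debate shifts from "whose figures are right" to what to do about them.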
New generation of productivity tools, examples and checklist
Bringing together new technologies and new methods for management has implications that go beyond the obvious and immediate. We will all certainly be bringing our own devices to work, accessing and contributing to work from a variety of platforms, and seeing our work take place across organization boundaries with greater ease. We can look very specifically at how things will change across the tools we use, the way we communicate, how success is measured, and the structure of teams.
Tools will be quite different from those that grew up through the desktop PC era. At the highest level the implications about how tools are used are profound. New tools are being developed today—these are not “ports” of existing tools for mobile platforms, but ideas for new interpretations of tools or new combinations of technologies. In the classic definition of innovator’s dilemma, these new tools are less functional than the current state-of-the-art desktop tools. These new tools have features and capabilities that are either unavailable or suboptimal at an architectural level in today’s ubiquitous tools. It will be some time, if ever, before new tools have all the capabilities of existing tools. By now, this pattern of disruptive technologies is familiar (for example, digital cameras, online reading, online videos, digital music, etc.).
The user experience of this new generation of productivity tools takes on a number of attributes that contrast with existing tools, including:
- Continuous v. episodic. Historically work took place in peaks and valleys. Rough drafts created, then circulated, then distributed after much fanfare (and often watering down). The inability to stay in contact led to a rhythm that was based on high-cost meetings taking place at infrequent times, often requiring significant devotion of time to catching up. Continuously productive tools keep teams connected through the whole process of creation and sharing. This is not just the use of adjunct tools like email (and endless attachments) or change tracking used by a small number of specialists, but deep and instant collaboration, real-time editing, and a view that information is never perfect or done being assembled.
- Online and shared information. The old world of creating information was based on deliberate sharing at points in time. Heavyweight sharing of attachments led to a world where each of us became a “merge point” for work. We worked independently in silos, hoping not to step on each other, never sure where the true document of record might be or even who had permission to see a document. New tools are online all the time and sharable by default, so everyone is up to date all the time.
- Capture and continue. The episodic nature of work products, along with the general pace of organizations, created an environment where the “final” output carried significant meaning (to some). Yet how often do meetings take place where the presenter apologizes for data that is out of date relative to the image of a spreadsheet or org chart embedded in a presentation or memo? Working continuously means capturing information quickly and in real time, then moving on. There are very few end points or final documents. Working with customers and partners is a continuous process, and the information is continuous as well.
- Low startup costs. Implementing a new system used to be a time-consuming and elaborate process viewed as a multi-year investment and deployment project. Tools came to define the work process and, more critically, made it impossibly difficult to change the work process. New tools are experienced the same way we experience everything on the Internet—we visit a site or download an app and give it a try. The cost of starting up is a low-cost subscription or even a trial. Over time more features can be purchased (more controls, more depth), but the key is the very low cost to begin to try out a new way to work. Work needs change as market dynamics change, and the era of tools preventing change is over.
- Sharing inside and outside. We are all familiar with the challenges of sharing information beyond corporate boundaries. Management and IT are, rightfully, protective of assets. Individuals struggle with the basics of getting files through firewalls and email guards. The results are solutions today that few are happy with. Tools are rapidly evolving to use real identities to enable sharing when needed and cross-organization connections as desired. Failing to adopt these approaches, IT will be left watching assets leak out and workarounds continue unabated.
- Measured enterprise integration. The PC era came to be defined at first by empowerment, as leading-edge technology adopters brought PCs to the workplace. The mayhem this created was then controlled by IT, which became responsible for keeping PCs running, securing information and networks, and enforcing consistency in organizations for the sake of sharing and collaboration. Many might (perhaps wrongly) conclude that the consumerization wave defined here means IT has no role in these tasks. Rather, the new era is defined by a measured approach to IT control and integration. Tools for identity and device management will come to define how IT integrates and controls—customization or picking and choosing code is neither likely nor scalable across the plethora of devices and platforms that people will use to participate in work processes. The net is to control enterprise information flow, not enterprise information endpoints.
- Mobile first. As an example of the transition between the old and new, many see the ability to view email attachments on mobile devices as a way forward. The new tools, however, show this to be merely a bridge solution, as mobility will come to trump almost everything for a broad set of people. Deep design for architects, spreadsheets for analysts, or computation for engineers are examples that will likely remain stationary, or at least require unique computing capabilities, for some time. We will all likely be surprised by the pace at which even these "power" scenarios transition, at least in part, to mobile. The ability to make progress while close to the site, the client, or the problem will become a huge asset for those who approach their professions that way.
- Devices in many sizes. Until there is a radical transformation of user-machine interaction (input, display), it is likely that almost all of us will continue to routinely use devices of several sizes, and those sizes will tend to gravitate toward different scenarios (see http://blog.flurry.com/bid/99859/The-Who-What-and-When-of-iPhone-and-iPad-Usage), though commonality in the platforms will allow for overlap. This overlap will continue to be debated as "compromise" by some. It is certain we will all have a device that we carry and use almost all the time: the "phone". A larger-screen device will continue to better serve many scenarios, or simply provide a larger area upon which to operate. Some will find a small tablet meets their needs almost all of the time. Others will prefer a larger tablet, perhaps with a keyboard. We will likely see somewhat larger tablets arise as people look to use modern operating systems as full-time replacements for existing computing devices. The implication is that tools will be designed for different device sizes and input modalities.
It is worth considering a few examples of these tools. As an illustration, the following list groups tools into a few generalized categories of work processes. New tools are appearing almost every week, as the opportunity for innovation in the productivity space is at a unique inflection point. These examples are just a few tools that I've personally had a chance to experience; I suspect (and hope) that many will want to expand these categories and suggest additional tools (or use this as a springboard for a dialog!).
- Creation. Quip, Evernote, Paper, Haiku Deck, Lucidchart
- Storage and Sharing. Box, Dropbox, Hightail
- Reporting. Mixpanel, Quantifind
- Communications. WhatsApp, Anchor, Voxer
- Tracking. Asana, Todoist, Relaborate
- Training. Udacity, Thinkful, Codecademy
The architecture and implementation of continuous productivity tools will also be quite different from the architecture of existing tools. This starts by targeting a new generation of platforms, sealed-case platforms.
The PC era was defined by a level of openness in architecture that created the opportunity for innovation and creativity, leading to the amazing revolution we all benefit from today. An unintended side effect of that openness was the inherent unreliability over time, the security challenges, and the general futzing that have come to define the experience many lament. The new generation of sealed-case platforms (that is, hardware, software, and services with different points of openness relative to previous norms in computing) provides an experience that is more reliable over time, more secure and predictable, and less time-consuming to own and use. The tradeoff seems dramatic (or draconian) to those versed in old platforms, where tweaking and customizing came to dominate. In practice, moving up the stack, so to speak, will free up enormous amounts of IT budget and resources, allowing a much broader focus on the business. In addition, choice, flexibility, simplicity of use, ease of using multiple devices, and a relative lack of futzing will come to define this new computing experience for individuals.
The sealed-case platforms include iOS, Android, Chromebooks, Windows RT, and others. These platforms are defined by characteristics such as minimal APIs for manipulating the OS itself, APIs that enforce lower power utilization (defined background execution), cross-application security (sandboxing), relative assurances that apps do what they say they will do (permissions, app stores), defined semantics for exchanging data between applications, and controlled access to both user data and app state data. These platforms are all relatively new, and the "rules" for just how sealed a platform might be, and how this level of control will evolve, are still being written by vendors. In addition, the devices themselves embody the sealed-case ideal by restricting the attachment of peripherals and reducing the reliance on kernel-mode software written outside the OS itself. For many, this evolution is as controversial as the transition automobiles made from user-serviceable to electronically controlled engines, but the benefits to the humans using the devices are clear.
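As a toy illustration of the sandboxing and permissions model these platforms share (the class, resource names, and data here are hypothetical, not any real platform API), the key idea is that every data access is mediated by declared, user-granted permissions rather than apps touching the system directly:

```python
# Hypothetical sketch: a "sealed" platform mediates all app access to
# user data through permissions declared at install time.

class Platform:
    """Mediates app access to user data via declared permissions."""

    def __init__(self):
        self._granted = {}  # app name -> set of granted permissions
        self._user_data = {"location": (47.6, -122.3), "contacts": ["alice"]}

    def install(self, app_name, declared_permissions):
        # At install time the user sees, and grants, the declared list.
        self._granted[app_name] = set(declared_permissions)

    def read(self, app_name, resource):
        # Every access is checked; apps cannot reach around the platform.
        if resource not in self._granted.get(app_name, set()):
            raise PermissionError(f"{app_name} lacks '{resource}' permission")
        return self._user_data[resource]

platform = Platform()
platform.install("maps_app", ["location"])
print(platform.read("maps_app", "location"))  # granted, so allowed
try:
    platform.read("maps_app", "contacts")     # never declared, so denied
except PermissionError as e:
    print("denied:", e)
```

The design point is that the denial happens in the platform, not in the app's good behavior, which is what makes the "relative assurances" above enforceable.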
Building on the sealed case platform, a new generation of applications will exhibit a significant number of the following attributes at the architecture and implementation level. As with all transitions, debates will rage over the relative strength or priority of one or more attributes for an app or scenario (“is something truly cloud” or historically “is this a native GUI”). Over time, if history is any guide, the preferred tools will exhibit these and other attributes as a first or native priority, and de-prioritize the checklists that characterized the “best of” apps for the previous era.
The following is a checklist of attributes of tools for continuous productivity:
- Mobile first. Information will be accessed and actions will be performed mobile first for a vast majority of both employees and customers. Mobile first is about native apps, which is likely to create a set of choices for developers as they balance different platforms and different form factors.
- Cloud first. Information we create will be stored first in the cloud and, when needed (or possible), synced back to devices. The days of all of us focusing on file management and physical storage are over, replaced by essentially unlimited cloud storage. With cloud storage come multi-device access and instant collaboration that spans networks. Search becomes an integral part of the user experience, along with labels and metadata, rather than a physical hierarchy that presents only a single dimension. Export to broadly used interchange formats and printing remain critical archival steps, but not the primary way we share and collaborate.
- User experience is platform native or browser exploitive. Supporting mobile apps is a decision to fully use and integrate with a mobile platform. While using a browser can and will be a choice for some, even then it will become increasingly important to exploit the features unique to the browser. In all cases, use within a customer's chosen environment calls for the full range of support for that platform.
- Service is the product, product is the service. Whether for an internal IT offering or a consumer-facing one, there is no distinction between where a product ends and a continuously operated, continuously improving service begins. This means the operational view of a product is of paramount importance to the product itself, and it means that almost every physical product can be improved by a software service element.
- Tools are discrete, loosely coupled, limited surface area. The tools used will span platforms and form factors. When used this way, monolithic tools that require complex interactions will fall out of favor relative to tools more focused in their functionality. Doing a smaller set of things with focus and alacrity will provide more utility, especially when these tools can be easily connected through standard data types or intermediate services such as sharing, storage, and identity.
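A minimal sketch of what "loosely coupled through standard data types" can look like in practice (the function names and the tiny JSON shape are illustrative assumptions, not any particular product's format): two single-purpose tools that know nothing about each other's internals, connected only by a shared payload.

```python
import json

# Two hypothetical focused tools, coupled only through a shared JSON
# "task" format rather than through each other's implementations.

def capture_note(text):
    """A note-taking tool emits a standard payload, nothing more."""
    return json.dumps({"type": "task", "title": text, "done": False})

def track_task(payload):
    """A task tracker accepts any payload of the shared type."""
    task = json.loads(payload)
    if task.get("type") != "task":
        raise ValueError("unrecognized payload type")
    return f"tracking: {task['title']}"

print(track_task(capture_note("Draft the Q3 plan")))  # tracking: Draft the Q3 plan
```

Either tool can be replaced independently as long as the shared data type holds, which is the property that lets small, focused tools compose into a workflow.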
- Data contributed is data extractable. Data that you add to a service as an end user should be easily extracted for further use and sharing. A corollary is that data will be used more if it can also be extracted and shared. Putting barriers in the way of sharing data will drive usage of the data (and the tool) lower.
- Metadata is as important as data. In mobile scenarios the need to search and isolate information with a smaller user interface surface area and fewer “keystrokes” means that tools for organization become even more important. The use of metadata implicit in the data, from location to author to extracted information from a directory of people will become increasingly important to mobile usage scenarios.
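As a small sketch of how implicit metadata can power mobile search (the field names, documents, and index here are invented for illustration), a tool can derive author, location, and date from the data itself and let a user filter with one or two "keystrokes" instead of walking a folder hierarchy:

```python
from datetime import datetime, timezone

# Hypothetical example: index documents by metadata implicit in them
# (author, location, creation date) to support terse mobile search.

def extract_metadata(doc):
    return {
        "author": doc.get("author", "unknown"),
        "location": doc.get("geo"),  # e.g. tagged at creation time
        "year": datetime.fromtimestamp(doc["created"], tz=timezone.utc).year,
    }

def search(index, **filters):
    # Return ids of documents whose metadata matches every filter.
    return [doc_id for doc_id, meta in index.items()
            if all(meta.get(k) == v for k, v in filters.items())]

docs = {
    "memo-1": {"author": "pat", "geo": "SEA", "created": 1388534400},
    "memo-2": {"author": "sam", "geo": "NYC", "created": 1388534400},
}
index = {doc_id: extract_metadata(d) for doc_id, d in docs.items()}
print(search(index, author="pat"))  # -> ['memo-1']
```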
- Files move from something you manage to something you use when needed. Files (and, by corollary, mailboxes) will simply become tools, not obsessions. We're all seeing how unlimited storage and accurate search are changing the way we use mailboxes; the same will happen with files. In addition, the isolation and contract-based sharing that define sealed platforms will alter the semantic level at which we deal with information. The days of spending countless hours creating and managing hierarchies and physical storage structures are over; unlimited storage, device replication, and search make for far better alternatives.
- Identity is a choice. Use of services, particularly consumer facing services, requires flexibility in identity. Being able to use company credentials and/or company sign-on should be a choice but not a requirement. This is especially true when considering use of tools that enable cross-organization collaboration. Inviting people to participate in the process should be as simple as sending them mail today.
- User experience has a memory and is aware and predictive. People expect their interactions with services to be smart—to remember choices, learn preferences, and predict what comes next. As an example, location-based services are not restricted to just maps or specific services, but broadly to all mobile interactions where the value of location can improve the overall experience.
- Telemetry is essential / privacy redefined. Usage is what drives incremental product improvements along with the ability to deliver a continuously improving product/service. This usage will be measured by anonymous, private, opt-in telemetry. In addition, all of our experiences will improve because the experience will be tailored to our usage. This implies a new level of trust with regard to the vendors we all use. Privacy will no doubt undergo (or already has undergone) definitional changes as we become either comfortable or informed with respect to the opportunities for better products.
- Participation is a feature. Nearly every service benefits from participation by those relevant to the work at hand. New tools will not just enable but encourage collaboration and communication in real time, connected to the work products. Working in one place (a document editor) and participating in another (an email inbox) has generally been suboptimal, and now we have alternatives. Participation is a feature of creating a work product and ideally is seamless.
- Business communication becomes indistinguishable from social. The history of business communication having a distinct protocol from social goes back at least to learning the difference between a business letter and a friendly letter in typing class. Today we use casual tools like SMS for business communication and while we will certainly be more respectful and clear with customers, clients, and superiors, the reality is the immediacy of tools that enable continuous productivity will also create a new set of norms for business communication. We will also see the ability to do business communication from any device at any time and social/personal communication on that same device drive a convergence of communication styles.
- Enterprise usage and control does not make things worse. For enterprises to protect the intellectual property that defines them, and the contributions employees make to that IP, data will need to be managed. This is distinctly different from managing tools; the days of trying to prevent or manage information leaks by controlling the tools themselves are likely behind us. People have too many choices and will simply pick tools (often against policy and budgets) that provide frictionless work with coworkers, partners, customers, and vendors. The new generation of tools will enable the protection and management of information in ways that do not make using the tools worse or cause people to seek available alternatives. The best tools will seamlessly integrate with enterprise identity while maintaining the consumerization attributes we all love.
What comes next?
Over the coming months and years, debates will continue over whether the new platforms and newly created tools will replace, augment, or see only occasional use relative to the tools with which we are all familiar. Changes as significant as those we are experiencing right now happen two ways: gradually, and then suddenly, to paraphrase Hemingway. Some might find little need or incentive to change. Others have already embraced the changes. Perhaps those right now on the cusp realize that the benefits of their new device and new apps are gradually taking over their most important work and information needs. All of these will happen, and that makes for a healthy dialog.
It also makes for an amazing opportunity to transform how organizations make products, serve customers, and do the work of corporations. We’re on the verge of seeing an entire rewrite of the management canon of the 20th century. New ways of organizing, managing, working, collaborating are being enabled by the tools of the continuous productivity paradigm shift.
Above all, it makes for an incredible opportunity for developers and those creating new products and services. We will all benefit from the innovations in technology that we will experience much sooner than we think.
In technology product development there is always something new on the horizon—something better, faster, lighter, slicker, or just shinier. These shiny objects—technologies that are not quite products but feel like they could be the future—are the stuff that hot news stories are made of, that people will stop and ask about when they see one, or that cause a halo around a company. Balancing existing products and minding the business while developing wildly new products is always the biggest challenge facing established organizations. It is also a big challenge for each of us when we consider all we have to get done in the near term.
Recently there have been a lot of stories about companies doing "crazy" things, while at the same time there are stories about challenges in the "core" business. Google is famous for its very forward-looking projects (internet balloons, driverless cars, connected glasses), while at the same time a huge transition is underway in mobile computing that might impact the web search business that has been so phenomenally successful.
When things are going well for a company, shiny objects are hailed as forward-looking innovations from an industry leader, and impatience dominates as people want to see these products in market sooner. When things are not going well, perception radically shifts to questioning focus on the "core business", and impatience dominates as people want to see the company stay more attuned to the challenges of the near term.
In practice, any organization of size engaged in any business with traction needs to be out there firing on all cylinders with the current business while also innovating radically different ideas. Finding a balance in resource allocation, company organization, and both internal and external communications is always going to be a challenge.
Research on the topic led to the work The Ambidextrous Organization, by Charles A. O’Reilly III and Michael L. Tushman. In this work, the authors researched how companies can innovate while maintaining their existing work. As you can imagine, there’s no simple formula or rule and context matters a great deal. The original paper from 2004 has some great case studies worth a read. One of the key learnings is that organizations can be ambidextrous, even if individuals are not always able to deliver on the current business while executing on a new venture at the same time.
In fact, doing both at once is almost impossible: both are full-time jobs, both require immense focus and dedication, and in reality they call for different skills. From my perspective, the real "trick" to being ambidextrous is to realize that the organization as a whole (the company) needs efforts across the full spectrum of product development innovation. There's a need for research labs doing pioneering work on deep technical challenges, using their depth of knowledge and a science-based approach. There's a need for product development organizations to push the boundaries of existing technology bases by developing innovative new features. And there's a need for product development organizations themselves to pioneer new products, whether line extensions or new lines, using their skills in bringing technology to market.
If you consider that a company is a portfolio of efforts and that different skills are required to make different advances, the notion that companies lose focus or get distracted by shiny objects does not really make a lot of sense. It is certainly the case that one person can be drawn too far toward new things and not leave time for their responsibilities. But the more senior a person, all the way up to a technology CEO, the more they wear many hats, shift contexts, and are generally required to focus on many things as a basic part of the job.
If you’re an engineer working on your company’s bread and butter there’s probably a time when you’ve been frustrated with the company’s shiny objects. When things are going well, the folks working on those look like they are creating the future. When things are not going well, you might think the company is squandering resources. Realizing that much of those observations are just perception, you can feel fortunate that your company leadership is working hard to be ambidextrous. You can do the same for your own growth and learning. Rather than get frustrated, get learning.
Here are a few things you can do yourself to exercise the creative side of your brain if you’re feeling a bit jealous of those shiny projects while you focus on getting the next money maker out the door:
Use competitive products. Nothing makes you think differently about your own work like living and breathing your main competitor's product. Not everyone can do this (if you work on jet engines, that is a challenge), but do the very best you can to see your competitor's products from the perspective of their customers. Products can have different conceptual models or approaches, and thinking outside your own box is the first step in being ambidextrous, because sometimes a breakthrough in your product is simply the recognition that your competitor has a better way of approaching things.
Attend conferences outside your core expertise. Go to a conference that is in your domain but stay away from the sessions about your company and products. Much like using competitive products, you can learn a great deal by attending a deep technical conference and freeing yourself from your own technologies and products. Don't just stick to your own domain; you can expand your mind by shifting to another technical silo. If you're a backend developer, go to a games conference and learn the techniques of storyboarding and animation for a change. If your industry has a tradeshow, see if you can explore that too, but again shy away from your core expertise and expand your perspective. Of course, whenever you attend a conference, you owe it to your team and your company to share what you learned in some structured way: internal blog posts, team meetings, email, etc.
Explore on your own. Engineers are famous for their garages, basements, and spare rooms. These are where some of the most amazing innovations in technology were created. Use that space to be systematic in how you explore and learn. Build something. Work your way through an online course or book on a topic you don’t know about. Be multi-disciplinary about how you think about things by pulling in ideas from other fields as you explore. What is so amazing about today’s technology space is just how much can be done creatively in software by a single person.
Write and share. If you have the start of creative ideas, then write them down and share them. The essence of academic research boils down to sharing ideas and so borrow a page from them. Writing will help you to make connections with people who share your passion but will also help you to expand your own perspective on topics. Writing is hard and does not come naturally for everyone, but if you’re trying to think outside the box it is a great tool.
Keep a list. One tool I've found helpful is to keep a list of all the "interesting things" outside my day-to-day responsibilities. New products and technologies pop up all the time. A list gives you a way to see potential trends and patterns from your own perspective. Go back to the list routinely, remind yourself to follow up on a "sighting", and check back to see how it is evolving. Maybe you should use one of the techniques above to devote more time to it.
Where do you find the time? First and foremost, most large companies allow time for professional development; it is a matter of working with your manager to make the best use of it. Beyond that, how you grow in your career and skillset is a function of the time you're willing to put in. The investment in time is one that pays back.
Back in the 1980s, the buzz in the exercise world was cross-training. Companies, like shoes, always have specialists working deeply across the spectrum from current products to crazy new ideas. No company can be totally focused on one place; that's just not healthy. As an individual, you should consider how to cross-train your brain when it comes to your own skills. It doesn't mean you'll be an expert at everything, but you can think beyond the perspective of a specialist.
Healthy companies have a balance of existing products, new products, and wild/breakthrough ideas yet to be products. It might be that some think a company isn’t focused if it is working on projects that seem far afield, but that often just depends on the context at the time. As an engineer you should consider your own growth and training in a similar way. Even though there is always more work to do than time, you owe it to your shareholders (you!) to exercise your brain by exploring new technologies and approaches, even while deadlines loom.
Feel free to connect on LinkedIn or follow @stevesi on Twitter.