Learning by Shipping

products, development, management…

Designing for BYO, a product manager view

Many companies I work with that are creating tools to enhance workplace or personal productivity depend on the “bring your own” (BYO) movement to get their product bootstrapped or just to get in the door. Once in the door, the product design challenges of BYO begin.

After those first customers, they count on broader, viral usage within a company to drive revenue growth. You likely built your product with the notion that “customer equals purchaser,” but once this changes to “business equals purchaser,” you are going to get a whole different level of feedback.

My guess is that nearly every app and service team is both excited and terrified to reach the moment when there is a choice between cozying up to IT and risking alienating newly minted enthusiasts. It is, by all accounts, a choice. Most I talk to plan to navigate it by focusing on customers first and hoping to overwhelm the negatives often associated with IT.

Walk that fine line to enable your product to be at some state of détente with IT.

Get over it. Not entirely of course, but there’s some subtlety at play. At some point you are going to face a fork in the road: navigate enterprise management or face existential challenges. You can be managed without your cooperation or, worse, blocked and literally unable to access important assets that your product requires. Alternatively, you might choose to walk that fine line to enable your product to be at some state of détente with IT.

I know that sounds awful, and while I am sure there are some exceptions (in both organizations and products), this is by far the most common path. It doesn’t have to be a sell-out; when done well, you can bet that you’re going to be in a great position to advance the state of the art and contribute positively to enterprise infrastructure.

In fact, as I was typing this post there was this thoughtful article on putting customers first in business apps.

The essence of BYO is that one can easily acquire and begin to use a device, product or service without IT involvement of any kind. You might need to know the server name for email or maybe how to export data from a line of business system, but otherwise the device or app can tap into the necessary resources without first going through IT and/or purchasing. Even better, these tools likely make it very easy to share information with coworkers or collaborators at other organizations. All folks need is a free email account as a gateway to sharing.

Of course, all this ease of use has at least two main downsides for IT.

First and foremost is the security of the network overall. Devices on a network, running code of unknown origin, tapping into servers are a big risk. What can be transmitted by those devices and apps concerns IT. Inbound PDF attachments or simple USB sticks seemed harmless enough at first, until they became massive attack vectors.

Second, the data and servers being accessed contain information that you need to use but do not own. These are corporate assets, and managing and tracking them is a fiduciary responsibility for IT; in some cases, such as HIPAA or SEC regulations, the penalties for messing up are severe. A simple case of installing something like LogMeIn or internet messaging can potentially become a significant liability.

My own personal experience “helps” me to see this pattern. Working on Microsoft Office in the early days, we were very clearly a “bottom up adoption” product. People were going to stores and buying the product with their own money and creating amazing looking documents they would bring into work (often on PCs bought with personal funds at those same stores). Pretty soon groups of people were using corporate expense accounts to acquire “5 packs” of Office. Then over time, Microsoft grew an enterprise sales force that could offer large deals.

That’s the sales side, but on the product side the management and deployment of the product (deployment being decidedly old school now) became unwieldy. As a result, the late 1990s saw a movement to reduce so-called “TCO,” or total cost of ownership. TCO mandated a vast number of controls across the entire platform, and from it grew a whole generation of features, from the registry, to logon scripts, to the now-dreaded “corporate desktop.” The TCO drumbeat reached an epic volume as it described owning a $1,500 PC as a $20,000-per-year expense to companies.

While I was dragged kicking and screaming to deliver features that I felt could be used to make the product worse, the reality was that at the time this was also what grew the business. The tradeoffs, debates, and design choices were all very real.

In a startup, these choices are much more existential than they were for us back then. Given the hurdles to overcome to become a widely used tool, there’s a good chance you might want to be more proactive about how your product fits with BYO.

As a product manager facing this decision point, you have this intense belief that IT wants to make your product worse, harder to use, and to basically ruin your good work. The fact that so few built-for-IT products have the design sense, usability, or approachability of apps and services focused on consumers only reinforces this.

While there are dozens of potential traps and pitfalls that can cause a product to fall out of favor, it is a good idea to consider a few important design choices you can make now that will enable your consumer and BYO product to be viewed in a positive light. It is important that these design choices be considered product assets rather than objection handlers.

Ultimately, if you design a product to be used in business, where you can charge more, it should be better, not worse, than the product used in the consumer space. It used to be that business versions of products charged more while doing less and being harder to use and acquire. The SaaS and app models invert this. Phil Libin, founder of Evernote, says it best:

Business class means superior and we challenge ourselves to make our product better when you upgrade to the business version. — Phil Libin, Evernote

The following are five product areas to consider when it comes to making a product business ready:

  • Identity and authentication. The first thing a business needs from a product is that employees sign into the product using business-owned credentials (such as Active Directory). This allows IT to send a clear message to the individual that they are operating in a business context. This needs to include the authentication mechanisms used at the organization and enforce the associated password policy and security. At the same time, you owe it to your own ease of use that stand-alone credentials can be used, especially for collaboration. How you manage the bridge and the commingling of credentials depends on the flow of assets through your product (see the sign-in sketch after this list).
  • Network usage. IT organizations guard their network across several dimensions. Platform providers make it possible to use VPN (secured with enterprise credentials) or other access methods for WiFi. Your product should use well-known/documented ports and be clear with IT about what travels over the wire and in what volumes. Techniques like polling, using obscure ports, and more will only hinder your product usage.
  • Changes related to re-orgs. In an organization of any size, employees quit, vendors are fired, or staffing on a project just changes. If your product is used across a group of people, then IT will want to be there to assist in supporting these changes within your product. How content remaining on devices can be recalled, or how a person loses permissions to content, are important design choices you can make in building a product that is BYO friendly.
  • Content “ownership”. If your product creates or consumes content, then your product owes it to IT to participate in the content-management responsibility of the organization. At one extreme, the clipboard exists in your app and in every other app, so you can dodge this question by saying it isn’t your exclusive responsibility. On the other hand, by providing mechanisms for IT to have some telemetry on and actions over content, you invite your product to be desired by IT, not just challenged. More than any other area, this is where many potential solutions exist, and many possible ways to make the product worse or to upgrade it to business class.
  • Features. Products are more than editors and tools for sharing, so there are going to be unique features in your product. Some of those unique features will intersect in ways that might run counter to a business policy. Sometimes this could be simple, such as an ability to generate email notifications, which might be frowned upon. Other times it might be complex, as a feature runs directly afoul of regulatory compliance. At some level there are going to be features you give IT permission to enable or disable (see the managed-configuration sketch below). No area is more challenging, of course, and thinking hard about the design tradeoffs when a feature might not be there is important. A feature like password protection might be great for consumers but becomes a huge problem for IT when personnel change. Alternatively, you might have a feature that becomes a “must use,” and if that’s the case you want to consider how something you might have thought of as optional becomes permanent. For example, you might optionally support a confirmation email when adding new people to a project, and IT might require that email be sent to produce a record of access changes.
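To make the identity and authentication point concrete, here is a minimal sketch (in Kotlin) of routing sign-in through enterprise SSO when the email domain belongs to a business, and through stand-alone credentials otherwise. Every name here (AuthService, Credential, authenticateAgainstIdp) is hypothetical, a sketch of the design choice rather than any real identity SDK:

```kotlin
// Hypothetical sketch: route sign-in by email domain. Not a real SDK.
sealed class Credential {
    // Org-owned identity (e.g., backed by Active Directory via OIDC/SAML).
    data class EnterpriseSso(val idToken: String) : Credential()
    // Stand-alone identity that keeps BYO sign-up friction low.
    data class Standalone(val email: String, val password: String) : Credential()
}

class AuthService(
    // Email domain -> identity-provider URL for businesses that require SSO.
    private val ssoDomains: Map<String, String>
) {
    fun signIn(email: String, promptPassword: () -> String): Credential {
        val domain = email.substringAfter('@').lowercase()
        val idpUrl = ssoDomains[domain]
        return if (idpUrl != null) {
            // Business context: the IdP enforces password policy, MFA, revocation.
            Credential.EnterpriseSso(idToken = authenticateAgainstIdp(idpUrl, email))
        } else {
            // Consumer/collaborator context: all folks need is an email account.
            Credential.Standalone(email, promptPassword())
        }
    }

    // Stub standing in for a real OIDC/SAML flow against the org's IdP.
    private fun authenticateAgainstIdp(idpUrl: String, email: String): String =
        "id-token-for-$email-via-$idpUrl"
}
```

The design choice being illustrated is the bridge: the same sign-in entry point serves both audiences, and a business only has to register its domain to pull its employees onto the managed path.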
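For the features point, modern platforms already give IT a standard hook for those enable/disable switches. On Android, for example, managed configurations are a real mechanism by which an enterprise can push per-app policy; the restriction keys below are hypothetical examples of the kinds of switches a BYO app might choose to expose:

```kotlin
import android.content.Context
import android.content.RestrictionsManager

// Reads IT-pushed policy via Android managed configurations.
// The API is real; the keys are hypothetical examples of BYO feature switches.
class FeaturePolicy(context: Context) {
    private val restrictions =
        (context.getSystemService(Context.RESTRICTIONS_SERVICE) as RestrictionsManager)
            .applicationRestrictions

    // A consumer feature IT may need to turn off (password-protected documents
    // become unrecoverable when personnel change).
    val passwordProtectAllowed: Boolean
        get() = !restrictions.getBoolean("disable_password_protect", false)

    // An optional feature IT may require to always be on (a record of access changes).
    val confirmationEmailRequired: Boolean
        get() = restrictions.getBoolean("require_access_change_email", false)
}
```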

There are many other avenues to consider. I think it is possible to make a product better when enabled for business even if you start from the very solid business and design foundation of customer first.

The modern mechanisms for administering IT control are vastly superior to the PC-era mechanisms. The days of running arbitrary code, tweaking every aspect of the UI, or installing add-ins that alter the base functionality of a product are long gone. These approaches showed how great products can be made unfamiliar, hard to use, and less robust even with the best intentions. Worse, the mechanisms developed to enable these approaches proved to be vectors for security problems, performance challenges, and in general sources of unpredictability and unreliability.

Today’s devices support state-based management, app stores, and security contexts that greatly improve the ability to deliver upgraded business features. To many, these tools are not yet enough. The platform vendors are carefully balancing the approaches they introduce against the known downsides of the old approaches.

There’s a disruption in the way devices, apps, and information are managed, but that does not necessarily mean an elimination.

–Steven Sinofsky

Written by Steven Sinofsky

May 1, 2014 at 3:00 pm

Posted in posts


Shipping is a Feature: Some Guiding Principles for People That Build Things

I love questions about advice because they really force one to think carefully about what to say. My former colleague (and fellow Cornellian) Jackie Bavaro, now at Asana, who recently co-authored a thoughtful book on the ins and outs of securing a role in product management, asked the following question on Quora:

[Image: the Quora question]

In the back of my head I always have that product manager view of accountability, and so rather than “advise” I much prefer to have a dialog in context and make sure accountability stays with the person asking. It is difficult enough to be a manager and avoid the constant pull of “telling people what to do,” and certainly on big topics one really has to be careful. At the same time, spouting clichés like “what do you think” or “it depends” can frustrate folks. This in itself is a valuable PM lesson.

I’ve been really lucky (literally) to work with many amazing folks and so many routine interactions yield empowering and powerful insights that one can bring forward. I’ve used blogging over many years to share those and many are also republished in our book on strategy and collaboration.

In thinking about the question and some of the recent design-oriented discussions, here are five takeaways that have always guided me. They didn’t originate with me, except in the sense that I discovered their value while making some mistake.

For these five bits of advice, I chose to focus on what I think is the most challenging aspect of being a PM, which is achieving clarity and maintaining a point of view for a product when all forces work against this very thing. What customers value most in a product is that “it just works” or “does what it is supposed to do,” and yet at every step, the dynamics of design work to make this the most difficult thing to achieve. For those that have not built products, understanding the context and dynamics of decision making while building something is a bit abstract.

Shipping is a feature. Every PM knows this, but it is also the hardest thing to get right. As a PM you throw around things like “the enemy of the good is the perfect” or, well, “shipping is a feature” all the time, yet we all have a hard time getting a product out the door. There’s always more to do to get it right. The way this was taught to me was so old it involved software being shipped in a box on floppies, but the visual has stuck with me. When you ship a product in a box, on the back of the box are screen shots and marquee features. What does not come on the box are lists of all the features you thought of doing or different executions you considered. It is that simple. Once you release the product you begin a new adventure building the next iteration. It is almost always the case that what you were thinking before you had customers will change in some ways in the version that comes next. So ship. Learn. Gather data. Iterate. Whether you spend three years or three months developing a product, this motion is the same.

You get paid to decide. Some people love making decisions on their own. Other people need socialization and iteration to make a choice. Either way can work (or not) as a product manager, but to be great you really do have to decide. Deciding anything important or meaningful at all means some people will disagree. Some might really disagree a huge amount. The bottom line is that a decision has to be made. A decision means choosing not to do something, and it means achieving clarity in your design. The classic way this used to come up (and still does) is the inevitable “make it an option.” Can’t decide whether a new mouse wheel should scroll or zoom? Whether there should be a conversation view or an inbox view? Whether to AutoCorrect or not? Go ahead and add it to Preferences or Options. But really the only correct path is to decide to have the feature or not. Putting in an option to enable it means it doesn’t exist. Putting in an option to disable it means you are forever supporting two (then four, then eight) ways of doing something. Over time (or right away) your product has a muddled point of view, and then, worse, people come to expect that everything new can also be turned off, changed, or otherwise ignored. While you can always make mistakes and/or change something later, you have to live with combinatorics or a combinatoric mindset forever. This is the really hard stuff about being a PM, but also the most critical: you bring a point of view to a product—if a product were a person you would want that person to have a clear, focused world-view.

Can’t agree to disagree. Anything that requires more than one person to do (and by definition as a PM you are working with Engineering/Dev, so that means everything you do) will reach a point where you have to do something and not everyone will agree. On a well-run team there are rarely that many decisions that span many people, all of whom have a voice (if you do have that, then fix that problem first). When you do reach a point where you just don’t agree, first, contemplate the first lesson and realize you have to ship. Second, see the previous lesson and realize you do have to decide. That leaves you deciding something that some people (or one person) won’t like. What you don’t want to do is end that meeting over coffee with the infamous “we have to ship and I think we should do X, so let’s just move on and agree to disagree.” Endings like that are never good. The “told you so” moment is just out there waiting to appear. The potential for passive-aggressive org dynamics is all too real. Ultimately, this is just a yucky place to be. So if you’re on the “winning” side of such a dialog, then you have to bring people along every day for a while. You can’t remind people who was right, or that it is your decision, and so on. If you’re on the “losing” side, you need to support the team. You can’t remind people when little things go wrong (which they will) that you were right. Once a choice is made, the next step is all about the greater good. Nothing is harder for technologists than this, because as technologists we believe there is a “right” answer and folks that don’t agree are simply “wrong.” Context is everything, and remember you have to ship–as a team.

Splitting the baby is, well, splitting the baby. Even with all those lessons, time and time again I’ve faced situations where there is a stalemate on the team and the suggestion is made for a middle-of-the-road choice. A feature will appear sometimes. Performance won’t be terrible, but it won’t be great. Customers can do 90% of something, but not everything. Yet it would be possible to decide to have the feature, have great performance, or deliver 100%—it is just that the team dynamic is placing a value on finding a middle road. The biblical narrative of splitting the baby often comes into play here, because in practice if you do arrive at such a compromise, what you’ve in effect done is reached a state where you have made no one happy in the room and no one happy down the road. Of course, compromise is a critical part of product design for many reasons. The bigger the team, the more varied the customers, and the more divergent the customer needs, the stronger the need to find middle paths for complex choices. There is magic when you can do this without just muddling the product. But there is risk that if your design language turns into splitting the baby, the output is exactly what you don’t want to achieve.

10% better can be 100% different. The hardest choices a PM can make are not the new choices for a product—a clean slate is challenging, but that is the truly fun part of design for many. It is not easy, but it is fun. The real challenge comes when deciding what to do next time around (in three weeks, three months or more). The first thing you do is remember all those things you could not get done or had to decide sub-optimally. So you think you’ll go back and “polish” off the work. But remember, now you have customers and they are using the product. They might not see what wasn’t done. They might actually like what you ended up doing. Your temptation to tweak things to “finish” them might come across as better in an incremental sense, but will it be that much better for existing customers? Will it be so much better for new customers that it is worth the risk of touching that code again? The big question for you is whether you can really measure how much better something is—is it more efficient, faster, deeper, etc.? There are many cases, particularly in user-experience flow and design, where incremental improvement simply amounts to speed bumps in using a new release and the downside masks the upside. Sometimes when you improve something 10% what you really do is make it 100% different.

Context is everything in decision making as a PM. The skills and experience of the team matter. The realities of where the business is at a given time and the ability to execute on a proposal are all factors that weigh heavily. Hindsight is 20/20 or better in the world of PM, and we all know there are many standing by to offer perspective, advice, or even criticism of the choices a product makes.

If you don’t make those choices in a timely manner, then there won’t be much to talk about. Sometimes the most difficult thing to do is keep moving forward. That’s why some of the most valuable advice I’ve received relates to the very challenges of making tough product calls.


–Steven Sinofsky @stevesi

This post originally appeared on a16z.com.

Written by Steven Sinofsky

April 17, 2014 at 8:30 am

You’re doing it wrong

Smartphones and tablets, along with apps connected to new cloud-computing platforms, are revolutionizing the workplace. We’re still early in this workplace transformation, and the tools so familiar to us will be around for quite some time. The leaders, managers, and organizations that adopt new tools sooner will quickly see how tools can drive cultural changes — developing products faster, with less bureaucracy and more focus on what’s important to the business.

If you’re trying to change how work is done, changing the tools and processes can be an eye-opening first step.

Check out a podcast on this topic hosted by Andreessen Horowitz’s Benedict Evans. Available on Soundcloud or on a16z.com.

Many of the companies I work with are creating new productivity tools, and every company starting now is using them as a first principle. Companies run their business on new software-as-a-service tools. The basics of email and calendaring infrastructure are built on the tools of the consumerization of IT. Communication and work products between members of the team and partners are using new tools that were developed from the ground up for sharing, collaboration and mobility.

Some of the exciting new tools for productivity that you can use today include: Quip, Evernote, Box and Box Notes, Dropbox, Slack, Hackpad, Asana, Pixxa Perspective, Haiku Deck, and more below. This list is by no means exhaustive, and new tools are showing up all the time. Some tools take familiar paradigms and pivot them for touch and mobile. Others are hybrids of existing tools that take a new view on how things can be more efficient, streamlined, or attuned to modern scenarios. All are easily used via trials for small groups and teams, even within large companies.

Tools drive cultural change

Tools have a critical yet subtle impact on how work gets done. Tools can come to define the work, as much as just making work more efficient. Early in the use of new tools there’s a combination of a huge spike in benefit, along with a temporary dip in productivity. Even with all the improvements, all tools over time can become a drag on productivity as the tools become the end, rather than the means to an end. This is just a natural evolution of systems and processes in organizations, and productivity tools are no exception. It is something to watch for as a team.

The spike comes from the new ways information is acquired, shared, created, analyzed and more. Back when the PC first entered the workplace, it was astounding to see the rapid improvements in basic things like preparing memos, making “slides,” or the ability to share information via email.

There’s a temporary dip in productivity as new individual and organizational muscles are formed and old tools and processes are replaced across the whole team. Everyone individually — and the team as a whole — feels a bit disrupted during this time. Things rapidly return to a “new normal,” and with well-chosen tools and thoughtfully designed processes, this is an improvement.

As processes mature or age, it is not uncommon for those very gains to become burdensome. When a new lane opens on a highway, traffic moves faster for a while, until more people discover the faster route, and then it feels like things are back where they started. Today’s most common tools and processes have reached a point where the productivity increases they once brought feel less like improvements and more like extra work that isn’t needed. All too often, the goals have long been lost, and the use of tools is on autopilot, with the reason behind the work simply “because we always did it that way.”

New tools are appearing that offer new ways to work. These new ways are not just different — this is not about fancier reports, doing the old stuff marginally faster, or bigger spreadsheets. Rather, these new tools are designed to solve problems faced by today’s mobile and continuous organization. These tools take advantage of paradigms native to phones and tablets. Data is stored on a cloud. Collaboration takes place in real time. Coordination of work is baked into the tools. Work can be accessed from a broad range of computing devices of all types. These tools all build on the modern SaaS model, so they are easy to get, work outside your firewall and come with the safety and security of cloud-native companies.

The cultural changes enabled by these tools are significant. While it is possible to think about using these tools “the same old way,” you’re likely to be disappointed. If you think a new tool that is about collaboration on short-lived documents will have feature parity with a tool for crafting printed books, then you’re likely to feel like things are missing. If you’re looking to improve your organizational effectiveness at communication, collaboration and information sharing, then you’re also going to want to change some of the assumptions about how your organization works. The fact that the new tools do some things worse and other things differently points to the disruptive innovation that these products have the potential to bring — the “Innovator’s Dilemma” is well known to describe the idea that disruptive products often feel inferior when compared to entrenched products using existing criteria.

Overcoming traps and pitfalls

Based on seeing these tools in action and noticing how organizations can re-form around new ways of working, the following list compiles some of the most common pitfalls addressed by new tools. In other words, if you find yourself doing these things, it’s time to reconsider the tools and processes on your team, and try something new.

Some of these will seem outlandish when viewed through today’s lens. As a person who worked on productivity tools for much of my career, I think back to the time when it was crazy to use a word processor for a college paper; or when I first got a job, and typing was something done by the “secretarial pool.” Even the use of email in the enterprise was first ridiculed, and many managers had assistants who would print out email and then type dictated replies (no, really!). Things change slowly, then all of a sudden there are new norms.

In our Harvard Business School class, “Digital Innovation,” we crafted a notion of “doing it wrong,” and spent a session looking at disruption in the tools of the workplace. In that spirit, “you’re doing it wrong,” if you:

  1. Spend more time summarizing or formatting a document than worrying about the actual content. Time and time again, people over-invest in the production qualities of a work product, only to realize that all that work was wasted, as most people consume it on a phone or look for the summary. This might not be new, but it is fair to say that the feature sets of existing tools and implementation (both right for when they were created, I believe) would definitely emphasize this type of activity.
  2. Aim to “complete” a document, and think your work is done when a document is done. The modern world of business and product development knows that you’re never done with a product, and that is certainly the case for documents that are steps along the way. Modern tools assume that documents continue to exist but fade in activity — the value is in getting the work out there to the cloud, and knowing that the document itself is rarely the end goal.
  3. Figure out something important with a long email thread, where the context can’t be shared and the backstory is lost. If you’re collaborating via email, you’re almost certainly losing important context, and not all the right folks are involved. A modern collaboration tool like Slack keeps everything relevant in the tool, accessible by everyone on the team from everywhere at any time, but with a full history and search.
  4. Delay doing things until someone can get on your calendar, or you’re stuck waiting on someone else’s calendar. The existence of shared calendaring created a world of matching free/busy time, which is great until two people agree to solve an important problem — two weeks from now. Modern communication tools allow for notifications, fast-paced exchange of ideas and an ability to keep things moving. Culturally, if you let a calendar become a bottleneck, you’re creating an opening for a competitor, or an opportunity for a customer or partner to remain unhappy. Don’t let calendaring become a work-prevention tool.
  5. Believe that important choices can be distilled down into a one-hour meeting. If there’s something important to keep moving on, then scheduling a meeting to “bring everyone together” is almost certainly going to result in more delays (in addition to the time to get the meeting going in the first place). The one-hour meeting for a challenging issue almost never results in a resolution, but always pushes out the solution. If you’re sharing information all along, and the right people know all that needs to be known, then the modern resolution is right there in front of you. Speaking as a person who almost always shunned meetings to avoid being a bottleneck, I think it’s worth considering that the age-old technique of having short and daily sync meetings doesn’t really address this challenge. Meetings themselves, one might argue, are increasingly questionable in a world of continuously connected teams.
  6. Bring dead trees and static numbers to the table, rather than live, onscreen data. Live data analysis was invented 20 years ago, but too many still bring snapshots of old data to meetings which then too often digress into analyzing the validity of numbers or debating the slice/view of the data, further delaying action until there’s an update. Modern tools like Tidemark and Apptio provide real-time and mobile access to information. Meetings should use live data, and more importantly, the team should share access to live data so everyone is making choices with all the available information.
  7. Use the first 30 minutes of a meeting recreating and debating the prior context that got you to a meeting in the first place. All too often, when a meeting is scheduled far in advance, things change so much that by the time everyone is in the room, the first half of the hour (after connecting projectors, going through an enterprise log-on, etc.) is spent with everyone reminding each other and attempting to agree on the context and purpose of the gathering. Why not write out a list of issues in a collaborative document like Quip, and have folks share thoughts and data in real time to first understand the issue?
  8. Track what work needs to happen for a project using analog tools. Far too many projects are still tracked via paper and pen which aren’t shared, or on whiteboards with too little information, or in a spreadsheet mailed around over and over again. Asana is a simple example of an easy-to-use and modern tool that decreases (to zero) email flow, allows for everyone to contribute and align on what needs to be done, and to have a global view of what is left to do.
  9. Need to think about which computer or device your work is “on.” Cloud storage from Box, Dropbox, OneDrive and others makes it easy (and essential) to keep your documents in the cloud. You can edit, share, comment and track your documents from any device at any time. There’s no excuse for having a document stuck on a single computer, and certainly no excuse for risking the use of USB storage for important work.
  10. Use different tools to collaborate with partners than you use with fellow employees. Today’s teams are made up of vendors, contractors, partners and customers all working together. Cloud-based tools solve the problem of access and security in modern ways that treat everyone as equals in the collaboration process. There’s a huge opportunity to increase the effectiveness of work across the team by using one set of tools across organizational boundaries.

Many of these might seem far-fetched, and even heretical to some. From laptops to color printing to projectors in conference rooms to wireless networking to the Internet itself, each of those tools were introduced to skeptics who said the tools currently in use were “good enough,” and the new tools were slower, less efficient, more expensive, or just superfluous.

The teams that adopt new tools and adapt their way of working will be the most competitive and productive teams in an organization. Not every tool will work, and some will even fail. The best news is that today’s approach to consumerization makes trial easier and cheaper than at any other time.

If you’re caught in a rut, doing things the old way, the tools are out there to work in new ways and start to change the culture of your team.

–Steven Sinofsky @stevesi

This article originally appeared on <re/code>.

Written by Steven Sinofsky

April 10, 2014 at 6:00 pm

Posted in posts


Hiring for a job you never did or can’t do

One of the most difficult stages in growing your own skillset is when you have to hire someone for a job you can’t actually do yourself. Whether you’re a founder of a new company, or just growing a company or team, at some point the skills needed for a growing organization exceed your own experience.

Admitting that you don’t really have the skills the business requires is the first, and most difficult step. This is especially true as an engineer where there’s a tendency to think we can just figure things out. It is not uncommon to go through a thought process that basically boils down to: coding must be the hardest job, so all the other jobs can be done by someone with coding skills.

Fight the fear, let go of control, and make moves towards a well-rounded organization.

Nice try.

If you’ve ever tried some simple home repairs or paint touchups you know this logic doesn’t work—you only need to spend an hour watching some cable TV DIY show to see how the people with skills are always unraveling the messes created by those who thought they could improvise. The software equivalent can sometimes be seen when a developer attempts to design the user interaction flow in a paint program or PowerPoint. Sure it can be done by a developer (and there are talented developers who can do it all), but he or she can quickly reach their limits, and so will the user interaction.

Open up your engineer’s mind to embrace the truth that every other discipline or function you will ever collaborate with has a deep set of skills and experiences that you lack. Relative to engineering, the “softer” skills often pose the biggest eye-opening surprise to engineers. Until you’ve seen the magic worked by those skilled in marketing, communications, sales, business development or a host of other disciplines, you might not appreciate the levels of success you can achieve by turning over the task to trained professionals.

I have seen this first-hand many times. Most recently it occurred while working with the a16z portfolio company Local Motion when it came time to do some of the early announcements around the fleet-management company. The co-founders possess engineering and design backgrounds from elite institutions, and built the product themselves, hardware and software. Both are experienced mountaineers, and so they have this engrained sense of self-sufficiency, which is valuable both for building companies and scaling mountains.

When it came time to work with the industry press to tell the story of their company, in some ways they had to suppress their self-sufficient instincts. The founders were self-aware enough to know they had not done this before and agreed to enlist the help of those who have depth and breadth of experience. The pros showed up and spent time learning the team, the business, and the story (professionals do that!). They came back with a plan, roles, responsibilities, and defined what success would look like. It was amazing to watch how the founders absorbed and learned at each step all those things which they had not personally experienced before.

This sounds easy and pretty obvious. But if you put yourself in their shoes, you know that this is not just bringing in a hired gun to get some press; rather, this is hiring a new member of the team and a new founding member of the family. What is vital to keep in mind is that this kind of work is as important as every line of code and every circuit board. The lesson of letting go and letting professionals do their work is clear: delegating is never easy for most, but it is spectacularly difficult when you don’t know what the other person is going to do and the outcome matters a whole lot. Still, you need to let the specialists into your carefully engineered world.

There are moments of terror. You’re watching people talk about your product using tools and techniques you are unfamiliar with to connect with your potential customers. Even though it is a product, you are apt to feel as though this is a discussion about yourself. You question every step. You doubt the skills of the person you hired. You are certain everything will go wrong.

It is at that point—right when you start to panic and think that unless you do this yourself things will fail—that you need to let go. You need to say “yes” to hiring a person to do the work, and then let them do their best work.

Just keep reminding yourself that you’ve never done the job before, and that your role is to hire someone who knows more than you. Even when you’ve wrapped your head around that, there are a few ways you can get tripped up. The key to success is avoiding these mistakes in the hiring process:

  1. Asking candidates to teach you. A good candidate will of course know more than you. Their interview is not a time for them to teach you what they do for a living. The interview is for you to learn the specifics of a given candidate, not the job function. The best bet is to do your homework. If you’re hiring your first sales leader then use your network and talk to some subject matter experts and learn the steps of the role ahead of time.
  2. Expecting a candidate to know or create your strategy. It is fine to expect engineering candidates to know the tools and techniques you use. You wouldn’t expect an engineering candidate to know your unannounced product, of course. It is equally challenging to expect a new marketing person to have a marketing plan for your product. Even if you ask them to brainstorm for hours, keep in mind the inputs into the process—they only know the specifics you have provided them. For example, don’t expect a marketing candidate to magically come up with the right pricing strategy for your product without a chance to really dive in. On the other hand, you can expect a candidate to walk you through in extreme detail their most recent work on a similar topic. You can get to their thought process and how they worked through the details of the problem domain.
  3. Interviewing too many folks. You will always hear stories about the best hire ever, found after seeing 100 people. Those stories are legendary. On the other hand, you rarely hear the stories that start with “we could not find the perfect QA leader so we waited and waited until we had a quality crisis.” Yet these latter stories happen far too often. Again, you should not compromise, but if after bringing a dozen or more people through a process you are still searching, consider the patterns you’re seeing and why this is happening. A good practice if you’ve not found the right hire after going through a lot of folks is to bring in a new point of view: consider recruiting the help of a search firm, a board member, or a subject-matter advisor to get you over the first hire in a new job function.

These add up to the quest for the perfect hire. When it comes to engineering you give yourself a lot of leeway because you feel you can direct a less experienced person and because you can gauge more easily what they know and don’t know. When it comes to other roles you become more reluctant to let go of a dream candidate. This almost always nets out costing you time, and in a new effort time is money. That isn’t saying to settle, but it is saying to use the same techniques of approximation you naturally use when hiring people in your comfort zone.

The most difficult part of hiring for a job you don’t know first-hand is the human side. Every growing organization needs diversity, because every product and service is used by a diverse group of people. The different job functions often bring with them a diversity of personality types that adds to the challenges of hiring. The highly analytical developer looking to hire a strong qualitative thinker for marketing, or a highly empathetic sales leader, is often going to face a challenge just making the human connection.

This human connection is a two-way street. Embrace it. Recognize the leap each of you is taking. Realize that the interpersonal skills required to call on customers every day are just different from the interpersonal skills used when hacking. The challenge of making that human connection is one for the person doing the hiring to overcome. Often that’s the biggest opportunity for personal growth when hiring people to do a job you can’t.


–Steven Sinofsky (@stevesi)


Note: A form of this post originally appeared on FastCo.

Written by Steven Sinofsky

April 3, 2014 at 9:30 am

Posted in posts


Look at me! More thoughts on notifications

Hunter Walk shared some thoughts on notifications and the challenges he (and, judging by Twitter, many people) see. Many of the companies I’ve met with see design challenges in how much and when to offer up notifications. There’s a long history of trying different approaches and modalities to notifications, so it seems worth adding some perspective for those familiar with what we see today on modern mobile platforms.

Notifications are one of those features where everyone has an opinion, and rightfully so. The feature is so visible and for just about everyone seems so close to being helpful but yet always off by just a little. There’s a general UX principle that is worth considering, which is anytime you push some feature on your customer you really want it to be right (correct, useful, helpful) for him/her 100% of the time. If not, chances are your customer will recall the negatives of the feature far more than the positives. This applies to notifications, autocorrect, form completion, and more. If you find yourself putting a lot of design energy into how your customer can undo or dismiss your best guess at what was intended, then you’re probably being too aggressive.

Anytime you push some feature on your customer you really want it to be right for him/her 100% of the time.

In some ways, today’s Notification Centers are the extreme case of “we’re collectively going to be wrong so often that we’re just going to put stuff in one place” or “there’s just so much we app developers have to tell you that the platforms are squeezing it all into one place to avoid cluttering up the platform itself”.

It is as if it isn’t enough that we have to manage all of our apps, bookmarks, and preferences; we now have to manage all the ways apps tell us stuff we might want to know. Hunter raised the complaint (to a chorus of agreement) that after two weeks, you have to go in and turn off notifications on a new app. Of course this improves battery life and reduces chatter, and then, as some noted, causes you to start to ignore the app.

What’s an app developer to do?

In the PC era a lot of effort went into honing the design of verbs and interaction. It took a decade to develop the right approaches to menus, toolbars, status bars, panes, and more. Because many apps were essentially a large set of verbs, this was the big design challenge. The rough equivalent to this design challenge in mobile is the role of notifications. That’s because in a mobile world many apps exist to be essentially a stream of information.

Notifications suffer a clear tension between the platform PM and the app PM.

The platform PM wants to contain apps to the app experience, extending the walled garden such that apps don’t interfere with other apps. By definition a notification is a way for one app to interfere with other apps. The platform PM sees notifications as necessary far less frequently than the app PM might. This leads to a set of APIs that offer a clear, albeit limited, view of what a notification is and what it can do. This seems reasonable, and we all want the platform folks maintaining a global view of consistency in approach and control.

The app PM sees the world through the lens of their app. The assumption is that someone downloading and using an app has made a choice to count on that app for the purpose it was designed, and so whether it is an airline app alerting you to a flight (or a potential discount on a future flight), a bank with a balance alert (or advertising a new bank feature), or a communication tool letting you know of some inbound message (or alerting you that a friend is now online), the app PM sees any of these as worthwhile reasons to interfere with your flow or context. On all platforms, apps often design their own type of notifications that get used while you are using the app (for in-app purchases or feature advertising) because the platform is not rich enough. All this seems legitimate, certainly at the time of the initial designs.

Over time the initial designs from both parties tend to lead to an expansion in the ability to interrupt you. Each subsequent release of a platform almost always adds more capabilities to enhance and customize notifications in an effort to offer more while also trying to keep the noise in the system manageable. Each app adds more and more notifications in an effort to more deeply engage with customers and likely to encourage customers to use more surface area of the app.

Many who use iOS 7 have spent quite a bit of time mired in notification customization. Here is an overview of the iOS 7 features, both notifications and Notification Center, worth a look if you’re on Android or not sure of the impressive depth that Apple designed for notifications. Android is probably not quite at the same level of consistency and control, though as you might expect there are several apps that can help you customize the notifications of other apps.

Design Challenge

At the extreme we end up with two core design challenges.

Notification spam. This one is easy. Too many apps just think too much of what is going on is important to you. Like too much of any design the burden falls to app product managers to just be more thoughtful. Like so many elements of any platform, when there is a view that making money depends on getting folks to use more of a product or spend more time in a product, the platform starts to look a bit like “surface area to be exploited”. Like the old Start Menu and desktop in Windows, the more places an app can “infuse” itself and invade your space the better. On Android we see this in the share item menu as another bit of surface area to be gamed or exploited.

Notification action. The most common issue with a notification is that your flow is interrupted and then you seem to be pulled into a state of distraction until you deal with the inbound notice. We each have our own human-based algorithms for how to cope. We always jump on SMS. We (almost) always ignore a ringing phone. We wish we could find a way for some app or another to stop bugging us, so we uninstall it.

On iOS there is very little you can do with a notification other than dismiss it or jump directly to the app or the place in the app that generated it. Modal or must-act notifications are generally discouraged. The resulting notification center then turns into a reading list spanning a bunch of apps, which for some ends up being a list of stuff you’ve already seen pop up in context, or just a reminder to get to the app.

On Android, the design takes a different approach, which is to enable notifications that can take actions. This is where the elegance of notifications is really stretched, but in a way that many find appealing. For example, when you receive a new mail message, the Gmail notification lets you archive it based on reading the initial content, and various SMS clients offer the ability to reply inline. A rough sketch of this pattern follows.
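For illustration, here is roughly what that pattern looks like with androidx’s NotificationCompat. The channel id, icons, reply key, and ReplyReceiver are placeholder choices for the sketch, not a prescription:

```kotlin
import android.app.PendingIntent
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent
import androidx.core.app.NotificationCompat
import androidx.core.app.NotificationManagerCompat
import androidx.core.app.RemoteInput

// Placeholder receiver that would route the inline reply to the app's backend.
class ReplyReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        val reply = RemoteInput.getResultsFromIntent(intent)
            ?.getCharSequence("key_text_reply")
        // ... send `reply` through the messaging service ...
    }
}

// A notification carrying a verb: inline "Reply" without opening the app.
fun postMessageNotification(context: Context, sender: String, preview: String) {
    val replyInput = RemoteInput.Builder("key_text_reply").setLabel("Reply").build()

    val replyPending = PendingIntent.getBroadcast(
        context, 0, Intent(context, ReplyReceiver::class.java),
        PendingIntent.FLAG_UPDATE_CURRENT or PendingIntent.FLAG_MUTABLE
    )

    // Each action added here is, in effect, a feature of the app re-implemented
    // inside the notification, which is exactly where the verb set starts to sprawl.
    val replyAction = NotificationCompat.Action.Builder(
        android.R.drawable.ic_menu_send, "Reply", replyPending
    ).addRemoteInput(replyInput).build()

    val notification = NotificationCompat.Builder(context, "messages_channel")
        .setSmallIcon(android.R.drawable.ic_dialog_email)
        .setContentTitle(sender)
        .setContentText(preview)
        .addAction(replyAction)
        .build()

    NotificationManagerCompat.from(context).notify(sender.hashCode(), notification)
}
```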

On Windows Phone, one has the additional option of pivoting notifications by person on your home screen so you can glance and see that there is activity by person. This has a natural appeal when there are a small set of folks you care deeply about but as a general purpose mechanism it might not scale particularly well.

The core challenge with offering verbs in notifications is almost “classic” in that one can never offer the right set of verbs, because eventually the design turns into attempting to implement a substantial number of the app’s features in the notification. Mail is a great example of the challenge: delete, file, reply, flag, etc. all become possible verbs, and the usage patterns in aggregate lead to the whole mail experience. The more users you have, the larger the group of customers that don’t like the subset of verbs you picked.

Ultimately taking action based on a notification turns into a bit of a frustration in that the notification centers essentially offer a new way to launch all your apps. What was a nice feature turns into a level of indirection almost all the time.

Opportunity

Therein is the opportunity. In a world where many people are almost constantly glancing at their phones and wanting to know more about what is going on in their digital lives and a world where almost every app represents an endless stream of information along with in-app notifications, it seems that notifications need a different level of semantics.

For example, with just a few friends Facebook always has something new to see so why notify you of the obvious. For many, Twitter is essentially a notification engine. Mail certainly is a constant stream, arguably of decreasing importance. In other words, it isn’t even clear what makes sense to notify you about when the natural behavior is to periodically launch apps to see what’s new within the app context and apps are generating new information all the time.

Similarly, if most everyone knows that when you are talking to another human you both have to turn your phones upside down to avoid being distracted (or sharing private information), then there’s a good chance we’ve collectively missed the mark on notifications. The iOS “do not disturb” is an awesome feature, yet it seems to undo all the work in both the notification center and in the apps.

My view is that a feature that requires us to customize it before it becomes useful or less annoying is defaulted the wrong way. Of course this is literally impossible with a product used by more than a few people, since any design at all will have both critics and shortcomings. However, it is possible to default to “out of the way” and then provide a mechanism for people to decide what they might want to be notified about once a usage pattern is established.

For example, I might assert that for an app like mail, sms, Facebook, or Twitter the simple iOS badge is enough. We are all in and out of these apps enough during the day that a specific notification is redundant with the in-app notifications already there.

Each app can almost certainly step back and either know a priori or offer a mechanism that puts people in control of their experience with notifications. If we’re bouncing in and out of apps all the time but really do want to know when an SMS comes from a loved one amongst the hundreds many people get each day, then that is the way to design the feature. A minimal sketch of such a policy follows.
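Here is a minimal sketch of that kind of policy, assuming the app keeps a set of favorite senders; the names are illustrative, not a platform API:

```kotlin
// Hypothetical policy: interrupt only for favorites; everything else stays a badge.
enum class NotifyLevel { ALERT, BADGE_ONLY, SILENT }

class NotificationPolicy(private val favorites: Set<String>) {
    fun levelFor(sender: String, userIsInApp: Boolean): NotifyLevel = when {
        userIsInApp -> NotifyLevel.SILENT          // in-app UI already shows it
        sender in favorites -> NotifyLevel.ALERT   // the rare interruption worth making
        else -> NotifyLevel.BADGE_ONLY             // discoverable next time the app opens
    }
}

fun main() {
    val policy = NotificationPolicy(favorites = setOf("alice@example.com"))
    println(policy.levelFor("alice@example.com", userIsInApp = false))      // ALERT
    println(policy.levelFor("newsletter@example.com", userIsInApp = false)) // BADGE_ONLY
}
```

The point of the design is that the quiet path is the default, and the user promotes senders to the noisy one, rather than the reverse.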

It is easy to imagine using more context (I loved the Twitter suggestion not to notify while driving/moving fast). It is easy to imagine more machine learning applied to notifications. But I think we can start from the fresh perspective that, given modern usage patterns, the mechanisms provided are simply over-used to begin with.

–Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

February 15, 2014 at 12:00 pm

Posted in posts


Don’t ban email—change how you work!

How often do you hear things like “let’s ban email,” “no more attachments,” “death to PowerPoint decks,” “we’re going paperless,” “meeting-free Friday,” or one of dozens of “bans” designed to do away with something that has become annoying or inefficient in the workplace? If you’re around long enough, you can see just about anything cross over from innovative new tool to candidate for banning. The problem is that banning a tool (or process) in an attempt at simplification never solves the problem. Rather, one should look at a different approach, an approach that focuses on the work, not the tool or process.

What’s the problem?

It is well understood that new technologies go through an adoption curve. In the classic sense it is a normal distribution, as described by researchers in the 1950s. More recently, and generally cited in the software world, is Geoffrey Moore’s Crossing the Chasm, which describes a slightly different path. These models all share a common view of a group of early adopters followed by a growing base of users of a technology.

While adoption is great, we are all too used to experiencing excess enthusiasm for new technologies. As a technology spreads, so does the enthusiasm. Invariably some folks use the technology to the point of abusing it. From reply-all to massive attachments to elaborate scorecards with more dimensions than anyone can understand, the well-intentioned enthusiastic user turns a game-changing tool into a distraction or worse.

Just as with adoption curves, one can create a conceptual “irritation curve” and overlay it with adoption. Of course what is pictured below is not based on any data or specific to any technology, but it is consistent with our collective anecdotal point of view.

[Figure: conceptual adoption and irritation curves over time]

The key is that at some point the adoption of a new product crosses the chasm and the product becomes widely used within a company. While there is a time delay, sometimes of years, at some point the perceived “abuse” of the technology causes a crossover where, for some set of people, the irritation outpaces the utility. Just as there are early adopters, there are also irritation canaries, who are the first to feel the utility of the new technology declining with increased usage.
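To make the conceptual picture concrete, here is a toy model of the two curves; the logistic shapes and every parameter are invented for illustration, not fit to any data:

```kotlin
import kotlin.math.exp

// Toy model only: logistic adoption, with irritation lagging by years but
// given a higher ceiling so it eventually overtakes perceived utility.
fun logistic(t: Double, midpoint: Double, steepness: Double): Double =
    1.0 / (1.0 + exp(-steepness * (t - midpoint)))

fun main() {
    for (year in 0..16) {
        val t = year.toDouble()
        val adoption = logistic(t, midpoint = 5.0, steepness = 0.9)
        val irritation = 1.3 * logistic(t, midpoint = 10.0, steepness = 0.9)
        val verdict = if (adoption > irritation) "useful" else "ban it!"
        println("year %2d  adoption %.2f  irritation %.2f  -> %s"
            .format(year, adoption, irritation, verdict))
    }
}
```

In this toy run the crossover lands around year 11 or 12, which is the moment the irritation canaries start calling for a ban.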

We see this same dynamic not just with tools, but with business processes as well. That status report, dashboard, or check-in mail starts off well-intentioned, and then after some period of time the “just one more thing” additions, or spreading over-usage at all levels of a team, turn a positive into a burden.

Then at some point people start to reject the tool or process. Some even call for an outright ban or elimination.

What’s the solution?

The way to break the cycle is to dive into the actual work and not the tool. Historically, tools fade away when the work process changes.

It is tough to find examples of popular tools and processes that were simply banned and did not make a comeback. Companies that ban meetings or email on Fridays just have more meetings and email Monday through Thursday. I’ve personally seen far too many examples of too much information crammed onto a page (smaller fonts or margins, anyone?) or slides that need to be printed rather than projected in an effort to squeeze more onto a page when there are forced limits on storytelling.

On the other hand, from voice mail to fax machines to pagers to typewriters to voice calls, we have examples of tools that achieved high and subsequently irritating usage levels and still went away, because new tools took over. If you were around for any of those, then you know that people called for them to be banned and yet they continued, until one day we all just stopped using them.

A favorite historical example is a company that told me they removed all the typewriters when PCs were introduced. The company was trying to save time because typewriters were much more difficult to use than PCs with printers (of course!). The problem was immediately seen by those responsible for the workflows in the company–all of a sudden no one could fill out an expense report, transfer to another department, or pay an invoice. All of these work processes, the blizzard of paperwork that folks thought were caused by typewriters, were rendered inoperable. These processes all required a typewriter to fill out the form and the word processors had no way of navigating pre-printed forms in triplicate. Of course what needed to happen was not a pre-printed form that worked in a word processor (what the administrative folks asked for), but a rethinking of the workflow that could be enabled by new tools (what management needed to do).

This sort of rethinking of work is what is so exciting right now. It is fair to say that the established, and overloaded, desktop work-processes and tools of the past 20 years are being disrupted by a new generation of tools. In addition to re-imagining how work can be done to avoid the problems of the past, these tools are built on a modern, mobile, cloud, and social infrastructure.

For example, Tom Preston-Werner, co-founder of GitHub, tells a great story about the motivation for GitHub that echoes my own personal experience. As software projects grew, the communication of code changes and check-ins generated an overwhelming blizzard of mail. Rather than just shut down the notifications and hope for the best, what was needed was a better tool, so he invented one.

At Asana, Dustin Moskovitz tells of their goal to eliminate email for a whole set of tracking and task-management efforts. We’ve all seen the collaborative process play out poorly over email: there is too much of it, and no ability to track and manage the overall work within the tool. Despite calls to ban the process, what is really needed is a new tool. So Asana is one of many companies working to build tools better suited to the work than the one we currently, and collectively, seem to complain about.

Just because a tool is broadly deployed doesn’t mean it is the right or best way to work.

We’re seeing new tools designed from the ground up to enable new ways of working, built on the lessons of the past two decades of tool abuse.

What are some warning signs for teams and managers?

It is easy to complain about a tool. Sometimes the complaints are really about the work itself, and the tool is just the scapegoat. There’s value in looking at tool usage or process creation from a team or management perspective. My own experience is that the clarion calls to ban a tool or process share some common warning signs, worth keeping an eye out for so that the team can avoid the jump to banning something, which we know won’t work.

  • Who is setting expectations for the work product or process? If management mandates the use of a tool, the odds of a rebellion against it go up. As a general rule, the more management frames the outcome, and the less it dictates the mechanism for reaching that outcome, the more tolerance there will be for the tool. Conversely, if the team comes up with a way of working that is hard for outsiders to follow or understand, it is likely to see pushback from partners or management. However, if it is working and the goal is properly framed, then it seems harmless to keep using the tool. Teams should be allowed to use or abuse tools as they see fit so long as the work is getting done, no matter how things might look from outside.
  • Does the work product benefit the team doing the work or the person asking for it? A corollary to the above: a tool or process that is mandated but has no obvious benefit to those doing the work is usually a rebellion-in-waiting. Document production is notorious for this. From status reports to slides to spreadsheets, the specification by management of ever more elaborate “work products” for the benefit of management invariably leads to a distaste for the tool. It is always a good idea for management to reduce the work, tools, and processes whose benefit accrues to management exclusively. Otherwise, the members of the team will start to feel like banning the tool is the only way to ease the overload or tax.
  • Do people get evaluated (explicitly or implicitly) on the quality of the work product or process, or on the end result? A sure-fire warning sign of looming distaste for a tool or process is when a given work product becomes a goal or is itself measured. Are people measured by the completion of a report? Does someone count how many email notifications a person generates? Does someone get kudos for completing a template about the group’s progress? All of these are tools that might be valuable in the course of achieving the actual goals of the team, but they are steps along the way, not the goals themselves. Are your status reports getting progressively more elaborate? Are people creating email rules to shunt notifications to a folder? Are people starting to say “gosh, I must have missed that”? All of these are warning signs of an impending pushback against the tool or process.
  • What doesn’t get done if you just stop? The ultimate indicator of a need to change a tool or process is to play out what would happen if you really did ban it. We all know that banning email is impractical; there are simply too many exceptions, and that is exactly the point. Many tools can have a role in the modern workplace. Banning a tool in isolation from the work never works. Taking a systematic look at the work that requires the tool, the people who use it, and the people who benefit from the output is the best way to arrive at the most appropriate toolset for the workplace.

What tools need to change in your organization? What work needs to change so that the team doesn’t need to rely on inappropriate or inefficient tools?

–Steven (@stevesi)

PS: As I finished writing this post, this Forrester report came across Twitter: Reality Check: Enterprise Social Does Not Stem Email Overload.

Written by Steven Sinofsky

January 31, 2014 at 7:00 am

Posted in posts


Why was 1984 not really like “1984”, for me

ScottyTalksToMac

For me, 1984 was the year of Van Halen’s wonderful album 1984, the movie The Right Stuff, and my second semester of college. It would also prove to be a time of enlightenment for me and computing. On this 30th anniversary of the Apple Macintosh, introduced January 24, and the Super Bowl commercial, aired January 22, I wanted to share my own story of the way the introduction of the Macintosh profoundly changed my path in life.

Perhaps a bit indulgent, but it seemed worth a little backstory. I think everyone from back then is feeling a bit of nostalgia over the anniversary of the commercial, the product, and what was created.

High School, pre-Macintosh

Like many Dungeons and Dragons players my age, my first exposure to post-Pong computing was an Atari 800 that my best friend was lucky enough to have (our high school was not one to have an Apple ][, which hadn’t really made it to suburban Orlando). While my friends were busy listening to the Talking Heads, Police, and B-52s, I was busy teaching myself to program on the Atari. Even though it had the 8K BASIC cartridge, it lacked tape storage, so every time I went over to use the computer I had to start over. Thinking about business at an early age (I suppose), I kept coding and refining what I thought would be a useful program for our family business: computing sales tax on purchases from different states. Enter the total sale, look up the state’s rate in a table, and compute the tax.
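That program is long gone, but the logic was as simple as it sounds. A minimal sketch of the same idea, in Python rather than 8K BASIC and with hypothetical rates, would be:

    # Hypothetical rates for illustration; the real table covered the states
    # the family business actually shipped to.
    SALES_TAX_RATES = {"FL": 0.05, "NY": 0.07, "GA": 0.04}

    def total_with_tax(sale, state):
        # Look up the state's rate in a table and compute the taxed total.
        return round(sale * (1 + SALES_TAX_RATES[state]), 2)

    print(total_with_tax(100.00, "NY"))  # 107.0

Hard-coding those rates, as I would later learn during midterms, was the real bug.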

Atari 800

My father, an entrepreneur but hardly a technologist, was looking to buy a computer to “automate” our family business. In 1981, he characteristically dove head first into computing and bought an Osborne I. For a significant amount of money ($1,795, or about $4,600 today) we owned an 8-bit CPU, two 90K floppy drives, and all (five) of the business programs one could ever need.

I started to write a whole business suite (inventory, customers, orders) in BASIC, which is what my father had hoped I would conjure up (in between SATs and college prep). Well, that was a lot harder than I thought it would be (so were the SATs). Then I discovered dBase II and something called a “database”, which made little sense to me in the abstract (and would only come to mean something much later in my education). In a short time I was able to create a character-based system that would be used to run the family business.

Osborne Ad

To go to college I had a matching Osborne I with a 300-baud modem so I could do updates and bug fixes (darn that shipping company: they changed the rate on COD shipments, which I had hard-coded, right during midterms!).

College Fall Semester

I loaded up the Osborne I and my Royal typewriter/daisy wheel/parallel port “letter quality” printer and was off to sunny Ithaca.

Computer-savvy Cornell issued us our “BITNET electronic mail accounts”; mine was TGUJ@CORNELLA.EDU. Equal parts friendly, memorable, and useful, and no one knew what to do with them. The best part was that the email ID came printed on a punch card. As the user of an elite Osborne, I felt I had gone back in time when I had to log on to the mainframe from a VT100 terminal. The only time I ever really used TGUJ was to apply for a job with Computer Services.

punchcard

I got a job working for the computer services group as a Student Terminal Operator (STO). I had two 4-hour shifts. One was in the main computer science “terminal room” in Upson Hall, featuring dozens of VT100 terminals. The other shift was Friday night (yes, you read that correctly) at the advanced “lab” featuring SGI graphics workstations, IBM PC XTs, an Apple Lisa, peripherals like punch card machines, and a 5-foot-tall high-speed printer. For the latter, I was responsible for changing the ribbon, a task that required me to put on a mask and plastic arm-length gloves.

1403

It turned out that Friday night was all about people coming in to write papers on the few IBM/MS-DOS PCs using WordPerfect. These were among the few PCs available for general purpose use. I spent most of the time dealing with graduate students writing dissertations. My primary job was keeping track of the keyboard templates that were absolutely required to use WordPerfect. This experience would later make me appreciate the Mac that much more.

In the computer science department I had a chance to work on a Xerox Star and Alto (see below), along with Sun workstations, a MicroVAX mini, and so on. The resources available were an incredible blessing to the curious. The computing world was a cacophony of tools and platforms, with the vast majority of campus not yet tapping into the power of computing, and those who did were using whatever was most readily accessible. Cornell was awash in a sea of different computing platforms, and from my vantage point in the computer facilities that just seemed normal, as if there were simply a lot of different types of cars.

xerox-star-interface2

One experience with a new, top-secret, computer was about to change all that.

I ended up getting to use a new computer from an unidentified company. One night after my shift, a fellow STO dragged me back to Upson Hall and took me into a locked room in the basement. There I was able to see and use a new computer. It was a wooden box attached to a wall with an actual chain. It had a mouse, like the ones I had used on the Xerox and Sun workstations. It had a bitmap screen like a workstation. It had an “interface” like the Xerox, with a menu bar across the top and a desktop of files and folders. It seemed small and much quieter than the dorm-refrigerator-sized units I was used to hearing.

What was really magical about it was that it had a really easy-to-use painting program that we all just loved. It had a “word processor”. It was much easier to use than the Xerox, which had special keys and a somewhat overloaded desktop metaphor. It crashed a lot, even in the short time we used it. It also started up pretty quickly. Most everything we did with it felt new and different compared to all the other computers we used.

The end of the semester and exams approached. The few times, a couple of hours in all, that I had to play with this computer were exciting. In the sea of computing options, it was definitely the most exciting thing I had experienced. Perhaps being chained to the wall added to the excitement, but there was something that really resonated with us. When I try to remember the specifics, I mostly recall an emotional buzz.

My computing world was filled with diversity and complexity, which left me unprepared for the way the world was going to change in just the next six weeks.

Super Bowl

To think about Apple’s commercial, one really has to think about the context of the start of 1984. The Orwellian dialog was omnipresent. As freshmen in college we had just finished our obligatory compare-and-contrast of the dystopian messages in Animal Farm, Brave New World, and 1984, not to mention that the Cold War was front and center at every turn. The country emerging from recession gave us all a contrasting optimism.

At the same time, IBM was omnipresent. IBM was synonymous with computing. Sure, the Charlie Chaplin ads were great, but the image of computing to almost everyone was that of the IBM mainframe (CORNELLA was located out by the Ithaca airport). While IBM was almost literally the pillar of innovation (just a few years later, scientists at IBM would spell “IBM” in xenon atoms), there was also a great deal of distrust given the tenor of the time. The thought of a globally dominant company, a computer company, was uncomfortable to those familiar with fellow Cornellian Kurt Vonnegut’s omnipresent RAMJAC.

saupload_ibm_pc_percon_83

Then the Apple commercial ran. It was truly mesmerizing (far more so to me than the Super Bowl itself). It took me about one second to stitch together all that was going on right before my eyes.

Apple

Apple was introducing a new computer.

It was going to be a lot different from the IBM PC.

The world was not going to be like 1984.

And most importantly, the computer I had just been playing with weeks earlier was, in fact, the Apple Macintosh.

I was so excited to head back to the terminal rooms and talk about this with my fellow STOs and to use the new Apple Macintosh.

Returning

When I returned to the terminal room in Upson, Macs had already started to replace the VT100s. First there were just a couple; then, over time, terminal access moved to an emulation program on Macs (rumor had it that the Macs were actually cheaper than terminals!).

128k Mac

My Friday night shift was transformed. Several Macs were added to the lab. I had to institute a waiting list. Soon only the stalwarts were using the PCs. I started to see a whole new crowd on those lonely computer nights.

MacpaintWP

I saw seniors in Arts & Sciences preparing resumes and printing them on the ImageWriter (whose ribbon, mercifully, was significantly easier to change, which I had to do several times a night). Those in the Greek system came by for help making signs for parties. Students discovered their talent for MacPaint pixel art and FatBits. All over campus, signs changed overnight from misaligned stencils to ImageWriter printouts testing the limits of font faces per page.

Imagewriter

sample_printout_macintosh_dot_matrix-printer1

I have to admit, however, that I spent an inordinate amount of time attempting to recover documents lost to memory-corruption bugs in the original MacWrite. The STOs all developed a great troubleshooting script, and signs were posted with all sorts of guesses (no more than 4 fonts per document, keep documents under 5 pages, don’t use too many carriage returns). We anxiously awaited updates, and students would often wait in line to update their “MacWrite disks” when word spread of a new version (hey, there was no Internet download).

In short order, Macintosh swept across campus. Cornell, along with many schools, was part of Apple’s genius campaign on campuses. While I still had my Osborne, I was using Macintosh more often than not.

macwriteLarge

Completing College

The next couple of years saw an explosion of use of Macintosh across campus. The next incoming class saw many students purchasing a Mac at the start of college. Research funds were buying Macs. Everywhere you looked they were popping up on desks. There was even a dedicated store just off campus that sold and serviced Macs. People were changing their office furniture and layout to support using a mouse. Computer labs were being rearranged to support local printers and mice. The campus store started stocking floppy disks, which became a requirement for most every class.

Document creation had moved from typewriters and limited use of WordPerfect to near-ubiquitous use of MacWrite by final exams that spring. Later, Microsoft Mac Word, which proved far more robust, became the standard.

Mac Word 1.0

The Hotel School’s business students were using Microsoft Mac Excel almost immediately.

via pingdom and Mike Koss

The Chemistry department made a wholesale switch to Macintosh. The software was a huge driver of this. It is hard to explain how difficult it was to prepare a chemistry journal article before Macintosh (the department employed a full-time molecular draftsman to prepare manuscripts). The introduction of ChemDraw was a turning point for publishing chemists (half my major was chemistry).

It was in the Chemistry department that I found a home for my fondness of Macintosh and an incredibly supportive faculty (especially Jon Clardy). The research group had a little of everything, including MS-DOS PCs with mice, which were quite a novelty. There were also Macs with external hard drives.

I also had access to MacApp and the tools (Lightspeed Pascal) to write my own Mac software. Until then all my programming had been on PCs (and mainframes, and Unix). I had spent two summers as an intern (at Martin Marietta, the same company where dBase programmer Wayne Ratliff had worked!) hacking around MS-DOS, writing utilities to do things that were as easy as drag and drop on a Mac or that just worked with MacWrite and Mac Excel. As fun as learning K&R C and INT 21h was, the Macintosh was calling.

thinks-lightspeed-pascal-10-2

My first project was porting a giant Fortran program (Molecular Mechanics) to the Mac. Surprisingly, it worked (perhaps equally surprising today is that a Fortran compiler for the Mac existed at all). It cemented the lab’s view that the Macs could also be for work, not just document creation. Next up, I just started exploring the visualizations possible on the Mac. Programming graphics was all new to me. Programming an object-oriented event loop seemed mysterious and indirect compared to INT 21h or stdio.
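The inversion is easier to show than to explain. Here is a toy contrast in Python, standing in for both styles; this is not actual MacApp or MS-DOS code. With stdio, the program drives and pulls input when it wants it; in an event-driven framework the system owns the loop, and your code is a set of handlers it calls:

    # stdio style: the program drives, pulling input when it wants it.
    def stdio_style():
        name = input("name? ")
        print("hello, " + name)

    # Event-driven style: the framework drives; your code is a set of
    # handlers the event loop invokes when something happens.
    def on_click(where):
        print("clicked at", where)

    HANDLERS = {"click": on_click}

    def event_loop(events):
        for kind, payload in events:  # in MacApp, the system owned this loop
            HANDLERS[kind](payload)

    event_loop([("click", (10, 20)), ("click", (30, 40))])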

But within a few hacking sessions (fairly novel to the chemistry department) the whole thing came together. Unlike all of the previous systems I had used, the elegance of the Mac was special. I felt like the more I used it, the more it all made sense. When I would bury myself in Unix systems programming, it seemed more like a series of things, tricks, you needed to know. Macintosh felt like a system. As I learned more, I felt like I could guess how new things would work. I felt like the bugs in my programs were my bugs, not things I had misunderstood.

Macintosh Revealed

The proof of this was that through the spring semester of my senior year I was able to write a program that visualized the periodic table of the elements using dozens of different variables. It was a way to explore the periodicity of the elements. I wrote routines for an X-Y plot, bar charts, and text tables, and the pièce de résistance was a 2.5-dimensional perspective view of the periodic table showing a single property (commonly used to illustrate the periodic nature of electron affinity). I had to ask a lot of friends who were taking computer graphics on SGIs for help! Still, not only had I been able to program another new OS (by then my 5th or 6th), but I had programmed a graphical user interface for the first time.
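The flat X-Y-plot version of that idea takes only a few lines of modern Python; the electron affinity values below are approximate and purely illustrative, and the real program was of course built with MacApp and Lightspeed Pascal rather than anything like this:

    import matplotlib.pyplot as plt

    # Approximate electron affinities (eV) for elements 1-18; zeros mark
    # elements with no stable anion. Illustrative, not reference data.
    elements = ["H", "He", "Li", "Be", "B", "C", "N", "O", "F", "Ne",
                "Na", "Mg", "Al", "Si", "P", "S", "Cl", "Ar"]
    affinity = [0.75, 0.0, 0.62, 0.0, 0.28, 1.26, 0.0, 1.46, 3.40, 0.0,
                0.55, 0.0, 0.43, 1.39, 0.75, 2.08, 3.61, 0.0]

    plt.plot(range(1, 19), affinity, marker="o")
    plt.xticks(range(1, 19), elements)
    plt.xlabel("atomic number")
    plt.ylabel("electron affinity (eV)")
    plt.title("Periodicity of electron affinity")
    plt.show()

The peaks at the halogens are the periodicity that the original 2.5-D perspective view made vivid.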

MacMendeleev was born.

MacMendeleev

The geek in all of us has that special moment when at once you feel empowered and marvel at a system. That day in the spring of 1987 when I rendered a perspective drawing from my own code on a system that I had seen go from a chained down plywood box to ubiquity across campus was magical. Even my final report for the project was, to me, a work of art.

The geek in all of us has that special moment when at once you feel empowered and marvel at a system.

It wasn’t just the programming that was possible. It wasn’t just the elegance and learnability of the system. It wasn’t even the ubiquity that the Macintosh achieved on campus. It was all of those. Most of all it represented a tool that allowed me to realize some of my own potential. I was awful at Chemistry. Yet with Macintosh I was able to contribute to the department and probably showed a professor or two that in spite of my lack of actual chemistry aptitude I could do something (and dang, my lab reports looked amazing!). I was, arguably, able to learn some chemistry.

I achieved with Macintosh what became one of the most important building blocks in my education.

I’m forever thankful for the empowerment that came from using a “bicycle of the mind”.


What came next

Graduate school diverged in terms of computing. We used DEC VMS, though Smalltalk was our research platform. So much of the elegance of the Macintosh OS (MacApp, and Lisa before that) became much clearer to me as I studied the nuances of object-oriented programming.

I used my Macintosh II to write papers, make diagrams, and remote into the MicroVAX from my desk. I also used Macintosh to create a resume for Microsoft with a copy of Microsoft Word I had won at an ACM conference for my work on MacMendeleev.

I also used Macintosh to create a resume for Microsoft with a copy of Microsoft Word…

When I made it to Microsoft I found that a great many people shared the same experience. I met folks who had worked on Mac Excel and had also had Macs in boxes chained to tables. I met folks who wrote some of those Macintosh programs I used in college. So many of the folks on the “Apps” team I was hired into that year grew up on that unique mixture of Mac and Unix (Microsoft used Xenix back then). We all became more than converts to MS-DOS and Windows (3.0 was being developed when I landed at Microsoft).

There’s no doubt our collective experiences contribute to the products we each work on. Wikipedia even documents the influence of MacApp on MFC (my first product contribution), which was by design (and also by design was where not to be influenced). It is wonderful to think that through tools like MFC and Visual Basic along with ubiquitous computing, Windows brought to so many young programmers that same feeling of mastery and empowerment that I felt when I first used Macintosh.

Fast-forwarding, I can’t help but think about today’s college students, who grew up hacking the web and have recently been exposed as programmers to mobile platforms. The web to them is like the Atari was to me: programmable, understandable, and fun. The ability to take your ideas, connect them to the Internet, touch your creation, and make your own experience must feel like what building a Macintosh program from scratch felt like to me. The unique combination of mastery of the system, elegance of design, and empowerment is what separates a technology from a movement.

Macintosh certainly changed my path in life…

For me, Macintosh was an early contributor to my learning, skills, and ultimately my self-confidence. Macintosh certainly changed my professional path in life. For sure, 1984 was not at all like 1984 for me.

Happy Anniversary

Yes, of course I’m a PC (and definitely a Surface).  Nothing contributed more to my professional life than the PC!

–Steven Sinofsky (@stevesi, stevesi@mac.com)

PS: How far have we come? Check out this Computer Chronicles from 1985 where the new Macintosh is discussed.

Written by Steven Sinofsky

January 23, 2014 at 7:30 am

Posted in posts
