Learning by Shipping

products, development, management…

Africa’s Mobile-Sun Revolution

Solar panels are a part of the landscape in southern Africa. Here, two boys ham it up for the camera while playing around a large form-factor panel.

The transformative potential for mobile communications is upon us in every aspect of life. In the developing world where infrastructure of all types is at a premium, few question the potential for mobile, but many wonder whether it should be a priority.

Note: This post originally appeared in Re/code on April 29, 2015.

Many years of visiting the developing world have taught me that, given the tools, people — including the very poor — will quickly and easily put them to uses that exceed even the well-intentioned ideas of the developed world. Poor people want to and can do everything people of means can do; they just don’t have the money.

Previously, I’ve written about the rise of ubiquitous mobile payments across Africa, and the work to bring free high-speed Wi-Fi to the settlements of South Africa. One thing has been missing, though, and that is access to reliable sources of power to keep these mobile phones and tablets running. In just a short time — less than a year — solar panels have become a commonplace sight in one relatively poor village I recently returned to. I think this is a trend worth noting.

It is also the sort of disruptive trend we are getting used to seeing in developing markets. The market need and context lead to solutions that leapfrog what we created over many years in the developed world. Wireless phones skipped over landlines. Smartphones skipped over the PC. Mobile banking skipped over plastic cards and banks.

Could it be that solar power, potentially combined with large-scale batteries, will be the “grid” in developing markets, perhaps even in the near future? I think so. At the very least, solar will prove enormously useful and beneficial while requiring effectively zero investment in infrastructure to dramatically improve lives. Solar combined with small-scale appliances, starting with mobile phones, provides an enormous increase in standard of living.

Infrastructure history

Historically, being poor in a developing economy put you at the end of a long chain of government and international NGO assistance when it comes to infrastructure. While people can pull together the makings of shelter and food along with subsistence labor or farming, access to what we in the developed world consider basic rights continues to be a remarkable challenge.

For the past 50 or more years, global organizations have been orchestrating “top-down” approaches to building infrastructure: roads, water, sewage and housing. There have been convincing successes in many of these areas. The recent UN Millennium Development Goals report demonstrates that the percentage of humans living in extreme poverty has decreased by almost half. In 1990, almost half the population in developing regions lived on less than $1.25 a day, the common definition of extreme poverty. This rate dropped to 22 percent by 2010, reducing the number of people living in extreme poverty by 700 million.

Nevertheless, billions of people live every day without access to basic infrastructure needs. Yet they continue to thrive, grow and improve their lives.

This UN Millennium Development Goals infographic shows the dramatic decline in percentage of people living under extreme poverty. (United Nations)

While the efforts to introduce major infrastructure will continue, the pace can sometimes be slower than either the people would like or what those of us in the developed world believe should be “acceptable.”

A village I know of, about 10 miles outside a major city in southern Africa, started from a patch of land contributed by the government about six years ago, and grew to a thriving neighborhood of 400 single-family homes. These homes are multi-room, secure, cement structures with indoor connections to sewage. The families of these homes earn about $100-$200 a month in a wide range of jobs. By way of comparison, these homes cost under $10,000 to build.

While the roads are unpaved, this is hardly noticed. But one thing that has become much more noticeable of late is the lack of electrical power. Historically, this has not been nearly as problematic as we in the developed world might think. Their economy and jobs were tuned to daylight hours and work that made use of the energy sources available.

Solar-powered streetlights have been installed recently — here under construction — increasing public safety and providing light to the community.

Several finished homes around a nearly complete streetlight installation that also illuminates a drinking-water well, enabling nighttime access to water.

In an effort to bring additional safety to the village, the citizens worked with local government to install solar “street lights,” such as the one pictured here. This simple development began to change nighttime life for residents. These were installed beginning about nine months ago (as seen in the first photo, with a nearly complete installation in the second).

Historically, this type of infrastructure, street lighting, would come after a connection to the electrical grid and development of roads. Solar power has made this “reordering” possible and welcome. Lighting streets is great, but that leads to more demands for power.

Mobile phones, the new infrastructure

These residents are pretty well off, even on relatively low wages that are three to five times the extreme poverty level. While they lack electricity and roads, they are safe, secure and sheltered.

One of the contributors to the improved standard of living has been mobile phones. Over the past couple of years, mobile phone penetration in this village has reached essentially 100 percent of households, and most adults have a mobile.

The use of mobiles is not a luxury, but essential to daily life. Those who commute into the city to sell or buy supplies can check on potential sales or availability via mobile.

Families can stay connected even when one goes far away for a good job or better work. Safety can be maintained by a “neighborhood watch” system powered by mobile. Students can access additional resources or teacher help via mobile. Of course, people love to use their phones to access the latest World Cup soccer results or listen to religious broadcasts.

All of these uses and infinitely more were developed in a truly bottom-up approach. There were no courses, no tutorials, no NGOs showing up to “deploy” phones or to train people. Access to the tools of communication and information as a platform was put to uses that surprise even the most tech-savvy (i.e., me). Mobile is so beneficial and so easy to access that it has quickly become ubiquitous and essential.

Last year, when I wrote for Re/code about mobile banking and free Wi-Fi, I received a fair number of comments and emails saying how this seemed like an unnecessary luxury, and that smartphones were being pushed on people who couldn’t afford the minutes or kilobytes, or would much rather have better access to water or toilets. The truth is, when you talk to people who live here, the priority unquestionably goes to mobile communications and information; they say so in their own words, time and time again.

Fortunately, because of the openness most governments have had to investments from multinational telecoms such as MTN, Airtel and Orange, most cities and suburban areas of the continent are well covered by 2G and often 3G connectivity. The rates are competitive across carriers, and many people carry multiple SIMs to arbitrage those rates, since saving pennies matters (calls within a carrier network are often cheaper than across carriers).

Mobile powered by solar

There has been one problem, though, and that is keeping phones charged. The more people use their phones (day and night), the more this has become a problem. While many of us spend time searching for outlets, what do you do when the nearest outlet might be a few miles away?

It is not uncommon to see one outlet shared by many members of a community. This outlet is in the community center, which is one of a small number of grid-connected structures. Note the variety of feature phones.

When there is an outlet, you often see people grouped around it, or one person volunteers to rotate phones through the charging cycles. Above is a picture of an outlet in the one building connected to power, the community center. This is a pretty common sight.

Small portable solar panels can serve as “permanent” power sources when roof-mounted. You can see the extension wire drawn through the window.

An amazing transformation is taking place, and that is the rise of solar. What we might see as an exotic or luxury form of power for hikers and backpackers, or something reasonably well-off people use to augment their home power, has become as common a sight as the water pump.

The plethora of phones sharing a single outlet has been replaced by the portable solar panel out in front of every single home.

An interesting confluence of two factors has brought solar so quickly and cheaply to these people. First, as we all know, China has been investing massively in solar technology, solar panels and solar-powered devices. That has brought choice and low prices, as one would expect. In seeking growth opportunities, Chinese companies are looking to the vast market opportunity in Africa, where people are still not connected to a grid. There’s a full supply chain of innovation, from the solar panels through to integrated appliances with batteries.

Second, China has a significant presence in many African countries, and is contributing a massive amount of support in dollars and people to build out more traditional infrastructure, particularly transportation. In fact, many Chinese immigrants in-country on work projects become the first customers of some of these solar innovations.

People are exposed to low-cost, low-power portable solar panels and they are “hooked.” In fact, you can now see many small stores that sell 100W panels for the basics of charging phones. You can see solar for sale in the image below. I left the whole store in the photo just to offer a bit of culture. The second photo shows the solar panels for sale.

A typical storefront in this community, selling a variety of important products for the home. Solar panels are for sale, as indicated by the signs in the upper left.

Detail from the storefront showing the solar panels for sale. There is a vibrant after-market for panels, as they often change hands, depending on the capital needs of a family.

Like many significant investments, there’s a vibrant market in both used panels and in the repair and maintenance of panels and wiring. Solar is a budding industry, for sure.

But people want more than to charge their phones once they see the “power” of solar. Here is where ever-improving and ever-smaller solar panels, LED lights, lithium batteries and more are coming together to transform the power consumption landscape and the very definition of “home appliances.”

In the developed world, we are transitioning from incandescent and fluorescent lighting at a rapid pace (in California, new construction effectively requires LED). LED lights, in addition to lasting “forever,” also consume 80 percent less power. Combining LED lights, low-cost rechargeable batteries and solar, you can all of a sudden light up a home at night. Econet is one of the largest mobile carriers/companies in Africa, and has many other ventures that improve the lives of people.

Here are a few Econet-developed LED lanterns recharging outside a home. This person has three lights, and shares or rents them to neighbors as a business. Not only are these cheaper and more durable than a fossil-fuel-based lantern, they have no ongoing cost, since they are powered by the sun.
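
As a rough illustration of why this combination works, here is a minimal back-of-envelope sketch (written in Kotlin purely for illustration). The panel size, charging losses and lamp wattage are my own assumed numbers, not figures from Econet or from any product described here; the point is only that a small panel can cover phone charging and several hours of LED light each day.

    fun main() {
        // All numbers below are illustrative assumptions, not measurements.
        val panelWatts = 20.0        // a small portable or rooftop panel
        val sunHours = 5.0           // usable full-sun hours per day
        val chargeEfficiency = 0.7   // losses charging a small battery
        val storedWh = panelWatts * sunHours * chargeEfficiency   // ~70 Wh per day

        val phoneChargeWh = 10.0     // roughly one full phone charge
        val ledLampWatts = 2.0       // one small LED lantern

        val whLeftForLight = storedWh - 2 * phoneChargeWh   // after charging two phones
        val hoursOfLight = whLeftForLight / ledLampWatts

        println("Energy stored per day: %.0f Wh".format(storedWh))
        println("LED light after two phone charges: %.0f hours".format(hoursOfLight))
    }

Even with these conservative assumptions, one small panel covers a couple of phone charges and well over a full evening of light, which is why a single panel per household (or shared among a few) goes so far.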

Several modern, portable, solar-powered LED lamps sold at very low cost by mobile provider Econet. The owner rents these lamps out for short-term use.

With China bringing down the cost of larger panels, and the abundance of trade between Africa and China, there’s an explosion in slightly larger solar panels. In fact, many of the homes I saw just nine months ago now commonly sport a large two-by-four-foot solar panel on the roof or strategically positioned for maximal use.

These two boys were hanging out when I walked by, and quickly chose a formal pose in front of their home, which has a large permanent solar panel mounted on the roof.

Panels are often on the ground, because they move between homes where the investment for the panel has been shared by a couple of families. This might seem inefficient or odd to many, but the developing world is the master of the shared economy. Many might be familiar with the founding story of Lyft, which grew out of Zimride, a ride-sharing company inspired by shared van rides in Zimbabwe.

A trio of medium-sized solar panels strategically placed outside the doors of several homes sharing a courtyard.

Just the first step

We are just at the start of this next revolution in improving the lives of people in developing economies using solar power.

Three sets of advances will contribute to improved standards of living in terms of economics, safety and comfort.

First, more and more battery-operated appliances will make their way into the world marketplace. At CES this year, we saw battery-operated developed-market products for everything from vacuum cleaners to stoves. Once something is battery-powered, it can be easily charged. These innovations will make their way to appliances that are useful in the context of the developing world, as we have seen with home lighting. The improvement in batteries in both cost and capacity (and weight) will drive major changes in appliances across all markets.

Second, the price of solar panels will continue to drop, and they will become commonplace as the next infrastructure requirement. This will then make possible all sorts of improvements in schools, work and safety. One thing that can then happen is an improvement in communication that comes from high-speed Wi-Fi throughout villages like the one described here. Solar can power point-to-point connectivity or even a satellite uplink. Obviously, the cost of connectivity itself will be something to deal with, but we’ve already seen how people adapt their needs and use of cash flow when something provides an extremely high benefit. It is far more likely that Wi-Fi will be built out before broad-based 3G or 4G coverage and upgrades can happen.

Third, I would not be surprised to see innovations in battery storage make their way to the developing markets long before they are ubiquitous in the developed markets.

A full-sized “roof” solar panel leaning up against a clothesline. Often roof-mounting panels is structurally challenging, so it is not uncommon to see these larger panels placed nearby on the ground.

Developed markets will value batteries for backup in case of a loss of power and for storing solar power (rather than feeding it back to the grid). But in developing markets, a battery pack could provide a home with continuous, on-demand power, as well as nighttime power allowing for studying, businesses and more. This is transformative, as people can then begin to operate outside of daylight hours and to use a broader range of appliances that can save time, increase safety in the home and improve quality of life.

Our industry is all about mobile and cloud. With the arrival of low-cost solar, it’s no surprise that the revolution taking place in developing markets these days is rooted in mobile-sun.

Steven Sinofsky (@stevesi)

Photos by the author unless otherwise noted.

Written by Steven Sinofsky

May 6, 2015 at 9:30 pm

Startups aren’t features (of products or companies)

Companies often pay very close attention to new products from startups as they launch, and ponder their impact on their own at-scale, mainstream work. Almost all of the time the competitive risk is deemed minimal. Then one day the impact is significant.

In fact, up until that point, most pundits and observers likely said that the startup would get overrun or crushed by a big company in the adjacent space. By then it is often too late for the incumbent, and what was a product challenge now looks like an opportunity to take on the challenges of venture integration.

Why is this dynamic so often repeated? Why does the advantage tilt to startups when it comes to innovation, particularly innovation that disrupts the traditional category definition or go to market of a product?

Much of the challenge described here is rooted in how we discuss technology disruption. Incumbents are faced with “disruption” on a daily basis and from all constituencies. To a great degree, as an incumbent, the sky is always falling. For every product that truly disrupts, there are likely hundreds of products, technologies, marketing campaigns, pricing strategies and more that some were certain would be the last straw for an incumbent.

Because statistically new ideas are not likely to disrupt and new companies are likely to fail, incumbents become experts at defining away the challenges and risks posed by a new entrant into the market. Incumbents view wild swings in strategy or execution as a much higher risk than the 1-in-100 chance of a new technology upending the near-term business. Factor in any reasonable timeline, and the incumbent has every incentive to side with the statistics.

To answer “why startups aren’t features,” this post looks at the three elements of a startup competing with an incumbent: the incumbent’s reaction, the challenges faced by the incumbent, and the advantages of the startup.


The incumbent’s reaction

When a startup enters a space thought (by the incumbent or conventional wisdom) to be occupied by an incumbent, there is a series of reasonably predictable reactions. The more entrenched the incumbent, the more reasoned and bulletproof the logic appears to be. Remember, most technologies fail to take hold and most startups don’t grow into significant competitors. I’ve personally reacted to this situation as both a startup and as the incumbent.

Doesn’t solve a problem customers have. The first reaction is to just declare that a product doesn’t solve a customer problem. This is sort of the ultimate “in the bubble” reaction: the incumbent’s existing customers almost certainly don’t have the specific problem being solved, because they too live in the very same context. In a world where enterprises were comfortable sending PPT/PDFs over dedicated lines to replicated file servers, web technologies didn’t solve a problem anyone had (this is a real example I experienced in evangelizing web technology).

Just a feature. Another common reaction to a startup is that whatever it is doing is a feature of an existing product. Perhaps the most famous of all of these was Steve Jobs declaring Dropbox to be “a feature, not a product.” Across the spectrum from enterprise to consumer this reaction is routine. Every major communication service, for example, enabled the exchange of photos (AIM, Messenger, MMS, Facebook, and more). Yet, from Instagram to Snapchat, some incredibly innovative and valuable startups have been created that to some do nothing more than slight variations on sharing photos. In collaboration, email, app development, storage and more, enterprise startups continue to innovate in ways that solve problems in uniquely valuable ways, all while incumbents feel like they “already do that.” So while something might be a feature of an existing product, it is almost certainly not a feature exactly like one in an existing product, or likely to become one.

Only a month’s work. One asset incumbents have is an existing engineering infrastructure and user experience. So when a new “feature” becomes interesting in the marketplace and discussions turn to “getting something done,” the conclusion is usually that the work is about a month. Often this is based on an estimate of how much effort the startup put into the work. However, the incumbent has all sorts of constraints that turn that month into many months: globalization, code reviews, security audits, training customer support, developing marketing plans, enterprise customer roadmaps, not to mention all the coordination and scheduling adjustments. On top of all of that, we all know that it is far easier to add a new feature to a new code base than to add something to a large and complex code base. So rarely is something a month’s work in reality.


Challenges for the incumbent

One thing worth doing as a startup (or as a customer of an incumbent) is considering why the challenges continue even if the incumbent spins up an effort to compete.

Just one feature. If you take at face value that the startup is doing just a feature then it is almost certainly the case that it will be packaged and communicated as such. The feature will get implemented as an add-on, an extra click or checkbox, and communicated to customers as part of the existing materials. In other words, the feature is an objection handler.

Takes a long time to integrate. At the enterprise level, the most critical part of any new feature or innovation is how it integrates with existing efforts. In that regard, the early feedback about the execution will always push for more integration with existing solutions. This will slow down the release of the efforts and tend to pile on more and more engineering work that is outside the domain of what the competitor is doing.

Doesn’t fit with broad value proposition. The other side of “just one feature” is that the go to market execution sees the new feature as somehow conflicting with the existing value proposition. This means that while people seem to be seeing great value in a solution, the very existence of the solution runs counter to the core value proposition of the existing products. If you think about all those photo-sharing applications, the whole idea was to collect all your photos and enable you to later share them or order prints or mugs. Along comes disappearing photos, and that doesn’t fit at all with what you do. At the enterprise level, consider how the enterprise world was all about compliance and containing information, while faced with file sharing that is all about going beyond the firewall. Faced with reconciling these positioning elements, the incumbent will choose to sell against the startup’s scenario rather than embrace it.


Startup advantages

Startups also have some advantages in this dynamic that are readily exploitable. Most of the time when a new idea is taking hold, one can see how the startup is maximizing the value it brings along one of these dimensions.

Depth versus breadth. Because the incumbent often views something new as a feature of an existing product, the startup has an opportunity to innovate much more deeply in the space. When a scenario becomes interesting, the flywheel of innovation that comes from usage creates many opportunities to improve the scenario. So while the early days might look like a feature, a startup is committed to the full depth of a scenario and only that scenario. It doesn’t have any pressure to maintain something that already exists or spend energy elsewhere. In a world where customers want the app to offer a full-stack solution or expect a tool to complete the scenario without integrating something else, this turns out to be a huge advantage.

Single release effort. The startup is focused on one line of development. There’s no coordination, no schedules to align, no longer term marketing plans to reconcile and so on. Incumbents will often try to change plans but more often than not the reactions are in whitepapers (for enterprise) or beta releases (for consumer). While it might seem obvious, this is where the clarity, focus, and scale of the startup can be most advantageous.

Clear and recognizable value proposition/identity. The biggest challenge incumbents face when adding a new capability to their product or product line is where to put it so it will get noticed. There’s already enormous surface area in the product, the marketing, and also in the business/pricing. Even the basics of telling customers that you’ve done something new are difficult, and when calling attention to a specific feature, it often ends up as a supporting point under the third pillar. Ironically, those arguing to compete more directly are often faced with internal pressures that amount to “don’t validate the competitor that much.” This means even if the feature exists in the incumbent’s product, it is probably really difficult to know that and equally difficult to find. The startup perspective is that the company comes to stand for the entire end-to-end scenario, and over time, when customers’ needs turn to that feature or scenario, there is total clarity in where to get the app or service.

Even with all of these challenges, this dynamic continues: initially dismissing startup products, later attempting to build what they do, and in general struggling to react to the inherent advantages of a startup. One needs to look long and hard for a story where an incumbent organically competed and won against a startup in a category or feature area.

Secret Weapon

More often than not the new categories of products come about because there is a change in the computing landscape at a fundamental level. This change can be the business model, for example the change to software as a service. It could also be the architecture, such as a move to cloud. There could also be a discontinuity in the core computing platform, such as the switch to graphical interface, the web, or mobile.

There’s a more subtle change, which is when an underlying technology change is simply too difficult for incumbents to do in an additive fashion. The best way to think about this is if an incumbent has products in many spaces, but a new product arises that contains a little bit of two of the incumbent’s products. In order to effectively compete, the incumbent first must go through a process of deciding which team takes the lead in competing. Then they must address innovator’s dilemma challenges and allocate resources in this new area. Then they must execute both the technology plans and go to market plans. While all of this is happening, the startup, unburdened by any of these steps, races ahead, creating a more robust and full-featured solution.

At first this might seem a bit crazy. As you think about it though, modern software is almost always a combination of widely reused elements: messaging, communicating, editing, rendering, photos, identity, storage, API / customization, payments, markets, and so on. Most new products represent bundles or mash-ups of these ingredients. The secret sauce is the precise choice of elements and of course the execution. Few startups choose to compete head-on with existing products. As we know, the next big thing is not a reimplementation of the current big thing.

The secret weapon in startups competing with large scale incumbents is to create a product that spans the engineering organization, takes a counter-intuitive architectural approach, or lands in the middle of the different elements of a go to market strategy. While it might sound like a master plan to do this on purpose, it is amazing how often entrepreneurs simply see the need for new products as a blending of existing solutions, a revisiting of legacy architectural assumptions, and/or emphasis on different parts of the solution.

—Steven Sinofsky (@stevesi)

Written by Steven Sinofsky

November 17, 2014 at 12:00 pm


Mobile OS Paradigm

Cycle of nature of work, capabilities of tools, architecture of platform.

Are tablets the next big thing, a saturated market (already), dead (!), or just in a lull? The debate continues while the sales of tablets continue to outpace laptops and will soon overtake all PCs (of all form factors and OS). What is really going on is an architectural transformation—the architecture that defined the PC is being eclipsed by the mobile OS architecture.

The controversy of this dynamic rests with the disruptive nature—the things that were easy to do with a PC architecture that are hard or impossible to do with a mobile OS, as well as the things in a mobile OS that make traditional PCs seem much easier. Legacy app compatibility, software required for whole professions, input preferences, peripherals, and more are all part of this. All of these are also rapidly changing as software evolves, scenarios adapt, and with that what is really important changes.

Previous posts have discussed the changing nature of work and the new capabilities of tools. This post details the architecture of the platform. Together these three form an innovation cycle—each feeding into and from each other, driving the overall change in the computing landscape we see today.

The fundamental shift in the OS is really key to all of this. For all the discussed negatives the mobile OS architecture brings to the near term, it is also an essential and inescapable transition. Often during these transitions we focus in the near term on the visible differences and miss the underlying nature of the change.

During the transition from mini to PC, the low price and low performance created a price/performance gap that the minis thought they would exploit. Yet the volume, architectural openness, and rapid improvement in multi-vendor tools (and more) contributed to a rapid acceleration that the minis could not match.

During the transition from character-based to GUI-based PCs, many focused on the expense of extra peripherals such as graphics cards and mice, the requirement for more memory and MIPS, not to mention the performance implications of the new interface in terms of training and productivity. Yet Moore’s law, far more robust peripheral support (printers and drivers), and the ability to draw on multi-app scenarios (clipboard and more) transformed computing in ways character-based systems could not.

The same could be said about the transition to internetworking with browsers. The point is that the ancillary benefits of these architectural transitions are often overlooked while the dialog primarily focuses on the immediate and visible changes in the platform and experience. Sometimes the changes are mitigated over time (i.e. adding keyboard shortcuts to GUI or the evolution of the PC to long file names and real multi-tasking and virtual memory). Other times the changes become the new paradigm as new customers and new scenarios dominate (i.e. mouse, color, networking).

The transition to the mobile OS platforms is following this same pattern. For all the debates about touch versus keyboard, screen-size, vertical integration, or full-screen apps, there are fundamental shifts in the underlying implementation of the operating system that are here to stay and have transformed computing.

We are fortunate during this transition because we first experienced this with phones that we all love and use (more than any other device) so the changes are less of a disconnect with existing behavior, but that doesn’t reduce the challenge for some or even the debate.

Mobile OS paradigm

The mobile OS as defined by Android, iOS, Windows RT, Chrome OS, Windows Phone, and others is a very different architecture from the PC as envisioned by Windows 7/8, Mac OS X, and desktop Linux. The architecture includes a number of key innovations that, taken together, define the new paradigm.

  1. ARM. ARM architecture for mobile provides a different view of the “processor”: SoC, multi-vendor, simpler, lower power consumption, fanless, rich graphics, connectivity, sensors, and more. All of these are packaged in a much lower cost way. I am decidedly not singling out Intel/AMD about this change, but the product is fundamentally different than even Intel’s SoCs and business approach. ARM is also incompatible with x86 instructions which means, even virtualized, the existing base of software does not run, which turns out to be an asset during this change (the way OS/360 and VMS didn’t run on PCs).
  2. Security. At the heart of mobile is a more secure platform. It is not more secure because there are fewer pointers in the implementation or fewer APIs, but more secure because apps run with a different notion of what they can and cannot do, and there is simply no way to get apps on the device that can violate those rules (other than for developers, of course). There’s a full kernel there, but you cannot just write your own kernel-mode drivers to do whatever you want. Security is a race, of course, and so more socially engineered, password-stealing, packet-sniffing, phone-home evil apps will no doubt make their way to mobile, but you won’t see drive-by buffer-overrun attacks take over your device, keystroke loggers, or apps that steal other apps’ data. (A minimal sketch of this permission model appears after this list.)
  3. Quality over time and telemetry. We are all familiar with the way PCs (and to a lesser but non-zero degree Macs) decay over time or get into states where only a reformat or re-imaging will do. Fragility of the PC architecture in this regard is directly correlated with the openness and so very hard to defend against, even among the most diligent enthusiasts (myself included). The mobile OS is designed from the ground up with a level of isolation between the OS and apps and between apps that all but guarantee the device will continue to run and perform the way it did on the first day. When performance does take a turn for the worse, there’s ongoing telemetry that can easily point to the errant/causal app and removing it returns things to that baseline level of excellence.
  4. App store model. The app store model provides for both a full catalog of apps easily searched and a known/reviewed source of apps that adhere to some (vendor-specified) level of standards. While vendors are taking different approaches to the level of consistency and enforcement, it is fair to say this approach offers so many advantages. Even in the event of a failure of the review/approval process, apps can be revoked if they prove to be malicious in intent or fixed if there was an engineering mistake. In addition, the centralized reviews provide a level of app telemetry that has previously not existed. For developers and consumers, the uniform terms and licensing of apps and business models are significant improvements (though they come with changes in how things operate).
  5. All day battery life. All day battery life has been a goal of devices since the first portable/battery PCs. The power draw of x86 chipsets (including controllers and memory), the reliability challenges of standby power cycles, and more have made this incredibly difficult to reliably “add on” to the open PC architecture. Because of the need for device drivers, security software, and more the likelihood that a single install or peripheral will dramatically change the power profile of a traditional device is commonplace. The “closed” nature of a mobile OS along with the process/app model make it possible to have all day battery life regardless of what is thrown at it.
  6. Always connected. A modern mobile OS is designed to be always connected to a variety of networks, most importantly the WWAN. This is a capability from the chipset through the OS. This connectivity is not just an alternative for networking, but built into the assumptions of the networking stack, the process model, the app model, and the user model. It is ironic that the PC architecture, for which connectivity was optional, is still worse at dealing with intermittent connectivity than mobile, which has always had less consistent connections than LAN or Wi-Fi. The need to handle the constant change in connectivity drove a different architecture. In addition, the ability to run with essentially no power draw and the screen off while “waking up” instantly for inbound traffic is a core capability.
  7. Always up to date apps/OS. Today’s PC OSes all have updaters and connectivity to repositories from their vendors, but from the start the modern mobile OS is designed to be constantly updated at both the app and OS from one central location (even if the two updates are handled differently). We are in a little bit of an intermediate state because on PCs there are some apps (like Chrome and Firefox, and security patches on Windows) that update without prompts by default yet on mobile we still see some notifications for action. I suspect in short order we will see uniform and seamless, but transparent, updates.
  8. Cloud-centric/stateless. For decades people have had all sorts of tricks to try to maintain a stateless PC: the “M” drive, data drives or partitions, roaming profiles, boot from server, VM or VDI, even routine re-imaging, etc. None of these worked reliably, and all had the same core problem: whatever could go wrong when you weren’t using them could still go wrong, and then your one good copy was broken everywhere. The mobile OS is designed from the start to have state and data in the cloud, and given the isolation, separation, and kernel architecture, you can reliably restore your device, often in minutes.
  9. Touch. Touch is clearly the most visible and most challenging transition. Designing the core of the OS and app model for touch first, but with support for keyboards, has fundamentally altered the nature of how we expect to interact with devices. No one can dispute that for existing workloads on existing software, mouse and keyboard are superior and will remain so (just as we saw in the transition from mainframe to mini, CUI to GUI, client/server to web, etc.). However, as the base of software and users grows, the reality is that things will change—work will change, apps will change, and thus work products will change, such that touch-first will continue to rise. My vote is that the modern “laptop” for business will continue to be large-screen tablets with keyboards (just as the original iPad indicated). The above value propositions matter even more to today’s mobile information worker, as evidenced by the typical airport waiting area or hotel lobby lounge. I remain certain that innovation will continue to fill in the holes that currently exist in the mobile OS and tablets when it comes to keyboards. Software will continue to evolve and change the nature of precision pointing, making it something you need only for PC-only scenarios.
  10. Enterprise management. Even in the most tightly managed environment, the business PC demonstrates the challenges of the architecture. Enterprise control on a mobile OS is designed to be a state-management system, not a compute-based approach. When you use a managed mobile device, enterprise management is about controlling access to the device and some set of capabilities (policies), but not about running arbitrary code and consuming arbitrary system resources. The notion that you might type your PIN or password into your mobile device and initiate a full scan of your storage and install an arbitrary amount of software before you can answer a call is not something we will see on a modern mobile OS. So many of the previous items in the list have been seen as challenges by enterprise IT, and somewhat ironically, the tools developed to diagnose and mitigate them have only deepened the challenges for the PC. With mobile storage deeply encrypted, VPN access to enterprise resources, and cloud data that never lands on your device, there are new ways to think about “device management.”
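
To make the second point above more concrete, here is a minimal sketch of the permission side of the mobile sandbox, using the Android runtime-permission APIs (ContextCompat and ActivityCompat from androidx.core). It is only an illustration of the general model, in which an app must ask the OS for a protected capability and handle denial, not a description of any specific platform release discussed in this post.

    import android.Manifest
    import android.app.Activity
    import android.content.pm.PackageManager
    import android.os.Bundle
    import androidx.core.app.ActivityCompat
    import androidx.core.content.ContextCompat

    // Illustrative only: a sandboxed app cannot simply take a capability such as
    // the camera. It must hold a permission the user granted, or request it.
    class CameraActivity : Activity() {
        private val requestCamera = 1

        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            val granted = ContextCompat.checkSelfPermission(
                this, Manifest.permission.CAMERA
            ) == PackageManager.PERMISSION_GRANTED
            if (!granted) {
                // The OS, not the app, shows the prompt and records the decision.
                ActivityCompat.requestPermissions(
                    this, arrayOf(Manifest.permission.CAMERA), requestCamera
                )
            }
        }
    }

The contrast with the PC model is that the platform, not the application, is the arbiter: there is no equivalent of installing a kernel-mode driver or background agent that quietly grants itself access.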

Each of these is fundamental to the shift to the mobile OS. Many other platform features are also significantly improved, such as accessibility, global language support, and even the clipboard and printing.

What is important about these is how much of a break from the traditional PC model they are. It isn’t any one of these as much as the sum total that one must look at in terms of the transition.

Once one internalizes all these moving parts, it becomes clear why the emphasis on the newly architected OS and the break from past software and hardware is essential to deliver the benefits. These benefits are now what has come to be expected from a computing device.

While a person new to computing this year might totally understand a large-screen device with a keyboard for some tasks, it is not likely to make much sense to have to reboot, re-image, or edit the registry to remove malware, or to see why a device goes from x hours of battery life to x/2 hours just because some new app was installed. At some point the base expectations of a device change.

The mobile OS platforms we see today represent a new paradigm. This new paradigm is why you can have a super computer in your pocket or access to millions of apps that together “just work”.

–Steven Sinofsky (@stevesi)


Written by Steven Sinofsky

August 12, 2014 at 1:00 pm

You’re doing it wrong

Smartphones and tablets, along with apps connected to new cloud-computing platforms, are revolutionizing the workplace. We’re still early in this workplace transformation, and the tools so familiar to us will be around for quite some time. The leaders, managers, and organizations that adopt new tools sooner will quickly see how tools can drive cultural changes — developing products faster, with less bureaucracy and more focus on what’s important to the business.

If you’re trying to change how work is done, changing the tools and processes can be an eye-opening first step.

Check out a podcast on this topic hosted by Andreessen Horowitz’s Benedict Evans. Available on Soundcloud or on a16z.com.

Many of the companies I work with are creating new productivity tools, and every company starting now is using them as a first principle. Companies run their business on new software-as-a-service tools. The basics of email and calendaring infrastructure are built on the tools of the consumerization of IT. Communication and work products between members of the team and partners are using new tools that were developed from the ground up for sharing, collaboration and mobility.

Some of the exciting new tools for productivity that you can use today include: Quip, Evernote, Box and Box Notes, Dropbox, Slack, Hackpad, Asana, Pixxa Perspective, Haiku Deck, and more below. This list is by no means exhaustive, and new tools are showing up all the time. Some tools take familiar paradigms and pivot them for touch and mobile. Others are hybrids of existing tools that take a new view on how things can be more efficient, streamlined, or attuned to modern scenarios. All are easily used via trials for small groups and teams, even within large companies.

Tools drive cultural change

Tools have a critical yet subtle impact on how work gets done. Tools can come to define the work, as much as just making work more efficient. Early in the use of new tools there’s a combination of a huge spike in benefit, along with a temporary dip in productivity. Even with all the improvements, all tools over time can become a drag on productivity as the tools become the end, rather than the means to an end. This is just a natural evolution of systems and processes in organizations, and productivity tools are no exception. It is something to watch for as a team.

The spike comes from the new ways information is acquired, shared, created, analyzed and more. Back when the PC first entered the workplace, it was astounding to see the rapid improvements in basic things like preparing memos, making “slides,” or the ability to share information via email.

There’s a temporary dip in productivity as new individual and organizational muscles are formed and old tools and processes are replaced across the whole team. Everyone individually — and the team as a whole — feels a bit disrupted during this time. Things rapidly return to a “new normal,” and with well-chosen tools and thoughtfully designed processes, this is an improvement.

As processes mature or age, it is not uncommon for those very gains to become burdensome. When a new lane opens on a highway, traffic moves faster for a while, until more people discover the faster route, and then it feels like things are back where they started. Today’s most common tools and processes have reached a point where the productivity increases they once brought feel less like improvements and more like extra work that isn’t needed. All too often, the goals have long been lost, and the use of tools is on autopilot, with the reason behind the work simply “because we always did it that way.”

New tools are appearing that offer new ways to work. These new ways are not just different — this is not about fancier reports, doing the old stuff marginally faster, or bigger spreadsheets. Rather, these new tools are designed to solve problems faced by today’s mobile and continuous organization. These tools take advantage of paradigms native to phones and tablets. Data is stored on a cloud. Collaboration takes place in real time. Coordination of work is baked into the tools. Work can be accessed from a broad range of computing devices of all types. These tools all build on the modern SaaS model, so they are easy to get, work outside your firewall and come with the safety and security of cloud-native companies.

The cultural changes enabled by these tools are significant. While it is possible to think about using these tools “the same old way,” you’re likely to be disappointed. If you think a new tool that is about collaboration on short-lived documents will have feature parity with a tool for crafting printed books, then you’re likely to feel like things are missing. If you’re looking to improve your organizational effectiveness at communication, collaboration and information sharing, then you’re also going to want to change some of the assumptions about how your organization works. The fact that the new tools do some things worse and other things differently points to the disruptive innovation that these products have the potential to bring — the “Innovator’s Dilemma” is well known to describe the idea that disruptive products often feel inferior when compared to entrenched products using existing criteria.

Overcoming traps and pitfalls

Based on seeing these tools in action and noticing how organizations can re-form around new ways of working, the following list compiles some of the most common pitfalls addressed by new tools. In other words, if you find yourself doing these things, it’s time to reconsider the tools and processes on your team, and try something new.

Some of these will seem outlandish when viewed through today’s context. As a person who worked on productivity tools for much of my career, I think back to the time when it was crazy to use a word processor for a college paper; or when I first got a job, and typing was something done by the “secretarial pool.” Even the use of email in the enterprise was first ridiculed, and many managers had assistants who would print out email and then type dictated replies (no, really!). Things change slowly, then all of a sudden there are new norms.

In our Harvard Business School class, “Digital Innovation,” we crafted a notion of “doing it wrong,” and spent a session looking at disruption in the tools of the workplace. In that spirit, “you’re doing it wrong,” if you:

  1. Spend more time summarizing or formatting a document than worrying about the actual content. Time and time again, people over-invest in the production qualities of a work product, only to realize that all that work was wasted, as most people consume it on a phone or look for the summary. This might not be new, but it is fair to say that the feature sets of existing tools and implementation (both right for when they were created, I believe) would definitely emphasize this type of activity.
  2. Aim to “complete” a document, and think your work is done when a document is done. The modern world of business and product development knows that you’re never done with a product, and that is certainly the case for documents that are steps along the way. Modern tools assume that documents continue to exist but fade in activity — the value is in getting the work out there to the cloud, and knowing that the document itself is rarely the end goal.
  3. Figure out something important with a long email thread, where the context can’t be shared and the backstory is lost. If you’re collaborating via email, you’re almost certainly losing important context, and not all the right folks are involved. A modern collaboration tool like Slack keeps everything relevant in the tool, accessible by everyone on the team from everywhere at any time, but with a full history and search.
  4. Delay doing things until someone can get on your calendar, or you’re stuck waiting on someone else’s calendar. The existence of shared calendaring created a world of matching free/busy time, which is great until two people agree to solve an important problem — two weeks from now. Modern communication tools allow for notifications, fast-paced exchange of ideas and an ability to keep things moving. Culturally, if you let a calendar become a bottleneck, you’re creating an opening for a competitor, or an opportunity for a customer or partner to remain unhappy. Don’t let calendaring become a work-prevention tool.
  5. Believe that important choices can be distilled down into a one-hour meeting. If there’s something important to keep moving on, then scheduling a meeting to “bring everyone together” is almost certainly going to result in more delays (in addition to the time to get the meeting going in the first place). The one-hour meeting for a challenging issue almost never results in a resolution, but always pushes out the solution. If you’re sharing information all along, and the right people know all that needs to be known, then the modern resolution is right there in front of you. Speaking as a person who almost always shunned meetings to avoid being a bottleneck, I think it’s worth considering that the age-old technique of having short and daily sync meetings doesn’t really address this challenge. Meetings themselves, one might argue, are increasingly questionable in a world of continuously connected teams.
  6. Bring dead trees and static numbers to the table, rather than live, onscreen data. Live data analysis was invented 20 years ago, but too many still bring snapshots of old data to meetings which then too often digress into analyzing the validity of numbers or debating the slice/view of the data, further delaying action until there’s an update. Modern tools like Tidemark and Apptio provide real-time and mobile access to information. Meetings should use live data, and more importantly, the team should share access to live data so everyone is making choices with all the available information.
  7. Use the first 30 minutes of a meeting recreating and debating the prior context that got you to a meeting in the first place. All too often, when a meeting is scheduled far in advance, things change so much that by the time everyone is in the room, the first half of the hour (after connecting projectors, going through an enterprise log-on, etc.) is spent with everyone reminding each other and attempting to agree on the context and purpose of the gathering. Why not write out a list of issues in a collaborative document like Quip, and have folks share thoughts and data in real time to first understand the issue?
  8. Track what work needs to happen for a project using analog tools. Far too many projects are still tracked via paper and pen which aren’t shared, or on whiteboards with too little information, or in a spreadsheet mailed around over and over again. Asana is a simple example of an easy-to-use and modern tool that decreases (to zero) email flow, allows for everyone to contribute and align on what needs to be done, and to have a global view of what is left to do.
  9. Need to think which computer or device your work is “on.” Cloud storage from Box, Dropbox, OneDrive and others makes it easy (and essential) to keep your documents in the cloud. You can edit, share, comment and track your documents from any device at any time. There’s no excuse for having a document stuck on a single computer, and certainly no excuse for risking the use of USB storage for important work.
  10. Use different tools to collaborate with partners than you use with fellow employees. Today’s teams are made up of vendors, contractors, partners and customers all working together. Cloud-based tools solve the problem of access and security in modern ways that treat everyone as equals in the collaboration process. There’s a huge opportunity to increase the effectiveness of work across the team by using one set of tools across organizational boundaries.

Many of these might seem far-fetched, and even heretical to some. From laptops to color printing to projectors in conference rooms to wireless networking to the Internet itself, each of those tools were introduced to skeptics who said the tools currently in use were “good enough,” and the new tools were slower, less efficient, more expensive, or just superfluous.

The teams that adopt new tools and adapt their way of working will be the most competitive and productive teams in an organization. Not every tool will work, and some will even fail. The best news is that today’s approach to consumerization makes trial easier and cheaper than at any other time.

If you’re caught in a rut, doing things the old way, the tools are out there to work in new ways and start to change the culture of your team.

–Steven Sinofsky @stevesi

This article originally appeared on Re/code.

Written by Steven Sinofsky

April 10, 2014 at 6:00 pm


Why was 1984 not really like “1984”, for me

For me, 1984 was the year of Van Halen’s wonderful [sic] album, The Right Stuff, and my second semester of college. It would also prove to be a time of enlightenment for me and computing. On this 30th anniversary of the Apple Macintosh on January 25 and of the Super Bowl commercial on January 22, I wanted to share my own story of the way the introduction of the Macintosh profoundly changed my path in life.

Perhaps a bit indulgent, but it seemed worth a little backstory. I think everyone from back then is feeling a bit of nostalgia over the anniversary of the commercial, the product, and what was created.

High School, pre-Macintosh

Like many Dungeons and Dragons players my age, my first exposure to post-Pong computing was an Atari 800 that my best friend was lucky enough to have (our high school was not one to have an Apple ][, which hadn’t really made it to suburban Orlando). While my friends were busy listening to the Talking Heads, Police, and B-52s, I was busy teaching myself to program on the Atari. Even though it had the 8K BASIC cartridge, it lacked tape storage, so every time I went over to use the computer I had to start over. Thinking about business at an early age (I suppose), I kept coding and refining what I thought would be a useful program for our family business: the ability to compute sales tax on purchases from different states. Enter the total sale, compute the sales tax for a state by looking up the rate in a table.

Atari 800
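That lookup is about as simple as programs get. Here is a minimal sketch of the idea in modern Python rather than Atari BASIC; the rates shown are made-up placeholders for illustration, not the rates the family business actually charged.

    # A minimal sketch of the sales-tax lookup, in Python rather than Atari BASIC.
    # The rates below are made-up placeholders for illustration only.
    RATES = {"FL": 0.05, "GA": 0.04, "NY": 0.0825}

    def sales_tax(total, state):
        # Look up the state's rate in the table and apply it to the sale total.
        return round(total * RATES[state], 2)

    print(sales_tax(100.00, "FL"))  # 5.0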

My father, an entrepreneur but hardly a technologist, was looking to buy a computer to "automate" our family business. In 1981, he characteristically dove head first into computing and bought an Osborne I. For a significant amount of money ($1,795, or $4,600 today) we owned an 8-bit CPU, two 90K floppy drives, and all (five) of the business programs one could ever need.

I started to write a whole business suite (inventory, customers, orders) in BASIC, which is what my father had hoped I would conjure up (in between SATs and college prep). Well, that was a lot harder than I thought it would be (so were the SATs). Then I discovered dBase II and something called a "database" that made little sense to me in the abstract (and would only come to mean something much later in my education). In a short time I was able to create a character-based system that would be used to run the family business.

Osborne Ad

To go to college I had a matching Osborne I with a 300-baud modem so I could do updates and bug fixes (darn that shipping company: they changed the COD shipping rate, which I had hard-coded, right during midterms!).

College Fall Semester

I loaded up the Osborne I and my Royal typewriter/daisy wheel/parallel port “letter quality” printer and was off to sunny Ithaca.

Computer-savvy Cornell issued us our "BITNET electronic mail accounts"; mine was TGUJ@CORNELLA.EDU. Equal parts friendly, memorable, and useful, and no one knew what to do with them. The best part was that the email ID came printed on a punch card. As a user of an elite Osborne, I felt I had gone back in time when I had to log on to the mainframe from a VT100 terminal. The only time I ever really used TGUJ was to apply for a job with Computer Services.


I got a job working for the computer services group as a Student Terminal Operator (STO). I had two 4-hour shifts. One was in the main "terminal room" for computer science majors in Upson Hall, featuring dozens of VT100 terminals. The other shift was Friday night (yes, you read that correctly) at the advanced "lab" featuring SGI graphics workstations, IBM PC XTs, an Apple Lisa, peripherals like punch card machines, and a 5′ tall high-speed printer. For the latter, I was responsible for changing the ribbon, a task that required me to put on a mask and plastic arm-length gloves.


It turned out that Friday night was all about people coming in to write papers on the few IBM/MS-DOS PCs using WordPerfect. These were among the few PCs available for general purpose use. I spent most of the time dealing with graduate students writing dissertations. My primary job was keeping track of the keyboard templates that were absolutely required to use WordPerfect. This experience would later make me appreciate the Mac that much more.

In the computer science department I had a chance to work on a Xerox Star and Alto (see below) along with Sun workstations, a MicroVAX mini, and so on. The resources available were an incredible blessing to the curious. The computing world was a cacophony of tools and platforms, with the vast majority of campus not yet tapping into the power of computing, and those that did were using whatever was most readily accessible. Cornell was awash in a sea of different computing platforms, and to me that just seemed normal, like there being a lot of different types of cars. This was especially apparent from my vantage point in the computer facilities.


One experience with a new, top-secret, computer was about to change all that.

I ended up getting to use a new computer from an unidentified company. One night after my shift, a fellow STO dragged me back to Upson Hall and took me into a locked room in the basement. There I was able to see and use a new computer. It was a wooden box attached to a wall with an actual chain. It had a mouse, which I had used on the Xerox and Sun workstations. It had a bitmap screen like a workstation. It had an "interface" like the Xerox. There was a menu bar across the top and a desktop of files and folders. It seemed small and much quieter than the dorm-refrigerator-sized units I was used to hearing.

What was really magical about it was that it had a really easy-to-use painting program that we all just loved. It had a "word processor." It was much easier to use than the Xerox, which had special keys and a somewhat overloaded desktop metaphor. It crashed a lot, even in the short time we used it. It also started up pretty quickly. Most everything we did with it felt new and different compared to all the other computers we used.

The end of the semester and exams approached. The few sessions of a couple of hours each that I had to play with this computer were exciting. In the sea of computing options, it was definitely the most exciting thing I had experienced. Perhaps being chained to the wall added to the excitement, but there was something that really resonated with us. When I try to remember the specifics, I mostly recall an emotional buzz.

My computing world was filled with diversity, and complexity, which left me unprepared for the way the world was going to change in just the next six weeks.


To think about Apple's commercial, one really has to think about the context of the start of 1984. The Orwellian dialog was omnipresent. Of course, as freshmen in college we had just finished our obligatory compare-and-contrast of the dystopian messages in Animal Farm, Brave New World, and 1984, not to mention the Cold War as front-and-center dialog at every turn. The country emerging from recession gave us all a contrasting optimism.

At the same time, IBM was omnipresent. IBM was synonymous with computing. Sure, the Charlie Chaplin ads were great, but the image of computing to almost everyone was that of the IBM mainframe (CORNELLA was located out by the Ithaca airport). While IBM was almost literally the pillar of innovation (just a few years later, scientists at IBM would spell "IBM" with xenon atoms), there was also a great deal of distrust given the tenor of the time. The thought of a globally dominant company, a computer company, was uncomfortable to those familiar with fellow Cornellian Kurt Vonnegut's omnipresent RAMJAC.


Then the Apple commercial ran. It was truly mesmerizing (far more so to me than the Superbowl). It took me about one second to stitch together all that was going on right before my eyes.


Apple was introducing a new computer.

It was going to be a lot different from the IBM PC.

The world was not going to be like 1984.

And most importantly, the computer I had just been playing with weeks earlier was, in fact, the Apple Macintosh.

I was so excited to head back to the terminal rooms and talk about this with my fellow STOs and to use the new Apple Macintosh.


When I returned to the terminal room in Upson, Macs had already started to replace the VT100s. First just a couple, and then over time terminal access moved to an emulation program on Macs (rumor had it that the Macs were actually cheaper than terminals!).

128k Mac

My Friday night shift was transformed. Several Macs were added to the lab. I had to institute a waiting list. Soon only the stalwarts were using the PCs. I started to see a whole new crowd on those lonely computer nights.


I saw seniors in Arts & Sciences preparing resumes and printing them on the ImageWriter (whose ribbon, I should note, was significantly easier to change, which I had to do quite often every night). Those in the Greek system came by for help making signs for parties. Students discovered their talent with MacPaint pixel art and FatBits. All over campus, signs changed overnight from misaligned stencils to ImageWriter printouts testing the limits of font faces per page.



I have to admit, however, I spent an inordinate amount of time attempting to recover documents that were lost to memory corruption bugs in the original MacWrite. The STOs all developed a great troubleshooting script, and signs were posted with all sorts of guesses (no more than 4 fonts per document, keep documents under 5 pages, don't use too many carriage returns). We anxiously awaited updates, and students would often wait in line to update their "MacWrite disks" when word spread of a new release (hey, there was no Internet download).

In short order, Macintosh swept across campus. Cornell, along with many schools, was part of Apple's genius campaign on campuses. While I still had my Osborne, I was using Macintosh more often than not.


Completing College

The next couple of years saw an explosion of use of Macintosh across campus. The next incoming class saw many students purchasing a Mac at the start of college. Research funds were buying Macs. Everywhere you looked they were popping up on desks. There was even a dedicated store just off campus that sold and serviced Macs. People were changing their office furniture and layout to support using a mouse. Computer labs were being rearranged to support local printers and mice. The campus store started stocking floppy disks, which became a requirement for most every class.

Document creation had moved from typewriters and limited use of WordPerfect to near-ubiquitous use of MacWrite practically by final exams that Spring. Later, Microsoft Mac Word, which proved far more robust, became the standard.

Mac Word 1.0

The Hotel School’s business students were using Microsoft Mac Excel almost immediately.

via pingdom and Mike Koss

The Chemistry department made a wholesale switch to Macintosh. The software was a huge driver of this. It is hard to explain how difficult it was to prepare a chemistry journal article before Macintosh (the department employed a full-time molecular draftsman to prepare manuscripts). The introduction of ChemDraw was a turning point for publishing chemists (half my major was chemistry).

It was in the Chemistry department where I found a home for my fondness of Macintosh and an incredibly supportive faculty (especially Jon Clardy). The research group had a little of everything, including MS-DOS PCs with mice which were quite a novelty. There were also Macs with external hard drives.

I also had access to MacApp and the tools (LightSpeed Pascal) to write my own Mac software. Until then, all my programming had been on PCs (and mainframes, and Unix). I had spent two summers as an intern (at Martin Marietta, the same company where dBase programmer Wayne Ratliff worked!) hacking around MS-DOS, writing utilities to do things that were as easy as drag and drop on a Mac or that just worked with MacWrite and Mac Excel. As fun as learning K&R C and INT 21h was, the Macintosh was calling.


My first project was porting a giant Fortran program (Molecular Mechanics) to the Mac. Surprisingly, it worked (perhaps equally surprising, in hindsight, is that a Fortran compiler for the Mac even existed). It cemented the lab's view that the Macs could also be for work, not just document creation. Next up, I just started exploring the visualizations available on the Mac. Programming graphics was all new to me. Programming an object-oriented event loop seemed mysterious and indirect to me compared to INT 21h or stdio.
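For anyone who never made that jump, here is a minimal sketch, in modern Python rather than the MacApp Pascal of the time, of why the event-driven model feels indirect: instead of the program driving everything top to bottom, you register handlers and a loop you do not own calls them back.

    # Procedural, stdio-style: the program drives everything, top to bottom.
    def procedural():
        name = input("Element? ")
        print("Looking up " + name + "...")

    # Event-driven: register handlers; a loop you don't own dispatches to them.
    handlers = {}

    def on(event, fn):
        handlers[event] = fn

    def event_loop(events):
        # In a real GUI toolkit, the system supplies this loop and the events.
        for event, payload in events:
            if event in handlers:
                handlers[event](payload)

    on("mouse_down", lambda pos: print("clicked at", pos))
    on("key", lambda ch: print("typed", ch))

    event_loop([("mouse_down", (10, 24)), ("key", "a")])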

But within a few hacking sessions (fairly novel to the chemistry department) the whole thing came together. Unlike all of the previous systems I had used, the elegance of the Mac was special. I felt like the more I used it, the more it all made sense. When I would bury myself in Unix systems programming, it seemed more like a series of tricks you needed to know. Macintosh felt like a system. As I learned more, I felt like I was able to guess how new things would work. I felt like the bugs in my programs were more my bugs and not things I misunderstood.

Macintosh Revealed

The proof of this was that through the Spring semester of my senior year I was able to write a program that visualized the periodic table of the elements using dozens of different variables. It was a way to explore the periodicity of the elements. I wrote routines for an X-Y plot, bar charts, and text tables, and the pièce de résistance was a 2.5-dimensional perspective of the periodic table showing a single property (commonly used to illustrate the periodic nature of electron affinity). I had to ask a lot of friends who were taking computer graphics on SGIs for help! Still, not only had I been able to program another new OS (by then this was my 5th or 6th), but I was able to program a graphical user interface for the first time.

MacMendeleev was born.


The geek in all of us has that special moment when at once you feel empowered and marvel at a system. That day in the spring of 1987 when I rendered a perspective drawing from my own code on a system that I had seen go from a chained down plywood box to ubiquity across campus was magical. Even my final report for the project was, to me, a work of art.

The geek in all of us has that special moment when at once you feel empowered and marvel at a system.

It wasn’t just the programming that was possible. It wasn’t just the elegance and learnability of the system. It wasn’t even the ubiquity that the Macintosh achieved on campus. It was all of those. Most of all it represented a tool that allowed me to realize some of my own potential. I was awful at Chemistry. Yet with Macintosh I was able to contribute to the department and probably showed a professor or two that in spite of my lack of actual chemistry aptitude I could do something (and dang, my lab reports looked amazing!). I was, arguably, able to learn some chemistry.

I achieved with Macintosh what became one of the most important building blocks in my education.

I’m forever thankful for the empowerment that came from using a “bicycle of the mind”.

I’m forever thankful for the empowerment that came from using a “bicycle of the mind”.

What came next

Graduate school diverged in terms of computing. We used DEC VMS, though Smalltalk was our research platform. The elegance of the Macintosh OS (and of MacApp, and Lisa before that) became much clearer to me as I studied the nuances of object-oriented programming.

I used my Macintosh II to write papers, make diagrams, and remote into the microVAX at my desk. I also used Macintosh to create a resume for Microsoft with a copy of Microsoft Word I won at an ACM conference for my work on MacMendeleev.

I also used Macintosh to create a resume for Microsoft with a copy of Microsoft Word…

When I made it to Microsoft I found that a great many people shared the same experience. I met folks who worked on Mac Excel and who also had Macs in boxes chained to tables. I met folks who wrote some of those Macintosh programs I used in college. So many of the folks in the "Apps" team I was hired into that year grew up on that unique mixture of Mac and Unix (Microsoft used Xenix back then). We all became more than converts to MS-DOS and Windows (3.0 was being developed when I landed at Microsoft).

There’s no doubt our collective experiences contribute to the products we each work on. Wikipedia even documents the influence of MacApp on MFC (my first product contribution), which was by design (and also by design was where not to be influenced). It is wonderful to think that through tools like MFC and Visual Basic along with ubiquitous computing, Windows brought to so many young programmers that same feeling of mastery and empowerment that I felt when I first used Macintosh.

Fast-forwarding, I can't help but think about today's college students, having grown up hacking the web and only recently exposed as programmers to mobile platforms. The web to them is like the Atari was to me: programmable, understandable, and fun. The ability to take your ideas, connect them to the Internet, touch your creation, and make your own experience must feel like building a Macintosh program from scratch felt to me. The unique combination of mastery of the system, elegance of design, and empowerment is what separates a technology from a movement.

Macintosh certainly changed my path in life…

For me, Macintosh was an early contributor to my learning, skills, and ultimately my self-confidence. Macintosh certainly changed my professional path in life. For sure, 1984 was not at all like 1984 for me.

Happy Anniversary

Yes, of course I’m a PC (and definitely a Surface).  Nothing contributed more to my professional life than the PC!

–Steven Sinofsky (@stevesi, stevesi@mac.com)

PS: How far have we come? Check out this Computer Chronicles from 1985 where the new Macintosh is discussed.

Written by Steven Sinofsky

January 23, 2014 at 7:30 am

Posted in posts


10 Observations from Tokyo’s CE scene

Yodobashi

I love visiting Tokyo and have been lucky enough to visit dozens of times over many years. The consumer electronics industry has certainly had its ups and downs recently, but a constant has been the leading-edge consumer and business adoption of new technologies. From PCs in the workplace to broadband at home to smartphones (a subject of many humorous team meetings back pre-bubble, when I clearly didn't get it and was content with the magic of my BB 850!), Japan has always had a leading adoption curve even when not necessarily producing the products used globally.

This visit was focused on the University of Tokyo and meeting with some entrepreneurs. That, however, doesn't stop me from spending time observing what CE is being used in the workplace, on the subway, and, most importantly, what is for sale in the big stores such as Yodobashi, Bic, and Labi, and of course the traditional stalls at Akihabara. The rapid adoption, market size, and proximity to Korea and China often mean many of the products seen here are not yet widely available in the US/Europe or are just making their way over. There's a good chance that what is emphasized in the (really) big retail space is a leading indicator for what will show up at CES in January.

If you're not familiar with Yodobashi, here's the flagship store in Akihabara — over 250,000 sq ft and visited by tens of millions of people every year. I was once fortunate enough to visit the underground operations center, and as a kid who grew up in Orlando, it sure feels a lot like the secret underground tunnels of the Magic Kingdom.

With that in mind here are 10 observations (all on a single page).  This is not statistical in any way, just what caught my eye.

  • Ishikawa Oku lab. The main focus of the trip was to visit the University of Tokyo. Included in that was a wonderful visit with Professor Ishikawa-san and his lab, which conducts research on parallel, high-speed, and real-time operations for sensory information processing. What is so amazing about this work is that it has been going on for 20 years, starting with very small and very slow digital sensors; now, with Moore's law applied to image capture along with parallel processing, amazing things are possible, as can be seen in some of these YouTube videos (with more than 5 million views): http://www.youtube.com/ishikawalab. More about the lab: http://www.k2.t.u-tokyo.ac.jp/index-e.html.

Prof Ishikawa

  • 4K Displays. Upon stepping off the escalator on the video floor, one is confronted with massive numbers of massive 4K displays. Every manufacturer has displays on show and touts 4K resolution along with their requisite tricks at upscaling. The prices are still relatively high, but the selection is much broader than what is readily seen in the US. Last year 4K was new at CES, and it seems reasonable to suspect that this year's show floor will be all 4K. As a footnote relative to last year, 3D was downplayed significantly. In addition, there are numerous 4K cameras on sale now, more so than in the US.


  • Digital still. The Fuji X and Leica rangefinder digital cameras are getting a lot of floor space, and it was not uncommon to see tourists snapping photos with them (for example, in Meiji Garden). The point-and-shoot displays feature far fewer models, with an emphasis on attributes that differentiate them from phones, such as being waterproof or ruggedized. There's an element of nostalgia, in Japan in particular, driving a renewed popularity of this form factor.
  • Nikon Df. This is a "new" DSLR with the same sensor as the D800/D4, packaged in a retro form factor. The Nikon Df is definitely only for collectors, but there was a lot of excitement about its availability on November 21. It further emphasized the nostalgia element of photography as the form factor has so dramatically shifted to mobile phones.
  • Apple presence in store. The Apple presence in the main stores was almost overwhelming. Much of the first floor and the strategic main entry of Yodobashi were occupied by the Apple store-within-a-store. There were large crowds, and as you often see with fans of products, they were shopping for the very products they already owned and were holding in their hands. There has always been a fairly consistent appreciation of the Apple design aesthetic and overall quality of hardware, but widespread usage did not seem to follow. To be balanced, one would have to take note of the presence of the Nexus 5 in the stores, which was substantial and well-visited.
  • PCs. The size of the PC display area, relative to mobile and iOS accessories, definitely increased over the past 7 months since I last visited. There were quite a large number of All-In-One designs (which have always been popular in Japan, yet somehow could never quite leap across the Pacific until Windows 8).  There were a lot of very new Ultrabooks running Haswell chips from all the major vendors in the US, Japan, and China.  Surface was prominently displayed.
  • iPhone popularity. There was a ubiquity to the iPhone that was new. Android had gained a very strong foothold over the national brands with the transition to nationwide LTE. Last year there was a large Android footprint through Samsung handsets that was fairly visible on display and in use. While the Android footprint is clearly still there, the very fast rise of the iPhone, particularly the easily spotted iPhone 5s, was impressive. The vast expanse of iPhone accessories for sale nearly everywhere supports the opportunity. A driver for this is that the leading carrier (DoCoMo) now offers the iPhone. Returning from town, I saw this article speaking to the recent rise of iOS in Japan: iPhone 5S/C made up 76% of new smartphone sales in Japan this October.
  • Samsung Galaxy J. Aside from the Nexus 5, the Android phone being pushed quite a bit was the Samsung Galaxy J. This is a model only available in Asia right now. It was quite nice. It sports an industrial design that is more iPhone-like (squared edges), is available in 5c-like colors, and has the latest Qualcomm processor, a 5″ HD display, and so on. It is still not running KitKat, of course. For me, in the store, it felt better than a Galaxy S. Given the intricacies of the US market, I don't know if we'll see this one any time soon. The Galaxy Note can be seen "in the wild" quite often, and there seems to be quite a lot of interest based on which devices on display people would stop and interact with.

Galaxy J

  • Tablets. Tablets were omnipresent. They served as signage in stores and menus in restaurants, and were in use on the subway and at every place where people were sitting down and eating/drinking/talking. While in the US we are used to asking "where are all the Android tablets?", I saw a lot of 7″ Android tablets in use in all of those places. One wouldn't expect the low-priced import models to be visible, but there are many Japanese OEMs selling Android tablets that could be spotted. I also saw quite a few iPad Minis in use, particularly among students on the trains.
  • Digital video. As with compact digital cameras, there was a rather extreme reduction in the number of dedicated video recorders. That said, GoPro cameras had a lot of retail space, and accessories were well placed. For example, there were GoPros connected to all sorts of gear, showing off all sorts of accessories, at Tokyu Hands (the world's most amazing store, imho). Professional HD and UHD cameras, for example Red and Arri, are on display in stores, which is cool to see. One of the neatest video products, which is available stateside but which I had not seen before, is the Sony DEV-50 binoculars/camera. It is pricey (US$2,000) but also pretty cool if you've got the need for it. They have reasonable sensors, support 3D, and more. The only challenge is stability, which makes sense given the equivalent focal length, but there is image stabilization, which helps quite a bit in most circumstances.

There were many other exciting and interesting products one could see in this most wired and gadget-friendly city. One is always on the lookout for that unique gift this holiday season, and I found my stocking-stuffer. Below you can see a very effective EMF-shielding baseball hat (note: only 90% effective). As a backup stocking-stuffer, all gloves purchased in Japan appear to be designed with resistive touch screens in mind :-)

Hat (front)

Hat (back)  Gloves

–Steven Sinofsky

PS: Here’s me with some super fun students in a class on Entrepreneurship and Innovation at the University of Tokyo.

Classroom @ University of Tokyo

Written by Steven Sinofsky

November 29, 2013 at 4:00 pm

Posted in posts


Thoughts on reviewing tech products

I've been surprised at the "feedback" I receive when I talk about products that compete with those made by Microsoft. While I spent a lot of time there, one thing I learned was just how important it is to immerse yourself in competitive products to gain their perspective. It helps in so many ways (see http://blog.learningbyshipping.com/2013/01/14/learning-from-competition/).

Dave Winer (@davewiner) wrote a thoughtful post on How the Times reviews tech today. As I reflected on the post, it seemed worth considering why this challenge might be unique to tech and how it relates to the use of competitive products.

When considering creative works, it takes ~two hours to see a film or slightly more for other productions. Even a day or two for a book. After which you can collect your thoughts and analysis and offer a review. Your collected experience in the art form is relatively easily recalled and put to good use in a thoughtful review.

When talking about technology products, the same approach might hold for casually used services or content-consumption services. In considering tools for "intellectual work," as Winer described (loved that phrase), things start to look significantly different.

Software tools (for "intellectual work") are complex because they do complex things. In order to accomplish something, you need to first have something to accomplish and then accomplish it. It is akin to reviewing the latest cameras for making films or the latest cookware for making food. While you can shoot a few frames or make a single meal, tools like these require many hours and different tasks. You shouldn't "try" them as much as "use" them for something that really matters. Only then can you collect your thoughts and analysis.

Because tools of depth offer many paths and ways to use them, there is an implicit "model" to how they are used. Models take time to adapt to. A cinematographer who uses film shouldn't judge a digital camera after a few test frames, and maybe not even after the first completed work.

The tools for writing, thinking, and creating that exist today present models for usage. Whether it is a smartphone, a tablet, a "word processor," or a photo editor, these devices and their accompanying software define models for usage that are sophisticated in how they are approached, the flow of control, and the points of entry. They are hard to use because they do hard things.

The fact that many of those who write reviews rely on an existing set of tools, software, and devices for their intellectual pursuits implies that the conceptual models they know and love are baked into their perspective. It means tools that come along and present a new way of working, or of seeing the technology space, must first find a way to get a clean perspective.

This, of course, is not possible. One can't unlearn something. We all know that reviewers are professionals, and just as we expect a journalist covering national policy debates not to let bias show, tech reviewers must do the same. This implicit "model bias" is much more difficult to overcome because it simply takes longer to see and use a product than it does to learn about and understand (but not necessarily practice) a point of view in a policy debate. The tell-tale sign of "this review composed on the new…" is great, but we also know that right after the review the writer has the option of returning to their favorite way of working.

As an example, I recall the tremendous difficulty in the early days of graphical-user-interface word processors. The incumbent, WordPerfect, was a character-based word processor that was the very definition of a word processor. The one feature we heard about relentlessly was called Reveal Codes, which was a way of essentially seeing the formatting of the document as codes surrounding the text (today we would think of that as HTML). Word for Windows was a WYSIWYG word processor, so you just formatted things directly. If it was bold on screen then it was implicitly surrounded by <B> and </B> (not literally, but conceptually those codes).
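A toy illustration of the two models, in Python, using the conceptual HTML-like codes described above rather than WordPerfect's actual format: the reveal-codes view shows the markup around the text, while the WYSIWYG view hides the codes and lets you work on the styled result directly.

    import re

    # The same sentence in a "reveal codes"-style view: formatting is visible
    # as codes surrounding the text (conceptually like HTML, as noted above).
    revealed = "The report is due <B>Friday</B>."

    # The WYSIWYG view hides the codes; you only see and edit the styled text.
    wysiwyg = re.sub(r"</?B>", "", revealed)

    print(revealed)  # reveal-codes view
    print(wysiwyg)   # what the WYSIWYG editor shows, with "Friday" rendered bold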

Reviewers (and customers) time and time again felt Word needed reveal codes.  That was the model for usage of a “word processor”.  It was an uphill battle to move the overall usage of the product to a new level of abstraction.  There were things that were more difficult in Word and many things much easier, but reveal codes was simply a model and not the answer to the challenges.  The tech  world is seeing this again with the rise of new productivity tools such as Quip, Box Notes, Evernote, and more.  They don’t do the same things and they do many things differently.  They have different models for usage.

At the business level this is the chasm challenge for new products. But at the reviewer level this is a challenge because it simply takes time to either understand or appreciate a new product. Not every new product, or even most, changes the rules of the predecessor successfully. But some do. The initial reaction to the iPhone's lack of a keyboard, or even its de-emphasis of voice calls, shows how quickly everyone jumped to the then-current definition of a smartphone as the evaluation criterion.

Unfortunately, all of this is incompatible with the news cycle for the onslaught of new products, or with the desire to have a collective judgment by the time the launch event is over (or even before it starts).

This is a difficult proposition. It starts to sound like blaming politicians for not discussing the issues, or blaming the networks for airing too much reality TV. Isn't it just as much about what people will click through as it is about what reviewers would write? Would anyone be interested in reading a Samsung review, or pulling up another iOS 7 review, after the 8 weeks of usage that the product deserves?

The focus on youth and new users as the baseline for review exists simply because they do not have the "baggage" or "legacy" when it comes to appreciating a new product. The disconnect we see between excitement and usage is because new-to-the-category users do not need to spend time mapping their model; they just dive in and start to use something for what it was supposed to do. Youth just represents a target audience of early adopters and the fastest path to crossing the chasm.

Here are a few things on my to-do list for how to evaluate a new product. The reason I use things for a long time before judging them is that, in a world with so many different models for usage, it takes that long to adjust and give a product a fair evaluation.

  1. Use defaults. Quite a few times when you first approach a product you want to immediately customize it to make it seem like what you're familiar with. While many products have customization, stick with the defaults as long as possible. Don't like where the browser launch button is? Leave it there anyway; there's almost always a reason. I find the changes in the default layout from iOS 6 to iOS 7 interesting as a way to see what the shift in priorities means for how you use the product.
  2. Don’t fight the system.  When using a new product, if something seems hard that used to seem easy then take a deep breath and decide it probably isn’t the way the product was meant to do that thing.  It might even mean that the thing you’re trying to do isn’t necessarily something you need to do with the new product.  In DOS WordPerfect people would use tables to create columns of text.  But in Word there was a columns feature and using a table for a newsletter layout was not the best way to do that.  Sure there needed to be “Help” to do this, but then again someone had to figure that out in WordPerfect too.
  3. Don't jump to doing the complex task you already figured out in the old tool. Often, as a torture test, upon first look at a product you might try to do the thing you know is very difficult: that side-by-side chart, reducing overexposed highlights, or some complex formatting. Your natural tendency will be to use the same model and steps to figure this out. I got used to one complicated way of using levels to reduce underexposed faces in photos and completely missed the "fill flash" command in a photo editor.
  4. Don't do things the way you are used to. Related to this is the tendency to use a new device the way you were used to using the old one. For example, you might be used to going to the camera app, taking a picture, and then choosing email. But the new phone "prefers" to be in email and insert an image (new or just taken) into a message. It might seem inconvenient (or even wrong) at first, but over time this difference will go away. This is just like learning a new gear shift pattern, or even the layout of a new grocery store.
  5. Don’t assume the designers were dumb and missed the obvious. Often connected to trying to do something the way you are used to is the reality that something might just seem impossible and thus the designers obviously missed something or worse.  There is always a (good) chance something is poorly done or missing, but that shouldn’t be the first conclusion.

But most of all, give it time.  It often takes 4-8 weeks to really adjust to a new system and the more expert you are the more time it takes.  I’ve been using Macs on and off since before the product was released to the public, but even today it has taken me the better part of six months to feel “native”.  It took me about 3 months of Android usage before I stopped thinking like an iPhone user.  You might say I am wired too much or you might conclude it really does take a long time to appreciate a design for what it is supposed to do.  I chuckle at the things that used to frustrate me and think about how silly my concerns were at day 0, day 7, and even day 30–where the volume button was, the charger orientation, the way the PIN worked, going backwards, and more.

–Steven Sinofsky

Written by Steven Sinofsky

October 29, 2013 at 12:00 pm

Posted in posts

