The transformative potential of mobile communications is upon us in every aspect of life. In the developing world, where infrastructure of all types is at a premium, few question the potential for mobile, but many wonder whether it should be a priority.
Note: This post originally appeared in Re/code on April 29, 2015.
Many years of visiting the developing world have taught me that, given the tools, people — including the very poor — will quickly and easily put them to uses that exceed even the well-intentioned ideas of the developed world. Poor people want to and can do everything people of means can do; they just don’t have the money.
Previously, I’ve written about the rise of ubiquitous mobile payments across Africa, and the work to bring free high-speed Wi-Fi to the settlements of South Africa. One thing has been missing, though, and that is access to reliable sources of power to keep these mobile phones and tablets running. In just a short time — less than a year — solar panels have become a commonplace sight in one relatively poor village I recently returned to. I think this is a trend worth noting.
Could it be that solar power, potentially combined with large-scale batteries, will be the “grid” in developing markets, perhaps in the near future? I think so.
It is also the sort of disruptive trend we are getting used to seeing in developing markets. The market need and context leads to solutions that leapfrog what we created over many years in the developed world. Wireless phones skipped over landlines. Smartphones skipped over the PC. Mobile banking skipped over plastic cards and banks.
At the very least, solar will prove enormously useful and beneficial and require effectively zero-dollar investments in infrastructure to dramatically improve lives. Solar combined with small-scale appliances, starting with mobile phones, provides an enormous increase in standard of living.
Historically, being poor in a developing economy put you at the end of a long chain of government and international NGO assistance when it comes to infrastructure. While people can pull together the makings of shelter and food along with subsistence labor or farming, access to what we in the developed world consider basic rights continues to be a remarkable challenge.
For the past 50 or more years, global organizations have been orchestrating “top down” approaches to building infrastructure: roads, water, sewage and housing. There have been convincing successes in many of these areas. The recent UN Millennium Development Goals report demonstrates that the percentage of humans living in extreme poverty has decreased by almost half. In 1990, almost half the population in developing regions lived on less than $1.25 a day, the common definition of extreme poverty. This rate dropped to 22 percent by 2010, reducing the number of people living in extreme poverty by 700 million.
Nevertheless, billions of people live every day without access to basic infrastructure needs. Yet they continue to thrive, grow and improve their lives.
While the efforts to introduce major infrastructure will continue, the pace can sometimes be slower than the people would like, or than those of us in the developed world believe should be “acceptable.”
A village I know of, about 10 miles outside a major city in southern Africa, started from a patch of land contributed by the government about six years ago, and grew to a thriving neighborhood of 400 single-family homes. These homes are multi-room, secure, cement structures with indoor connections to sewage. The families of these homes earn about $100-$200 a month in a wide range of jobs. By way of comparison, these homes cost under $10,000 to build.
While the roads are unpaved, this is hardly noticed. But one thing that has become much more noticeable of late is the lack of electrical power. Historically, this has not been nearly as problematic as we in the developed world might think. Their economy and jobs were tuned to daylight hours and work that made use of the energy sources available.
In an effort to bring additional safety to the village, the citizens worked with local government to install solar “street lights,” such as the one pictured here. This simple development began to change the nighttime for residents. These were installed beginning about nine months ago (as seen in the first photo, with a closer-to-production installation in the second).
Historically, this type of infrastructure, street lighting, would come after a connection to the electrical grid and development of roads. Solar power has made this “reordering” possible and welcome. Lighting streets is great, but that leads to more demands for power.
Mobile phones, the new infrastructure
These residents are pretty well off, even on relatively low wages that are three to five times the extreme poverty level. While they lack electricity and roads, they are safe, secure and sheltered.
One of the contributors to the improved standard of living has been mobile phones. Over the past couple of years, mobile phone penetration in this village has reached essentially 100 percent per household, and most adults have a mobile.
The use of mobiles is not a luxury, but essential to daily life. Those who commute into the city to sell or buy supplies can check on potential sales or availability via mobile.
Families can stay connected even when one goes far away for a good job or better work. Safety can be maintained by a “neighborhood watch” system powered by mobile. Students can access additional resources or teacher help via mobile. Of course, people love to use their phones to access the latest World Cup soccer results or listen to religious broadcasts.
All of these uses and infinitely more were developed in a truly bottom-up approach. There were no courses, no tutorials, no NGOs showing up to “deploy” phones or to train people. Access to the tools of communication and information as a platform was put to uses that surprise even the most tech-savvy (i.e., me). Mobile is so beneficial and so easy to access that it has quickly become ubiquitous and essential.
Last year, when I wrote for Re/code about mobile banking and free Wi-Fi, I received a fair number of comments and emails saying how this seemed like an unnecessary luxury, and that smartphones were being pushed on people who couldn’t afford the minutes or kilobytes, or would much rather have better access to water or toilets. The truth is, when you talk to the people who live here, in their own words, time and time again, the priority unquestionably goes to mobile communications and information.
Fortunately, because of the openness most governments have had to investments from multinational telecoms such as MTN, Airtel and Orange, most cities and suburban areas of the continent are well covered by 2G and often 3G connectivity. The rates are competitive across carriers, and many people carry multiple SIMs to arbitrage those rates, since saving pennies matters (calls within a carrier network are often cheaper than across carriers).
Mobile powered by solar
There has been one problem, though, and that is keeping phones charged. The more people use their phones (day and night), the more this has become a problem. While many of us spend time searching for outlets, what do you do when the nearest outlet might be a few miles away?
When there is an outlet, you often see people grouped around it, or one person volunteers to rotate phones through the charging cycles. Above is a picture of an outlet in the one building connected to power, the community center. This is a pretty common sight.
An amazing transformation is taking place, and that is the rise of solar. What we might see as an exotic or luxury form of power for hikers and backpackers, or something reasonably well-off people use to augment their home power, has become as common a sight as the water pump.
The plethora of phones sharing a single outlet has been replaced by the portable solar panel out in front of every single home.
An interesting confluence of two factors has brought solar so quickly and cheaply to these people. First, as we all know, China has been investing massively in solar technology, solar panels and solar-powered devices. That has brought choice and low prices, as one would expect. In seeking growth opportunities, Chinese companies are looking to the vast market opportunity in Africa, where people are still not connected to a grid. There’s a full supply chain of innovation, from the solar panels through to integrated appliances with batteries.
Second, China has a significant presence in many African countries, and is contributing a massive amount of support in dollars and people to build out more traditional infrastructure, particularly transportation. In fact, many Chinese immigrants in country on work projects become the first customers of some of these solar innovations.
People are exposed to low-cost, low-power portable solar panels and they are “hooked.” In fact, you can now see many small stores that sell 100W panels for the basics of charging phones. You can see solar for sale in the image below. I left the whole store in the photo just to offer a bit of culture. The second photo shows the solar “for sale” offers.
Like many significant investments, there’s a vibrant market in both used panels and in the repair and maintenance of panels and wiring. Solar is a budding industry, for sure.
But people want more than to charge their phones once they see the “power” of solar. Here is where ever-improving, ever-shrinking solar panels, LED lights, lithium batteries and more are coming together to transform the power consumption landscape and the very definition of “home appliances.”
In the developed world, we are transitioning away from incandescent and fluorescent lighting at a rapid pace (in California, new construction effectively requires LED). LED lights, in addition to lasting “forever,” also consume 80 percent less power. Combining LED lights, low-cost rechargeable batteries and solar, you can suddenly light up a home at night. Econet, one of the largest mobile carriers/companies in Africa, has many other ventures that improve the lives of people.
Here are a few Econet-developed LED lanterns recharging outside a home. This person has three lights, and shares or rents them with neighbors as a business. Not only are these cheaper and more durable than a fossil-fuel-based lantern, they have no ongoing cost, since they are powered by the sun.
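To make the “80 percent less power” claim concrete, here is a back-of-envelope sketch. Every number in it (panel wattage, sun hours, charging losses, lantern wattage) is an illustrative assumption of mine, not a figure from the village or from Econet; the point is only the ratio, which shows why the same day of solar charging buys roughly five times the hours of LED light versus incandescent.

```python
# Back-of-envelope: hours of light per day of solar charging.
# All numbers below are illustrative assumptions, not measured values.
PANEL_WATTS = 5          # small lantern-sized panel
SUN_HOURS = 5            # effective peak-sun hours per day
CHARGE_EFFICIENCY = 0.7  # losses charging the battery
LED_WATTS = 1.5          # small LED lantern draw
INCANDESCENT_WATTS = LED_WATTS / (1 - 0.8)  # "80 percent less power"

energy_wh = PANEL_WATTS * SUN_HOURS * CHARGE_EFFICIENCY  # 17.5 Wh stored
led_hours = energy_wh / LED_WATTS                # hours of LED light
incandescent_hours = energy_wh / INCANDESCENT_WATTS

print(round(led_hours, 1), round(incandescent_hours, 1))  # prints: 11.7 2.3
```

Under these assumptions a single small panel supports an evening of LED light with margin to spare, while an equivalent incandescent lamp would exhaust the charge in a couple of hours.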
With China bringing down the cost of larger panels, and the abundance of trade between Africa and China, there’s an explosion in slightly larger solar panels. In fact, many of the homes I saw just nine months ago now commonly sport a large two-by-four-foot solar panel on the roof or strategically positioned for maximum sun exposure.
Panels are often on the ground because they move between homes when the investment in the panel has been shared by a couple of families. This might seem inefficient or odd to many, but the developing world is the master of the sharing economy. Many might be familiar with the founding story of Lyft’s predecessor, Zimride, which was based on experiences with shared van rides in Zimbabwe.
Just the first step
We are just at the start of this next revolution in improving the lives of people in developing economies using solar power.
Three sets of advances will contribute to improved standards of living in terms of economics, safety and comfort.
First, more and more battery-operated appliances will make their way into the world marketplace. At CES this year, we saw battery-operated developed-market products for everything from vacuum cleaners to stoves. Once something is battery-powered, it can be easily charged. These innovations will make their way to appliances that are useful in the context of the developing world, as we have seen with home lighting. The improvement in batteries in both cost and capacity (and weight) will drive major changes in appliances across all markets.
Second, the lowering of the price of solar panels will continue, and they will become commonplace as the next infrastructure requirement. This will then make possible all sorts of improvements in schools, work and safety. One thing that can then happen is an improvement in communication that comes from high-speed Wi-Fi throughout villages like the one described here. Solar can power point-to-point connectivity or even a satellite uplink. Obviously, the cost of connectivity itself will be something to deal with, but we’ve already seen how people adapt their needs and use of cash flow when something provides an extremely high benefit. It is far more likely that Wi-Fi will be built out before broad-based 3G or 4G coverage and upgrades can happen.
Third, I would not be surprised to see innovations in battery storage make their way to the developing markets long before they are ubiquitous in the developed markets.
Developed markets will value batteries for power backup in case of a loss of power and for solar storage (rather than feeding back to the grid). But in developing markets, a battery pack could provide continuous, on-demand power for a home, as well as nighttime power allowing for studying, businesses and more. This is transformative, as people can then begin to operate outside of daylight hours and to use a broader range of appliances that can save time, increase safety in the home and improve quality of life.
Our industry is all about mobile and cloud. With the arrival of low-cost solar, it’s no surprise that the revolution taking place in developing markets these days is rooted in mobile plus sun.
Photos by the author unless otherwise noted.
As a technical or product-focused CEO/founder of a growing company, a challenge you eventually face is making that first product manager hire. Like most founders, there’s a good chance you’re seeing ongoing challenges balancing the ever-increasing workload as CEO and starting to feel a sense of distance from the product as the needs of sales, marketing, hiring, and more pull you away. You might be wondering how you can spend more time on what you love, the product, while recognizing that as CEO you must grow the strength of the organization and also focus on the contributions you can make uniquely from the CEO role.
This is where making the first product manager hire is necessary and also a unique challenge. Most of the time, this hire is postponed as long as possible. You can cover for this missing resource with additional late-night mails, some extra last-minute meetings, and so on. The time this really hits home is when feature work or decisions turn into re-work or re-thinking. That’s the sign it is time to hire some help. Engineering resources are precious and timelines are always tight—being the founder-PM bottleneck is no way to iterate your way to product-market fit.
The short answer to this next step is two-fold. First, at an emotional level this is extremely difficult for most every founder (btw, in a big company if you are starting a new product team it turns out this same dynamic applies). It is likely you will talk to quite a few candidates and no one will ever seem quite right. Second, you are going to have to trust your team a great deal on the fit and in doing so adjust your own approach to product management as part of the on-boarding process.
Every situation will be unique and there is no single rule for what type of skillset, experience, or seniority will be right for you and your company. The most important thing to think through before you start the process is to agree between your co-founder(s) and team on the profile of the role you wish the new PM to play in the organization.
Traditionally this would be framed along the lines of a job description:
Seniority. Are you looking for a VP, Director, Lead? Most typically you start here but these descriptions might fall short of really defining the role and so you end up seeing a lot of candidates.
Domain. Perhaps you are looking to fill in a specific technology background with this hire? As the product evolves you might be expanding into product areas that could use additional strength from product management. The real question is how much this hire will relieve the bottleneck you might be seeing.
Skillset. Are you looking for a candidate with more of a design, project management, or engineering background?
Each of these are examples of necessary but not sufficient criteria in kicking off the search for your first product manager. Because the first product manager hire is so unique for a product-focused founder, it is worth offering a framework or characterization of the role that you might start with.
Once you arrive at characterizing the role, then you are in a better position to narrow the search by more traditional experience and skills descriptors. Each of these below can work with the key being clear on hire and in management what the expectations are for the new product manager.
Extra hands. Most every founder I speak with starts with the description of needing an extra set of hands, eyes, ears—someone to offload some work to, follow up, track down, etc. This is often a positive stop-gap and can certainly work short term. Medium to long term it can also starve you of the opportunity to grow the organization, or might mean you set yourself up for yet another product management challenge down the road. If you go with this approach, the important thing to watch is that you do not solve your bottleneck problem by just moving it to another person or adding a level of bottleneck-indirection.
Process chief. Are you looking to offload the unpleasant or less intellectual aspects of product management, such as the details of tracking, documenting, and other “process” issues? It is quite natural, in hiring the first product manager, to want this narrow set of skills added to the team. It isn’t uncommon for engineering to also seek this addition. The good news is this is almost always helpful. The challenge is that it also adds another person to the team in the short term, might not really reduce the load or bottleneck, and could unintentionally add friction to the process. A careful balance is required.
Apprentice. Many times the goal of bringing on the first product manager is to reduce risk by passing on a senior or experienced person and going with a relatively junior hire, then working to grow him/her into the role that you actually need. This can often be the most comfortable approach for the team because there’s a clear view of who the boss is and relatively clear expectations with the new PM. Generally the challenge comes down the road, when you bring in more product managers and have to decide if the apprentice is “senior enough.” To avoid this challenge, the best thing to do is put in place a true training and growing situation. You need to provide the right level of responsibility and feedback/mentoring. Sometimes the difficulty is in hiring at this level but expecting too much, too soon.
Mini-me. Another model I have seen is searching for a first PM who is a reflection of your own skillset and experience—a mini-me. For many this younger-sibling approach can be comfortable and easy to model and understand. It is a matter of finding the candidate who shares your product point of view and vision, and then a way of getting it done. The interesting challenge with this approach is not the way it works, but the way it might not work. Will the new PM amplify not just your strengths but your weaknesses? Will this miss out on the chance to bring more perspective, diversity of thought and approach, or new ideas into the team? Are you being imitated or copied?
Successor. The most difficult, or bravest, first PM hire is to hire your successor. In doing so you’re bringing on someone who will simply be the final voice in product management. This is a scary approach, and depending on your direct responsibilities and where you are in the product-market fit journey, it might be the best approach for you and the team. The challenge I’ve seen most often with this first hire is that the seniority level doesn’t quite match the organization yet. The new hire’s first step is to build out a team and bring on more people. This might be exactly why you’re bringing on the person (just as when you bring on less familiar functions), but generally isn’t the case for product management, as most often the first hire needs to be hands-on and will buy you some time or runway. On the other hand, often the right candidate comes along—one who has the right level of personal contribution, domain knowledge, and scale experience—and that might make for the right fit.
Of course hiring the person that fits this description as well as all the right product skills could turn into a unicorn hire, so definitely be careful about over-constraining the challenge. Of course, hiring is just the first step. As you onboard, assign and delegate work, and manage a new product manager you will also need to be incredibly deliberate in your own evolving role in product management. All too often the most-fitting hires can run into challenges when there is a mismatch between intention and execution of product management responsibility.
One word of caution. If you are concerned or even “afraid” of hiring a strong product person with a point of view, perspective, or just streak of stubbornness then think for a moment. Are you labeling the person a poor fit for “culture” or are you actually more concerned that they might make your own personal transition more challenging? If you’re working to always bring on strong people, don’t compromise at this juncture.
Hiring the first or early product managers is always a big and difficult step for technical/product-focused founders. When done correctly it can be a rapid accelerator for engineering and a positive for the company overall, as it makes room for you as the leader to focus on the work you can do uniquely.
For just over a year, Tanium Corporation has been impressing enterprise customers with its special brand of Tanium magic — the ability to instantly learn anything you need to know about the PCs, servers, VMs, and embedded devices such as ATMs and point-of-sale devices on your network. About nine months ago Andreessen Horowitz was offered the opportunity to partner with Tanium and the founders David and Orion Hindawi, and we could not be more impressed with the progress and growth of the company. This week Tanium is adding some more magic to an amazing product.
Growing and Scaling
The Tanium team has been hard at work on the platform and in creating a great company. It is worth sharing a little bit about the progress they have made in less than a year:
- Tanium is deployed on over 10,000,000 endpoints, with individual customers managing hundreds of thousands of endpoints.
- Tanium is in broad deployment in over half the Fortune 100.
- Tanium is rapidly growing (and hiring) with a particular focus on expanding internationally.
- Even with growth on every metric, Tanium has stayed a cash-generating and profitable business.
Tanium’s product magic is matched by the team’s amazing leadership and execution.
Reimagining Systems Management and Endpoint Security
When customers first see Tanium, they are blown away by the speed at which IT can learn what is going on with the endpoints on the network. Tanium’s capability to navigate, interrogate, and act on even the largest enterprise network in seconds is the magic that fires up customers: networks comprising millions of endpoints made up of PCs, servers, VMs, and embedded devices. This 15-second capability is the foundation of Tanium magic and is unprecedented for large-scale environments.
Traditionally, enterprises deploy Systems Management (SM) platforms to control their environments. Prior to Tanium, even the state-of-the-art tools require immense investment in agents, logon scripts, policies, signature files, databases, dedicated infrastructure (servers and networking), and more, just to provide base-level information. These tools frustrate end users and CIOs alike by choking endpoints, burdening networks, and offering up information that is approximate at best and at worst irrelevant, because it is outdated.
Tanium surpasses the state-of-the-art in systems management, which you’d expect from founders whose previous company built the leading tools of this generation before being acquired by IBM. Not content to stop there, Tanium’s ambition is much greater than improving on their previous solution, even if it is already “10,000 times better.”
That ambition is based on an important observation regarding today’s challenges in enterprise security, particularly the realities faced by the nature of attacks. Malicious attacks are no longer brute force attempts to penetrate the network firewall or simply blunt viruses or malware that indiscriminately seize endpoints. We’re all aware that today’s attacks are multi-step, socially enabled or engineered, and by definition circumvent network-based and traditional end-point protection. We’ve seen that in all the recent breaches across Target, Home Depot, JP Morgan, Sony and more, including Anthem most recently.
In every case, once a breach becomes known, the most critical job of the security team is to scope the breach, identify compromised endpoints, and shut them down. Traditionally, security teams relied on network-based management solutions, since those have the fastest and most familiar tools. In practice, quickly identifying all the endpoints with an unpatched OpenSSL version, or all that match a known indicator of compromise, looks much less like a network security effort and more like an endpoint challenge, historically the domain of systems management. The problem is that systems management tools were designed for an era when most of their work took place at logon or during off-hours “sweeps” of assets, with results gathered over the course of weeks.
CIOs recognize that having a systems management team using one set of tools that can barely keep up with traditional demands, and a security team using tools that are focused only on the network edge, isn’t ideal by any measure. Systems management is now an integral part of incident detection and response. Conversely, security and protection require full knowledge and control of endpoints. Neither set of existing tools deployed in most environments is up to the task.
Tanium has been working with customers from the CIO and CISO and throughout the management and response teams in enterprises to deploy Tanium as a frontline and first response platform that reimagines the traditional categories of systems management and endpoint security. In a world of unprecedented security risks, BYO devices, and ever-changing software needs nothing short of a rethinking of the tools and approaches is required.
Tanium is a new generation of security and systems management capabilities that meets two modern criteria:
- Provide 15-second information on all endpoints. Open your browser, type in a natural language query, and know instantly every endpoint that meets a particular criterion or indicator of compromise, or IOC (for example, running a certain process, recently modified system state matching a pattern, particular network traffic, or literally anything you can imagine asking the endpoint). Aside from instant information, the key new capability is being able to learn about any aspect of the running system even if it is something unforeseen or unplanned. Results are real-time, live, and refreshable instantly.
- Remedy problematic situations immediately. Given the set of endpoints matching the criteria, take action immediately by shutting down endpoints, modifying the system configurations, quarantining devices, alerting users, or patching the appropriate modules, all in seconds rather than days. Aside from being able to immediately deploy the remedy, the key new capability is being able to implement any possible remedy across all endpoints, even within the largest networks in the world using minimal infrastructure.
The most innovative products are those that provide new ways of thinking about problems or new approaches that break down the traditional category boundaries. Tanium is such a platform, and that is why enterprises are so enthusiastic about what Tanium provides.
Shipping New Capabilities
This week Tanium is releasing some significant new capabilities that further the vision of a new category of product that serves the needs of both systems management and security professionals.
Tanium IOC Detect. Open to a wide variety of highly-regarded third-party threat intelligence data and indicators of compromise templates, Tanium takes this data and continuously seeks to identify endpoints at risk in real-time. Tanium is able to match the widest possible range of system attributes and patterns without downloading client-side databases or signature files. Security operations no longer needs to sift through all of the intelligence feeds manually or script signatures to feed into legacy systems management tools. Instead, Tanium makes it possible to detect and remediate threats immediately at massive scale.
Tanium Patch. Tanium transforms a process that’s error-prone and time-consuming with the ability to deploy patches across hundreds of thousands of endpoints in seconds, with 99%+ reliability and no scripting required by the IT team. Using two of Tanium’s key architectural elements, the communications layer and the data transport layer, patches are deployed and installed with unprecedented speed and unrivaled minimal impact on network infrastructure. Since many security breaches require updates to endpoints to truly remedy them, Tanium brings together the needs of both security and management processes.
Tanium Connect. Tanium integrates its 15-second data into third-party security and management tools to make those tools more accurate and actionable. For example, Tanium’s ability to quickly see anomalies on endpoints can be used to create alerts in security information and event management (SIEM) systems. Traditionally this data would be impossible to collect or would be routed through existing systems management infrastructures, which are labor intensive and high-latency data sources. Tanium Connect provides the security operations data required to ascertain the threat and, because the data is only seconds old, the team knows it is worthy of investigation.
These are just a few of the improvements to Tanium’s 6.5 platform available this week.
Tanium’s magic innovation uniquely positions the company at the modern crossroads of systems management and security tools. Tanium’s platform reimagines these categories, while seamlessly working with existing infrastructure, and adds a new level of value and capability to forward-leaning IT teams.
Given this superb team, amazing growth, and unparalleled innovation, we could not be happier to lead a new round of investment in this wonderful company. Andreessen Horowitz is incredibly excited to be partnering with David, Orion, and the Tanium team, and I could not be more thrilled with continued service on Tanium’s Board of Directors.
Note: This post also appeared on http://a16z.com/blog.
No one wants friction in their products. Everyone works to reduce it. Yet it sneaks in everywhere. We collectively praise a service, app, or design that masterfully reduces friction. We also appreciate minimalism. We love when products are artfully distilled down to their essence. How do we achieve these broadly appreciated design goals?
Frictionless and minimalism are related but not necessarily the same. Often they are conflated, which can lead to design debates that are difficult to resolve.
A design can be minimal but still have a great deal of friction. The Linux command line interface is a great example of minimal design with high friction. You can do everything through a single prompt, as long as you know what to type and when. The minimalism is wonderful, but the ability to get going comes with high friction. The Unix philosophy of small cooperating tools is wonderfully minimal (every tool does a small number of things and does them well), but the learning and skills required are high friction.
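The Unix example can be sketched in a few lines of Python (the function names here are my own, standing in for cat, grep, and wc): each piece has a tiny surface area, yet a newcomer still has to know which pieces exist and how to chain them.

```python
# Each "tool" does one small thing and does it well (minimal surface area).
def read_lines(text):            # plays the role of cat
    return text.splitlines()

def match(lines, needle):        # plays the role of grep
    return [line for line in lines if needle in line]

def count(lines):                # plays the role of wc -l
    return len(lines)

# Minimal, but high friction: answering "how many lines mention error?"
# requires knowing all three pieces and the order in which to compose them.
log = "error: disk full\ninfo: ok\nerror: net down"
print(count(match(read_lines(log), "error")))  # 2
```

The surface area is tiny; the learning required to compose the pieces is the friction.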
- Minimalist design is about reducing the surface area of an experience.
- Frictionless design is about reducing the energy required by an experience.
When debating a design choice, feature addition, or product direction it can help to clarify whether a point of view originates from a perspective of keeping things minimal or reducing friction. If people discussing a decision start from this common understanding, I bet a decision will be reached sooner. Essentially, is the debate about adding a step or experience fork, or is it about adding something at all?
Product managers need to choose features to add. That is what makes all of this so difficult. As great as it is to stay pure and within original intent, if you and the team don’t enhance the capabilities of your product, then someone will do what you do, but with a couple more things or a different factoring, and you’ll be left in the dust.
Therefore the real design challenge is not simply maintaining minimalism, but enhancing a product without adding more friction. Let’s assume you built a product that does something very exciting, has very low friction, and does so with a minimal feature set. The next efforts are not about just watching your product, but about deciding how to address shortcomings, enhance, or otherwise improve the product to grow users, revenue, and popularity. The risk with every change is not simply failing to maintain minimalism, but introducing friction that becomes counterproductive to your goals.
When you look back you will be amazed at how the surface area of the product has expanded and how your view of minimalism has changed. Finding the right expression of new features such that you can maintain a minimalist approach is a big part of the design challenge as well.
There’s an additional design challenge. The first people who use your product will likely be the most enthusiastic, often the most technical, and in general the most desirous of features that introduce friction. In other words you will get the most positive feedback by adding features that ultimately will result in a product with a lot more friction.
Product managers and designers need to find the right balance as the extremes of doing nothing (staying minimal) and listening to customers (adding features) will only accelerate your path to replacement either by a product with more features or a product with less friction.
Low-Friction Design Patterns
Assuming you’re adding features to a product, the following are six design patterns to follow, each essentially reducing friction in your product. Friction is whatever causes the need to learn, consider, futz, or otherwise not race through the product to get something done.
- Decide on a default rather than options
- Create one path to a feature or task
- Offer personalization rather than customization
- Stick with changes you make
- Build features, not futzers
- Guess correctly all the time
Decide on a default rather than options. Everything is a choice. Any choice can be A/B tested or debated as to whether it works or not. The more testing you do, the more likely you are to find cohorts of people who prefer different approaches. The natural tendency will be to add an option or setting to allow people to choose their preference, or worse, you might interrupt their flow to ask for a preference. Make a choice. Take a stand. Every option is friction in the system (and code to maintain). When we added the wheel to the mouse in Office 97, there was a split in the team over whether the wheel should scroll down or zoom in/out. From the very first release there was an option to appease the part of the team that felt zoom was more natural. Even worse, the Word team went and did a ton of work to make zoom performant, since it was fairly unnatural at the time.
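The mouse-wheel story can be sketched as an API choice (a hypothetical illustration, not actual Office code): every option you add is a branch the user must consider and code you must maintain forever.

```python
# Anti-pattern: an internal debate leaks out as an option.
def on_wheel(direction, mode="scroll"):
    # mode="zoom" exists only to appease half the team; now it must be
    # documented, tested, and kept performant indefinitely.
    if mode == "zoom":
        return "zoom in" if direction == "up" else "zoom out"
    return f"scroll {direction}"

# Preferred: make a choice and take a stand. One behavior, zero settings.
def on_wheel_v2(direction):
    return f"scroll {direction}"

print(on_wheel_v2("up"))  # scroll up
```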
Create one path to a feature or task. You add a new feature and all is good—you’re in X in your product and then you can do Z. Then someone points out that there are times when you are doing Y in your product and you also want to do Z. Where there was once one path to get to a feature, you now think about adding a second path. Maybe that sounds easy enough. Then a few iterations down the road you have 5 different ways to get to Z. This whole design process leads to shortcuts, floating buttons, context menus, and more. Again, all of these are favored by your early adopters, add friction for everyone else, and add code. Pick the flow and sequence and stick with it. The most famous debate of all between Windows and Mac was over right-click, and it still rages. But the design energy needed to populate context menus and the cognitive load of knowing what you can or cannot do from there are real. How many people have right-clicked on a file on the Windows desktop and clicked “Send,” only to be launched into some Outlook configuration dialog, when it would have been frictionless to always know that inserting an attachment in mail works and nothing will fail?
Offer personalization rather than customization. Early adopters of a product love to customize and tweak. That’s the nature of being a tech enthusiast. The theory is that customization makes a product easier to use because every use case is different enough that the time and effort saved by customization is worth it and important. In managing a product over time, however, customization becomes an engineering burden that is nearly impossible to maintain: when you want to change behavior or add a feature, the customized pieces may not be where you expect or may have moved entirely. The ability in Office to reorganize all the toolbars and menus seemed super cool at the time. Then we wanted to introduce a new scalable structure that would work across resolutions and input devices (the ribbon). The problem was not just the upgrade but the reality that the friction introduced in using Office by never knowing where the menus might be (at the extreme, one could open a document that would rearrange the UX) was so high the product was unusable. Enterprise customers were rearranging the product such that people couldn’t take courses or buy books on how to use Office. The constraint led to the addition of a single place for personalization (the Quick Access Toolbar), which ultimately allowed for a much lower-friction design overall by enabling personalized efficiency without tweaking the whole experience.
Stick with changes you make. The ultimate design choice is when you change how a feature used by lots of customers works. You are choosing to deliberately upend their flow and add friction. At the same time the job of designing a product is moving it forward to new scenarios and capabilities, and sometimes that means revisiting a design choice, perhaps one that has become the standard. It takes guts to do this, especially because you’re not always right. Often the path is to introduce a “compatibility mode” or a way to turn your new product into the old and comfortable product. This introduces three problems. First, you have to decide what the default will be (see the first rule above). Second, you have to decide if/how to enhance the old way of doing things while you’re also adding new things. Third, you have to decide when down the road you remove the old way, but in reality that will be never, because you already told customers you value it enough to keep it around. But adding compatibility mode seems so easy and customer friendly! Ultimately you’re creating a technical debt that you can never dig out of. At the same time, failing to make big changes like this almost certainly means your product will be surpassed in the marketplace. See this HBS case on the Office 2007 Ribbon design: http://www.hbs.edu/faculty/Pages/item.aspx?num=34113 ($).
Build features, not futzers. Tools for creativity are well-known to have elaborate palettes for formatting, effects, and other composition controls. Often these are built on amazing “engines” that manage shapes, text, or image data. Historically, tools of creativity have prided themselves on exposing the full range of capabilities enabled by these engines. These vast palettes of features and capabilities came to define how products compete in the marketplace. In today’s world of mobility, touch interfaces, and timely/continuous productivity, people do not necessarily want to spend time futzing with all the knobs and dials and seek to minimize time from idea to presentation—call this the Instagram effect. Yet even today we see too many tools that are about debugging your work, which is vastly different than getting work done. When a person needs a chart, a table, a diagram, or an image, how can you enable them to build that out of high-level concepts rather than the primitives that your engine supports? I was recently talking to the founder of an analytics company struggling with customer input on tweaking visualizations, which was adding complexity and taking engineering time away from adding whole new classes of visualization (like maps or donut charts). You’ll receive a lot of input from early customers to enable slightly different options or adjustments, which will both challenge minimalism and add friction to your product without growing the breadth of scenarios your product enables. Staying focused on delivering features will enable your product to do more.
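The difference between features and futzers can be sketched with a hypothetical charting API: the futzer route exposes the engine’s primitives (arcs and angles), while the feature route lets the user start from data. The function names are invented for illustration.

```python
def arc_spans(values):
    """Primitive-level engine work: convert raw values into (start, end)
    angles in degrees. A futzer UI would expose these knobs directly."""
    total = sum(values)
    spans, start = [], 0.0
    for v in values:
        end = start + 360.0 * v / total
        spans.append((round(start, 1), round(end, 1)))
        start = end
    return spans

def donut_chart(values):
    """Feature-level call: users bring data, not angles. The engine and
    its defaults handle the primitives underneath."""
    return {"type": "donut", "arcs": arc_spans(values)}

chart = donut_chart([1, 1, 2])
print(chart["arcs"])  # [(0.0, 90.0), (90.0, 180.0), (180.0, 360.0)]
```

The high-level call grows the breadth of scenarios (donuts, maps, and so on); exposing `arc_spans` would only invite tweaking.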
Guess correctly all the time. Many of the latest features, especially those based on machine learning or statistical models, involve taking action based on guessing what comes next. These types of features are magical, when they work. The challenge is they don’t always work, and that drives a friction-filled user experience. As you expand your product to these areas you’re going to want to find the right balance of how much to add and when; being patient rather than guessing too much too soon is a good practice. For better or worse, customers tend to love features that guess right 100% of the time, and even if you’re wrong only 1% of the time, that 1% feels like a much higher error rate. Since we know we’re going to be learning and iterating in this regard, a best practice is to consider how frictionless you can make incorrect guesses. In other words, how much energy is required to skip a suggestion, undo an action, or otherwise keep the flow going and not stop to correct what the software thought was right but wasn’t. Let’s just call this lessons from “bullets and numbering” in Word :-)
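One way to make wrong guesses cheap is to pair every automatic action with a zero-effort undo. A minimal sketch (the names are invented, loosely inspired by the bullets-and-numbering example, not actual Word behavior):

```python
def suggest_numbering(previous_line):
    """Guess that a line starting with "1." should be followed by "2."."""
    if previous_line.strip().startswith("1."):
        return "2."
    return None

def apply_with_undo(doc, suggestion):
    """Apply a guess but return a zero-argument undo alongside it, so
    rejecting a wrong guess costs one action, not a trip to a dialog."""
    new_doc = doc + [suggestion]
    return new_doc, (lambda: doc)

doc = ["1. First item"]
guess = suggest_numbering(doc[-1])
doc2, undo = apply_with_undo(doc, guess)
print(doc2)    # ['1. First item', '2.']
print(undo())  # ['1. First item']
```

The design measure here is not the guess’s accuracy but the energy required to recover when it is wrong.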
Finally, a word of caution on what happens as you expand your customer base when it comes to adding features. Anything you want to do in a product can be “obvious” either from usage data or from customer input. The challenge in product management is to create a core set of principles or beliefs about how you want to move the product forward that allow you to maintain the essential nature of your product while adding new features. The tension between maintaining existing customers via stability or incremental improvements versus keeping pace with where the marketplace is heading is the classic design challenge in technology products.
It shouldn’t be much of a surprise, but a great deal of product bloat comes from adding the obvious feature or directly listening to customers, or by failing to stick with design patterns. Ironically, efforts to enhance products for today’s customers are often the very features that add friction, reduce minimalism, and lead to overall bloat.
Bauhaus to Bloatware
This march from Bauhaus to Bloatware is well-known in our industry. It is part of a cycle that is very difficult to avoid. It is not without irony that your best and most engaged customers are often those pushing you to move faster down this path. Most every product in every segment starts minimal and adds features over time. At each juncture in the evolution of the product there is a tension over whether additions are the right marketplace response or simply bloat.
This march (and tension) continues until some complete rethinking introduces a new minimal product addressing most of the same need but from a different perspective. The cycle then starts again. Operating systems, databases, instruction sets, peripheral connection, laptops, interfaces, word processors, and anything you can name has gone through this cycle.
This re-evolution or reimagination of a product is key to the long term viability of any technology.
By adhering to a set of design principles you are able to expand the breadth of use cases your product serves while working to avoid simply adding more friction to the core use cases.
After publication three typos were fixed and the example of personalization clarified.
One of the biggest changes for an early-stage and growing company is when hiring transitions from technical/product founders to the first sales or marketing hires. It is an exciting time of course but also one that can be very stressful. As much as that can be the case, there are a few patterns and practices one can follow to successfully cross that chasm or at the very least reduce the risk to the same as any technical hire.
It goes without saying that the challenge is rooted in learning how to recognize and evaluate people who possess talents and skills that you do not have and really can’t relate to from an experience level. Quite a few roles in companies are going to be “close” or adjacent to your own skill set, speaking from the perspective of a technical founder. If you’re an engineer, then QA or product management aren’t far off from what you do on a daily basis. If you tilt toward product management, your interactions with designers are perfectly natural. In fact for technical founders the spectrum from design to product management to engineering and then QA all feels like your wheelhouse.
Branching out further to sales, marketing, communications, business development, customer service, operations, supply chain, manufacturing, finance, and more can get uncomfortable very quickly. I remember the first time I had to interview a marketing person and I realized I didn’t even know what questions I should ask to do the interview. Yet I had worked with marketing closely for many years. Fortunately, I had a candidate pipeline and an interview loop of experienced interviewers to draw from. That’s not always the case with a startup’s first hires.
The following are four challenges worth considering and a step you can take to mitigate the challenges if you find yourself in this spot.
Look only within your network. When sourcing your first potential sales or marketing hire, you might tend to tap into your network the same way you would for an engineering hire. You might have a very broad network, but it might not be a first-person network. For example, with engineering you might know people from the same school program or projects you worked on or are deeply familiar with. But with sales and marketing you probably lack that much common context, and your network might reflect people you came across in work or projects, but not necessarily worked with in the same way you would have with technical hires. You might be worried about taking too much time to source candidates or concerned that you will burn a lot of time on introductions and people you don’t “know” well. Approach. The first step in a breakthrough hire process is to make sure you cast a wide net and tap into other networks. This process itself is an investment in the future, as you will broaden your network in a new domain.
Define the job by what you know from the outside. Walking a mile in another’s shoes is an age-old expression and is very fitting for your first sales or marketing hire. Your initial job description for a job you have never done might be taken from another company or might be based on your view of what the job needs to get done. The challenge is that your view of what needs to get done is informed by your own “outsider” view of what a job you haven’t done before might mean. Being a sales or marketing person is vastly different from what it looks like from the outside looking in. If you haven’t done the job, you tend to think of it through the lens of outputs rather than the journey from input to output. Most jobs are icebergs, and the real work is the 90% under water. Until you’ve watched and worked an enterprise sale end to end or developed and executed a consumer go-to-market plan, your view of what the job looks like might be a sales presentation or SEO. Getting to those deliverables is a whole different set of motions. Approach. Find a way to have a few “what do you do” conversations with senior people in the job function. Maybe take some time to ask them to define for you what they think the steps would be to get to the outcome you are looking for, rather than to discuss the outcome. These “what would it take” conversations will help you to craft a skills assessment and talent fit.
Hire too senior or too junior. Gauging the seniority of a candidate and matching that to the requirements of the role are often quite tricky early on. In the conversations I’ve had, I tend to see founders head to one extreme or another. Some founders take the outcome or deliverable they might want (white paper, quota) and work backwards to find a hire to “execute” on that. Others take the opposite extreme and, acknowledging what they don’t know, bring in a senior person to build out the whole system. The reality is that for a new company you are often best off with someone in the middle. Why is that? If you hire too junior, the person will need supervision on a whole range of details you haven’t done before. This gets back to defining the job based on what you know—your solution set will be informed only by the experience you have had. If you hire someone too senior, then they will immediately want to hire in the next round of management. You will quickly find that one hire translates into three, and you’re scaling faster than you’re growing. I once talked to a company that was under ten engineers and hired a very senior marketing leader with domain experience who then subsequently spent $200K on consulting to develop a “marketing plan”. Yikes. Approach. Building on the knowledge you gained by casting a wide net and by taking the time to learn the full scope of work required, aim for the right level of hire who will “do the work” while “scaling the team”.
Base feedback on too small a circle. Once you have a robust job description, candidate flow, and ways to evaluate, it is not uncommon to “fall back” on a small circle of people to get feedback on and evaluate the candidate. You might not want to take up the time of too many people, or you might think that it is tricky for too many people to evaluate a candidate. At the other end, you might want these first hires to be a consensus-based choice among a group that collectively is still learning these multi-disciplinary ropes. Culture fit is always a huge part of hiring, especially early on, but you’re also concerned about bringing in a whole new culture (a “sales culture” or “business culture”), and that contributes to the desire to keep things contained. Approach. Getting feedback from at least one trusted senior person with experience and success making these specific hires is critical. You can tap into your board or investors or network, but be sure to lean on those supporting you for some validation and verification.
One interesting note is that these challenges and approaches aren’t unique to startups. It turns out they follow similar patterns in large companies as well, as you rise from engineering/product to business or general management. While you might think that in a big company the support network insulates you from these challenges, I’ve seen (and experienced personally) all of the above.
The first sales or marketing hires can be pretty stressful for any technologist. Branching out to hire and manage those that rely more than you on the other side of their brain is a big opportunity for growth and development not only for the company but for you personally. It is a great time to step back and seek out support, advice, and counsel.
The “Internet of Things” or IoT is cool. I know this because everyone tells everyone else how cool it is. Ask anyone and they will give you their own definition of what IoT means and why it is cool. That’s proof we are using a buzzword or are in a hype-cycle.
Much is at stake to benefit from, contribute to, or even control this next, next-generation of computing. If a company benefitted from 300 million PCs a year, that’s quite cool. If another company benefitted from 1 billion smartphones a year, then that’s pretty cool.
You know what is really cool, benefitting from 75 billion devices. That certainly explains the enthusiasm for the catch phrase.
Missing out on this wave is uncool. Just take a look at the CNBC screen shot to the left. That’s what we talked about in the Digital Innovation class at HBS last week and what motivated this post.
In an effort to quantify the opportunity, claim leadership, or just be included amongst those who “get it” we are all collectively missing the fact that we really don’t know how this will play out in any micro sense. It is safe to say everything will be connected to the internet. That’s about it. As Benedict Evans says, counting connected devices is a lot like counting how many electric motors are in your home. In the first days this was cool. Today, that seems silly. Benedict’s excellent post also goes into details asking many good questions about what being connected might mean and here I enhance our in-class discussion.
One way to view the history of “devices” is through two generations in the 20th century. For the first 50 years we had “analog motor” devices that replaced manual mechanical devices. This was the age of convenience brought by motors of all kinds from giant gas motors that produced electricity to tiny DC motors that powered household gadgets and everything in between. People very quickly learned the benefits of using motors to enhance manual effort. Though if you don’t think it was a generational shift, consider the reactions to the first labor saving home appliances (see Disney’s Carousel of Progress).
The next 50 years were about “digital electronics”, which began with the diode, then the transistor, and then the microprocessor. What is amazing about this transition is how many decades passed before the full transformation took place. Early on, electronics replaced analog variants. Often these were viewed as luxuries at best, or inferior “gadgets” at worst. I recall my father debating with a car dealer the merits of “electronic fuel injection”. Many of us reading this certainly recall (or still believe) the debate over the quality of digital music relative to analog LPs and cassettes. Interestingly, the benefits we all experience today of size, weight, power consumption, portability, and more took years to gain acceptance. We used to think about “repairing” a VCR and how awful it was that you could not repair a DVD player. Go figure. The key innovation insight is that the benefits of electronics took decades to play out and were not readily apparent to all at the start.
We find ourselves at the start of a generation of invention where everything is connected. We are at the early stages where we are connecting things that we can connect, just like we added motors to replace the human turning the crank on a knitting loom. Some inventions have the magic of the portable radio—freedom and portability. Some seem as gimmicky as that blender.
Here are a few things we all know and love today that have already been transformed by “first generation” connectivity:
For the next few years, thousands of innovators will embark on the idea maze (Chris Dixon summarizes Balaji Srinivasan’s lecture). This is not just about product-market fit, but about much more basic questions. Every generational change in technology introduces a phase of crazy inventing, and that is where we are today with IoT.
This means that for the next couple of years most every product or invention, at first glance, might seem super cool (to some) and crazy to most everyone else. Then after a little use or analysis, more sober minds will prevail. The journey through the idea maze and engineering realities will continue.
This also means that every “thing” introduced will be met with skepticism from the broader, less tech-enthused market (like our diverse classroom). Every introduction will seem more expensive, more complex, more superfluous than what is currently in use. In fact it is likely that even the ancillary benefits of being connected will be lost on most everyone.
That almost reads like the definition of the innovator’s dilemma. Nothing sums this up more than how people talk about smart “watches”, connected thermostats, or robots. One either immediately sees the utility of strapping to your wrist a sub-optimal smartphone you have to charge midday, or asks why you can’t just look at your phone’s lock screen for the time. One looks at the Nest thermostat and asks why paying 10X for the luxury of having a professional HVAC installer get stumped, or having to “train” something you used to set and forget, is such a good idea.
We find ourselves in the midst of a generational change in the technology base upon which everything is built. It used to be that owning an “electric” or “electronic” thing sounded modern and cool, well because they were so unique. That’s why adding “connected” or “smart” to a product is going to sound about as silly as saying “transistor radio” or “electronic oven”.
Every thing will be connected. The thing is we, collectively, have neither mastered connecting a thing without some downside (cost, weight, complexity) nor even figured out what we would do when something is connected. What are the equivalents of size, weight, reliability, ease of manufacturing, and more when it comes to connectivity? Today we do the “obvious” such as use the cloud for remote relay, access, storage. We write an app to control something over WiFi rather than build in a physical user interface. We collect and analyze data to inform usage or future products. There is more to come. How will devices be connected to each other? How will third parties improve the usage of things and just make them better? Where do we put the “smarts” in a thing when we have thousands of things? How might we find we are safer, healthier, faster, and even just happier?
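The “obvious” patterns listed above (cloud relay, remote access, an app in place of a physical interface) can be sketched abstractly; the class and method names here are hypothetical illustrations, not any real IoT protocol.

```python
class CloudRelay:
    """Minimal sketch of today's default connected-thing pattern: device
    state lives in a cloud relay; apps read and write it remotely."""
    def __init__(self):
        self._state = {}

    def publish(self, device_id, state):
        # The device pushes its current state up to the relay.
        self._state[device_id] = dict(state)

    def command(self, device_id, **changes):
        # An app writes desired changes instead of turning a physical dial.
        self._state.setdefault(device_id, {}).update(changes)
        return dict(self._state[device_id])

    def read(self, device_id):
        # An app (or a third party) reads state from anywhere.
        return dict(self._state.get(device_id, {}))

relay = CloudRelay()
relay.publish("thermostat-1", {"target": 68, "current": 65})
relay.command("thermostat-1", target=70)   # the app replaces the dial
print(relay.read("thermostat-1")["target"])  # 70
```

The open questions in the text (device-to-device connections, where the “smarts” live) are exactly the parts this sketch leaves out.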
We just don’t know yet. What we do know is that a lot of entrepreneurs and innovators across companies are going to try things out and incrementally get us to a new connected world, which in a few years will just be the world.
The Internet of Things is not about the things or even the platform the same way we thought about motors or microprocessors. The big winners in IoT will be thinking about an entirely different future, not just connecting to things we already use today in ways we already use them.
CES 2015 was another amazing show. Walking around the show one can only look with wonder about the amazing technologies being invented and turned into products. Few things are as energizing or re-energizing as systematically walking the booths and soaking it all in. I love CES as a reminder of the amazing opportunity to work in this industry.
Taking a moment to share what I walk away with is always helpful to me—writing is thinking. Every day we have the chance to talk to new companies about products under development and ideas being considered, and CES provides a great cross-industry context about what is going on. This is especially important because of the tendency to look too much to the massive companies that might dominate our collective point of view. My experience has been that spending energy on what is going on at CES unlocks potential opportunities by forcing you to think about problems and solutions from different perspectives.
While this post goes through products, there are many better sources for the full breadth of the show. I try to focus on the broader themes that I walk away with after spending a couple of days letting everything sort of bake for a bit. This year I wanted to touch on these 5 major themes and also include a traditional view of some of the more “fun” observations:
- Batteries, wires, simplicity
- Displays popping up everywhere
- Cameras improving with Moore’s law
- Sensors sensing, but early (and it’s all about the data)
- Connectivity gaining ubiquity
- Fun Products
Ever the product manager (PM) I try to summarize each of these sections with some top-line PM Learning to put the post into action.
Click on images for larger version. All photos by me unless noted.
Batteries, wires, simplicity
PM Learning: Of course optimize your experiences to minimize impact on battery life, but don’t assume your competitors will be doing the same. Think about the iPhone OS and built-in apps navigating that fine line. If you’re making new hardware, assume standard connectors, betting on USB Type-C for charging and HDMI for video.
The best place to start with CES is batteries and wires, because that’s what will follow you around the entire show—everyone walks the show floor in search of outlets or with an auxiliary battery and cable hanging off their phone. Batteries created the portable consumer electronics revolution, but we’re also tethered to them far too often. The good news is that progress is real and continues to be made.
Behind the scenes a great deal of progress is being made on power management in chipsets, even wireless ones. On display at the show were Bluetooth keyboards that can go a year on a single charge and wireless headphones good for days of normal usage.
Progress is also being made on battery technology, making possible smaller, lighter, and faster-charging batteries. While these are not dramatic 2X or 3X improvements, they are real.
The first product I saw was an LG cordless vacuum that offers 70 minutes of usage and cleaning power that passes the classic bowling-ball suction test. Truly something that makes everything easier.
Batteries are an important part of transportation, and Panasonic is the leading manufacturer right now of large-scale batteries for transport. On display was the GoGoRo urban scooter. This is not just a battery-powered scooter: it can go 95 km/h, is cloud-connected with GPS locator maps, and can go 100 km on a pair of batteries. All that alone would be enough. But the batteries can also be swapped out in seconds and you’re on the go. The company plans to build a network of charge stations to go with a subscription business model. I love this whole concept.
Panasonic also makes batteries for the Tesla so here is a gratuitous picture of the gratuitous Tesla Model X on display.
While all consumer electronics have aimed for simplicity since the first blinking 12:00 on a VCR, simplicity has been elusive due to the myriad of cables, connectors, remotes, and adapters. Normally a CES trip report would include the latest in cable management, high tech cables, or programmable remotes. Well, this year it is fair to say that these whole categories have basically been subsumed in a wave of welcome simplicity.
Cables, to the degree they are needed, have mostly been standardized on HDMI for video and USB for charging and peripherals. With the forthcoming USB Type-C, even USB will be standardized. The Apple connectors are obviously all over, though easily adapted to micro-USB for now (note to makers of third-party batteries: margins are tight, but using an MFi logo and an Apple cable end would be welcome). When you do need cables they are getting better. It was great to see an awesome fiber-optic cable from Corning that works for USB (also DisplayPort). It makes the cable much thinner and more flexible while increasing the signal travel distance, since it uses active powered ends. An HDMI version is in the works.
While most attention went to smart watches with too many features, Casio's latest iteration offered a new combination of better battery life and low-power radios. The new watch uses solar charging along with a GPS receiver (and also the low-power radio signals) to set the time based on location. And it is not even huge.
Bringing this theme of no wires and improved batteries to a new extreme, the wireless earbuds from Bragi are aggressive in their feature set, incorporating not just Bluetooth for audio but a microphone for talking and sensors for heart rate (though not likely very reliable) and temperature (of unclear practical use). Certainly worth a look when they become available. Photo by Bragi.
Displays popping up everywhere
PM Learning: Curved is here. Too much energy is going into this. Expect to find new scenarios (like signage) and thus new opportunities. Resolution at 4K and beyond is going to be normal very quickly and with a price premium for a very short time. Pay close attention to web page design on high resolution and high DPI (assets). Many opportunities will exist for new screens that will run one app in a fixed function manner for line of business or in consumer settings—these are replacing static signs or unmanageable PCs. We’re on the verge of broadly deployed augmented reality and totally soft control screen layouts, starting with cars.
More than anything, CES continues to be the show about TV.
Curved screens are getting a lot of attention and a lot of skepticism, some of which is warranted. Putting them in historical context, each generation of screen innovation has been greeted in a similar manner. Whether too expensive, too big, too incremental, or just not useful, the reasons a new screen technology wasn't going to take off have been plentiful. While curved seems weird to most of us (and frankly even the maker is trying too hard to justify it, as seen in the pseudo-scientific Samsung depictions below), it has compelling utility in a number of scenarios. Skeptics might be underestimating the architectural enthusiasm for the new screens as well.
The most immediate scenario is one that could be called the "Bloomberg desktop," and here you can see it on display. It is very compelling as a single-user setup, a "mission control" station, or a group monitoring station.
Signage is also incredibly important and the architectural use of curved screens as seen below will become relatively commonplace because of the value in having interactive and programmable displays for advertising and information.
Speaking of signage, for years we’ve seen the gradual migration of printed signs to signage driven by PCs to even one year where all the screens were simply JPEGs being played in those ever-present photo frames. This year saw a number of compelling new signage products that combined new multi-screen layouts with web-based or app-based cloud platforms for creating dynamic layouts, incorporating data, and managing a collection of screens. Below we can see an example of an active menu display and the tool for managing it. Following that is a complex multi-screen 4K layout (narrow bezel) and associated tool.
For home or entertainment, there were dozens of cinematic 21:9 4K curved screens at massive sizes. Maybe this transition will be slower (the replacement cycle for TVs is slow anyway) given the need for new thinking on where to put these. This year at least showed some wall-mounting options.
Curved screens are also making their way into small devices. Last year saw the LG Flex, and an update was available this year. Samsung introduced a Galaxy Note Edge with a single curved edge. They went to great lengths in the software to use this as an additional notification band. I'm a bit skeptical of this, as it was difficult to use without thinking hard about where to put your hand (at least in a minute of booth use).
I don’t want to gloss over 4K, but suffice it to say every screen was 4K or higher. I saw a lot of skeptical coverage about not being able to see the difference or “how far away are you”. Let’s all just move on. The pixels are here and pretty soon it will just be as difficult to buy an HD display as it is to buy 512MB SIMMs or 80GB HDDs. That’s just how manufacturing at scale works. These screens will soon be cheaper than the ones they are replacing. Moore’s law applies to pixels too. For the skeptics, this exhibit showed how resolution works.
Screens are everywhere, and that's the key learning this year. There were some awesome augmented-reality displays that have been talked about for a long time but are quickly becoming practical and cost-effective. Below is a Panasonic setup that can be used to try on cosmetics, either in store or in a salon. It was really amazing to see.
Continuing with augmented or heads-up displays, this was an amazing dashboard in a concept car from Toyota that showed off a full dash of soft controls and integrated augmented screens.
At a practical level, Sharp and Toshiba were both showing off ready-made dashboard screens that will make it into new cars as OEM components or as aftermarket parts.
Cameras improving with Moore’s law
PM Learning: Cameras continue to gain more resolution but this year also showed a much clearer focus (ha) on improving photos as they are captured or improving video by making it smarter. Cameras are not just for image capture but also becoming sensors in their own right and integrated into sensing applications, though this is just starting. My favorite advance continues to be the march towards more high dynamic range as a default capture mode.
Digital cameras made their debut in the early 1990s with 1MP still images that were broadly mocked by show attendees and reviewers. Few predicted how Moore's law would rapidly improve image quality while flash memory became cost-effective for these large CCDs, and then how mobile phones would make these sensors ubiquitous. Just amazing to think about.
High Dynamic Range started off as a DSLR trick, then became something you could turn on, and is now an automatic feature on most phones. In standalone cameras it is still a bit of a trick. There are complexities in capturing moving images with HDR that can be overcome. Some find the look of HDR images "artificial," but in reality they are closer to the range of the human eye; this feels a bit like the debate when the first music CDs faced off against vinyl. Since the human eye has anywhere from 2 to 5 times the range of today's sensors, it only makes sense to see this more and more integrated into the core capture scenario. Below is a Panasonic 4K professional video camera with HDR built in.
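The capture-side idea behind HDR can be illustrated with a toy exposure-fusion pass: blend bracketed frames, weighting each pixel by how well exposed it is. This is a minimal sketch of the general technique, not any camera's actual pipeline; the function name and sigma value are illustrative.

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend bracketed exposures, weighting pixels near mid-gray.

    frames: list of float arrays in [0, 1], all the same shape.
    Pixels close to 0.5 (well exposed) get high weight, so the fused
    image keeps shadow detail from the long exposure and highlight
    detail from the short one.
    """
    frames = [np.asarray(f, dtype=float) for f in frames]
    weights = [np.exp(-((f - 0.5) ** 2) / (2 * sigma ** 2)) for f in frames]
    total = sum(weights) + 1e-12  # guard against all-zero weights
    return sum(w * f for w, f in zip(weights, frames)) / total
```

Real implementations (e.g. Mertens-style fusion) add contrast and saturation terms and blend across image pyramids, but the weighting idea is the same.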
Facility security is a key driver of camera technology because of the need for wide views, low light, and varying outdoor conditions. A company that specializes in time-lapse imaging (for example, of construction sites) introduced a time-lapse HDR camera.
In security settings, low light usually means switching to infrared cameras. For many, the loss of color has always been odd. Toshiba was showing off the first 720p infrared camera that generates a color image even at 0 lux. This is done using software to map the image to a colorized palette. You can see a traditional infrared image and the color version side by side in a cool interactive booth.
In thinking about cameras as ubiquitous, this very clever camera+LED bulb combination really struck me. Not only is it a standard PAR LED bulb, but it adds a Wi-Fi web camera. Lots of potential uses for this.
DSLRs still rule for professional use and their capabilities are still incredible (as they should be, for what you carry around). Nikon surprised even their own folks in the booth by announcing their first Phase Fresnel lens, a 300mm f/4. Canon has a 400mm lens (their "DO" designation). These lenses deliver remarkable image quality along with immense reductions in size and weight. Seen below are the classic 300mm f/4 and the new "PF" version. Add to cart :-)
Finally, Nikon repeated their display of 360-degree stop-action, Matrix-like photography. It is really an amazing demo, with dozens of cameras snapping a single image to provide a full walk-around. Just love the technology.
Sensors sensing, but early (and it is all about data!)
PM Learning: We are just starting on sensors. While many sensors are remarkably useful today, the products are still first generation and I believe we are in for an exponential level of improvement. For these reasons, I continue to believe that the wearable sensors out there today are interesting for narrow use cases but still at the early part of the adoption curve. Innovation will continue, but for the time being it is important to watch (or drive) the exponential changes. Three main factors will contribute to this:
- Today’s sensors are usually taking one measurement (and often that is a proxy for what you want). These are then made into a single purpose product. The future will be more direct measurements as sensors get better and better. There’s much to be invented, for example, for heart rate, blood sugar, blood pressure, and so on.
- Sensors are rapidly improving in silos but will just as rapidly begin to be incorporated into aggregate units to save space, battery life, and more. There are obvious physical challenges to overcome (not every sensor can be in the same place or in contact with the same part of a body or device).
- Data is really the most important element and key differentiator of a sensor. It is not the absolute measurement but the way the measurement is put in context. The best way to think of this is that GPS was very useful but even more useful when combined with maps and even more useful when those maps add local data such as traffic or information on a destination.
Many are still working to bring gesture recognition to different scenarios. There remains some skepticism, perhaps rooted in the gamer world, but for many cases it can work extremely well. These capabilities can be built into cameras or, depending on the amount of recognition required, into graphics chipsets. I saw two new and neat uses of gesture recognition. First, this LG phone was using a gesture to signal the start of a self-timer for taking selfies (just hold out your hand, let it recognize you, squeeze, and the timer starts). This was no selfie stick (which I now carry around all the time due to the a16z selfie-stick investments), but interesting.
This next demonstration showed gestures used in front of an automobile screen. It was only a proof of concept with a lot of potential gestures, but there are interesting possibilities here.
The incorporation of image recognition turns a camera into a sensor that can be used for a variety of purposes. This was a camera demo that ended up looking like the TV show Person of Interest.
There were quite a few products demonstrating eye tracking. This is not a new technology, but it has become very cheap very quickly. What used to take very specialized cameras can now be done with off-the-shelf parts and some processing. What is still missing are use cases beyond software usability labs and medical diagnostics :-)
This take on eye tracking, called the Jins Meme, integrated eye tracking and other sensors into hipster glasses. Again, the scenarios aren't quite there yet, but it is very interesting. They even package it up in multi-packs for schools and research.
There were many products attempting to sense things in the home. I feel most of these will need to find better integration with other scenarios rather than being point solutions but they are all very interesting to study and will still find initial use cases. This is how innovation happens.
One of the more elaborate sensor products is called Mother. It packages up a number of sensors that connect wirelessly to a base station, temperature and motion sensors among them. You just place these little sensor chips near whatever you want to monitor. Then a nice app translates sensing events into notifications.
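That "sensing events into notifications" step can be sketched as a tiny rule engine. The rule shapes, thresholds, and message templates below are illustrative assumptions, not Mother's actual API.

```python
# Minimal sketch: map raw sensing events to notification strings via
# declarative (predicate, message_template) rules.
def build_notifier(rules):
    """rules: list of (predicate, message_template) pairs."""
    def notify(event):
        # An event is a plain dict; every matching rule yields a message.
        return [msg.format(**event) for pred, msg in rules if pred(event)]
    return notify

rules = [
    (lambda e: e["type"] == "temperature" and e["value"] > 30,
     "High temperature ({value} C) at {sensor}"),
    (lambda e: e["type"] == "motion",
     "Motion detected at {sensor}"),
]
notify = build_notifier(rules)
```

For example, `notify({"type": "motion", "sensor": "front door"})` would produce a single motion alert, while an in-range temperature reading produces nothing.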
There were even sensors for shoes and socks. If you've ever had foot issues, you know the challenge of trying to replicate your pain while being monitored by a high-speed camera or even a fluoroscope/X-ray. These sensors, such as this one in a sock, have an immediately interesting medical use under physician supervision. Like many of the sensors, I feel that is the best use case; the home-use case doesn't seem quite right yet because of the lack of accessible scientific data.
The Lillypad floats around in your pool and takes measurements of the water and wirelessly sends them to an app. It also measures UV light as a clever bonus.
Speaking of pools, this was such a clever sensor. It is a Bluetooth radio that you pair with your phone and have kids wear around a pool. When a kid is submerged, it notifies you, either immediately or after a set time (I learned the national standard for underwater distress is 25 seconds). The big trick: there's no technology here beyond the fact that Bluetooth doesn't travel under water. Awesome!
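A minimal sketch of the logic this implies on the phone side: treat a dropped Bluetooth connection as "submerged" and alert once the drop persists past a threshold. The class and method names are mine, and timing is fed in explicitly rather than read from a real Bluetooth stack.

```python
# The post cites 25 seconds as the underwater-distress standard.
DISTRESS_THRESHOLD_S = 25

class SubmersionMonitor:
    """Alert when a Bluetooth link has been down longer than a threshold."""

    def __init__(self, threshold=DISTRESS_THRESHOLD_S):
        self.threshold = threshold
        self.disconnected_since = None  # timestamp of first missed contact

    def update(self, connected, now):
        """Feed connection state at time `now` (seconds); return True to alert."""
        if connected:
            self.disconnected_since = None  # back above water, reset
            return False
        if self.disconnected_since is None:
            self.disconnected_since = now
        return (now - self.disconnected_since) >= self.threshold
```

An "alert immediately" mode is just the same monitor with `threshold=0`.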
In a previous post, the notion of ingredients versus products at CES was discussed. To emphasize what this means in practice, the montage below is from a vendor that literally packaged up every point sensor into a "product." This allows for a suite of products, which is great in a catalog but awfully complex for a consumer. There were a dozen manufacturers displaying a similar range of single-sensor products. I don't know if this is sustainable.
Connectivity gaining ubiquity
PM Learning: Duh, everything will be connected. But unlike previous years, this is now in full execution mode. The biggest challenge is what “things” get connected to what things or networks. When do you put smarts somewhere? Where does data go? What data is used?
Everything is going to be connected. This has been talked about for a long time, but it is really here now. The cost of connectivity is so low that, at least in the developed world, assuming either Wi-Fi or WWAN (via add-on plans) is rational and economical. This will introduce a lot of complexity for hardware makers who traditionally have not thought about software. It will make room for new players that can rethink scenarios and where to put the value. Some devices will improve quickly. Others will struggle to find a purpose in connecting. We've seen the benefits of remote thermostats and monitoring cameras. On the other hand, remote-controlled clothes washers (which can't load the clothes from the basket or move them to the dryer) might still be searching for theirs. I would add that this dual-load washer from LG is very clever.
Many products were demonstrating "Works with Nest." This is a nice API, and it is attracting a lot of attention since, like any platform, it saves device makers from doing a lot of heavy lifting in software. While many of the demonstrations were interesting, there can still be a bit of a gimmick aspect to it (washing machines). This alarm clock was interesting to me. While many of us just use phones now (which can control a Nest), this clock uses voice recognition for alarm functions. When connected to a Nest, it can also change the temperature or alter the home/away settings of the thermostat.
Cannon Security is a relatively new safe company (most are very old), and I loved this "connected" safe. It isn't connected the way I expected (an app to open it or alert you of a break-in). Instead, it is a safe that also has a network cable and two USB ports. So one use might be to store a network-connected drive in the safe and use it for backup. You could also keep something charging via USB inside the safe. Pretty cool. The jack pack is in the lower right of the image.
My favorite product of the whole show, saving the best for last, is not yet released. But talk about a magic collection of connectivity and data…wow. The founders set out to solve the problem of getting packages delivered to your house. Most communities prevent you from putting a delivery box out front, and in many places you can't have something left on your doorstep and expect it to remain. This product, called "Track PIN," solves the problem. Here's what it does:
- Insert a small module inline in the three wires that control your garage door.
- Add a battery operated PIN box to the front of your garage somewhere.
- When you receive a package tracking number email just forward it to trackpin.com (sort of like the way TripIt works).
- THEN, when the delivery person shows up (UPS, FedEx, USPS, and more), they automatically see in their handheld what code to punch. Punching the code opens your garage door a short amount so the package can slide in. No signature required. The PIN is invalidated. The driver is happy. You are happy. Amazon is happy. And the cloud did all the work.
I know it sounds somewhat mundane, but these folks really seem to have developed a cool solution. It beats bothering the neighbors.
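The flow amounts to a one-time PIN service keyed by tracking number. Here is a minimal sketch under that assumption; the class and method names are hypothetical, and the real carrier integration is of course far more involved.

```python
import secrets

class DeliveryPinService:
    """Issue one-time delivery PINs keyed by tracking number."""

    def __init__(self):
        self._active = {}  # tracking number -> PIN

    def register_shipment(self, tracking_number):
        """Called when a tracking email is forwarded in; issues a one-time PIN."""
        pin = "".join(secrets.choice("0123456789") for _ in range(4))
        self._active[tracking_number] = pin
        return pin

    def redeem(self, tracking_number, pin):
        """Called from the keypad; opens the door once, then invalidates the PIN."""
        if self._active.get(tracking_number) == pin:
            del self._active[tracking_number]  # one-time use
            return True
        return False
```

Invalidating on first use is the key property: a driver (or anyone watching) cannot reuse the code to open the door again.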
Every CES has a few fun products that you just want to call attention to without snark or anything, just because we all know product development is not a science and one has to try a lot of things to get to the right product.
Power Pole. This is my contribution to selfies. This one even has its own power source.
Emergency jump starter/laptop charger/power source. This was a perfectly fine product. The fun part was seeing the exact same product with different logos in five different booths. Amazing placement by the ODM.
USB charger. This is the best non-commercial USB charger I've seen. It even includes a way-out-of-spec "high voltage" port.
Fake TV. This is a home security system that flashes multi-colored LED lights that trick a burglar into thinking you are home watching TV. Best part about it was that when I took the picture the person staffing the booth said “Don’t worry the Wi-Fi Drone version is coming in late 2015”. Gotta love that!!
Surface influence. And finally, I've been known to be a fan of Microsoft Surface, but I guess I'm not alone. The Typo keyboard attempts to bring a Microsoft Type Cover to the iPad, and the Remix Ultra-Tablet bears a rather uncanny resemblance to a Surface 2 running an Android skin (developed by several former Google employees).
Phew. That’s CES 2015 in a nutshell.