Thoughts on reviewing tech products
I’ve been surprised at the “feedback” I receive when I talk about products that compete with those made by Microsoft. While I spent a lot of time there, one thing I learned was just how important it is to immerse yourself in competitive products to gain their perspective. It helps in so many ways (see https://blog.learningbyshipping.com/2013/01/14/learning-from-competition/).
Dave Winer (@davewiner) wrote a thoughtful post on How the Times reviews tech today. As I reflected on the post, it seemed worth considering why this challenge might be unique to tech and how it relates to the use of competitive products.
When considering creative works, it takes about two hours to see a film, slightly more for other productions, and even a day or two for a book. After that you can collect your thoughts and analysis and offer a review. Your accumulated experience in the art form is relatively easily recalled and put to good use in a thoughtful review.
When talking about technology products, the same approach might hold for casually used services or content consumption services. In considering tools for “intellectual work” as Winer described (loved that phrase), things start to look significantly different.

Software tools (for “intellectual work”) are complex because they do complex things. In order to accomplish something you need to first have something to accomplish, and then accomplish it. It is akin to reviewing the latest cameras for making films or the latest cookware for making food. While you can shoot a few frames or make a single meal, tools like these require many hours and different tasks. You shouldn’t “try” them so much as “use” them for something that really matters. Only then can you collect your thoughts and analysis.

Because tools of depth offer many paths and ways to use them, there is an implicit “model” to how they are used. Models take time to adapt to. A cinematographer who uses film shouldn’t judge a digital camera after a few test frames, and maybe not even after the first completed work.
The tools for writing, thinking, and creating that exist today present models for usage. Whether it is a smartphone, a tablet, a “word processor”, or a photo editor, these devices and their accompanying software define models for usage that are sophisticated in how they are approached, the flow of control, and the points of entry. They are hard to use because they do hard things.
The fact that many of those who write reviews rely on an existing set of tools, software, and devices for their intellectual pursuits means the conceptual models they know and love are baked into their perspective. Tools that come along and present a new way of working, or a new way of seeing the technology space, must first find a way to get a clean perspective from the reviewer.
This of course is not possible. One can’t unlearn something. We all know that reviewers are professionals, and just as we expect a journalist covering national policy debates not to let their bias show, tech reviewers must do the same. This implicit “model bias” is much more difficult to overcome, though, because it simply takes longer to see and use a product than it does to learn about and understand (but not necessarily practice) a point of view in a policy debate. The tell-tale sign of “this review composed on the new…” is great, but we also know that right after the review the writer has the option of returning to their favorite way of working.
As an example, I recall the tremendous difficulty in the early days of graphical user interface word processors. The incumbent, WordPerfect, was a character-based word processor that was the very definition of a word processor. The one feature we heard about relentlessly was called reveal codes, a way of essentially seeing the formatting of the document as codes surrounding text (today we would think of that as HTML). Word for Windows was a WYSIWYG word processor in Windows, so you just formatted things directly. If text was bold on screen then it was implicitly surrounded by <B> and </B> (not literally, but conceptually those codes).
Reviewers (and customers) time and time again felt Word needed reveal codes. That was the model for usage of a “word processor”. It was an uphill battle to move the overall usage of the product to a new level of abstraction. There were things that were more difficult in Word and many things much easier, but reveal codes was simply a model and not the answer to the challenges. The tech world is seeing this again with the rise of new productivity tools such as Quip, Box Notes, Evernote, and more. They don’t do the same things and they do many things differently. They have different models for usage.
At the business level this is the chasm challenge for new products. But at the reviewer level this is a challenge because it simply takes time to either understand or appreciate a new product. Not every new product, or even most, changes the rules of the predecessor successfully. But some do. The initial reaction to the iPhone’s lack of a keyboard, or even its de-emphasis of voice calls, shows how quickly everyone jumped to the then-current definition of a smartphone as the evaluation criteria.

Unfortunately, all of this is incompatible with the news cycle for the onslaught of new products, or with the desire to have a collective judgment by the time the launch event is over (or even before it starts).

This is a difficult proposition. It starts to sound like blaming politicians for not discussing the issues, or blaming the networks for airing too much reality TV. Isn’t it just as much what people will click through as what reviewers would write about? Would anyone be interested in reading a Samsung review, or pulling up another iOS 7 review, after the 8 weeks of usage that the product deserves?
The focus on youth and new users as the baseline for reviews is simply because they do not have the “baggage” or “legacy” when it comes to appreciating a new product. The disconnect we see between excitement and usage is because new-to-the-category users do not need to spend time mapping an old model onto the new one; they just dive in and start to use something for what it was supposed to do. Youth also represents a target audience for early adopters and the fastest path to crossing the chasm.
Here are a few things on my to-do list for how to evaluate a new product. The reason I use things for a long time before judging them is that, in a world with so many different models for usage, it simply takes that long for a new model to sink in.
- Use defaults. Quite often when you first approach a product you want to immediately customize it to make it seem like what you’re familiar with. While many products have customization, stick with the defaults as long as possible. Don’t like where the browser launch button is? Leave it there anyway. There’s almost always a reason. I find the changes in the default layout between iOS 6 and 7 interesting for what the shift in priorities means for how you use the product.
- Don’t fight the system. When using a new product, if something seems hard that used to seem easy then take a deep breath and decide it probably isn’t the way the product was meant to do that thing. It might even mean that the thing you’re trying to do isn’t necessarily something you need to do with the new product. In DOS WordPerfect people would use tables to create columns of text. But in Word there was a columns feature and using a table for a newsletter layout was not the best way to do that. Sure there needed to be “Help” to do this, but then again someone had to figure that out in WordPerfect too.
- Don’t jump to doing the complex task you already figured out in the old tool. Often as a torture test, upon first look at a product you might try to do the thing you know is very difficult–that side-by-side chart, reducing overexposed highlights, or some complex formatting. Your natural tendency will be to use the same model and steps to figure this out. I got used to one complicated way of using levels to fix underexposed faces in photos and completely missed out on the “fill flash” command in a photo editor.
- Don’t do things the way you are used to. Related to this is the tendency to use a new device the way you used the old one. For example, you might be used to going to the camera app, taking a picture, and then choosing email. But the new phone “prefers” that you be in email and insert an image (new or just taken) into a message. It might seem inconvenient (or even wrong) at first, but over time this difference will go away. This is just like learning the gear shift pattern in a new car, or perhaps the layout of a new grocery store.
- Don’t assume the designers were dumb and missed the obvious. Often connected to trying to do something the way you are used to is the conclusion that something must be impossible, and thus that the designers obviously missed it, or worse. There is always a (good) chance something is poorly done or missing, but that shouldn’t be the first conclusion.
But most of all, give it time. It often takes 4-8 weeks to really adjust to a new system, and the more expert you are the more time it takes. I’ve been using Macs on and off since before the product was released to the public, but even today it has taken me the better part of six months to feel “native”. It took me about 3 months of Android usage before I stopped thinking like an iPhone user. You might say I am too hard-wired, or you might conclude it really does take a long time to appreciate a design for what it is supposed to do. I chuckle at the things that used to frustrate me and think about how silly my concerns were at day 0, day 7, and even day 30–where the volume button was, the charger orientation, the way the PIN worked, going backwards, and more.
–Steven Sinofsky
Learning by slipping
“Slipping”, or missing the intended completion or milestone date of a software project, is as old as software itself. There’s a rich history of our industry tracking intended v. actual ship dates and speculating as to the length of a slip and its cause. Even with all this history, slipping remains a complex and nuanced topic worth a bit of discussion as an engineering concept.
Slipping
I’ve certainly had my fair share of experience slipping. Projects I’ve worked on have run the full spectrum from landing exactly on time to slipping 20-30% from the original date. There’s never a nice or positive way to look at slipping since as an engineer you’re only as good as your word. So you can bet the end of every project includes a healthy amount of introspection about the slip.
Big software projects are pretty unique. The biggest challenge is that large-scale projects are rarely “repeated”, so the ability to get better through iteration while keeping some things constant is limited. This is different from building a bridge or a road, where many of the steps and processes can be improvements on previous projects. In large-scale software you rarely do the same thing with the same approach a second or third time.
While software is everywhere, software engineering is still a very young discipline that changes rapidly. The tools and techniques are wildly different today than they were just a few years ago. Whether you think about the languages, the operating systems, or the user experience, so much of today’s new software is architected and implemented in totally new ways.
Whenever one talks about slipping, at some basic level there is a target date and a reality and slipping just means that the two are not the same (Note: I’ve yet to see a software project truly finish early). There’s so much more to slipping than that.
What’s a ship date
In order to slip you need to know the ship date. For many large scale projects the actual date is speculation and of course there are complexities such as the release date and the availability date to “confuse” people. This means that discussions about slipping might themselves be built on a foundation of speculation.
The first order of business is that a ship date is in fact a single date. When people talk about projects shipping “first quarter” that is about 90 different dates and so that leaves everyone (on the team and elsewhere) guessing what the ship date might be. A date is a date. All projects should have a date. While software itself is not launching to hit a Mars orbit, it is important that everyone agree on a single date. Whether that date is public or not is a different question.
In the world of continuously shipping, there’s even more of a challenge in understanding a slip. Some argue that “shipping” itself is not really a concept as code flows to servers all the time. In reality, the developers on the team are working to a date—they know that one day they come to work and their code is live which is a decidedly different state than the day before. That is shipping.
Interestingly, the error rate in short-term, continuous projects can often (in my experience) be much higher. The view of continuously shipping can lead to a “project” lasting only a month or two. The brain doesn’t think much of missing by a week or two, but that can be a 25-50% error rate. At the same rate, a 12-month project would stretch to 15-18 months, which does sound like a disaster.
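To make the arithmetic concrete, here is a quick back-of-envelope sketch using the numbers above:

```python
def slip_rate(planned_weeks, actual_weeks):
    """Relative schedule error versus the plan."""
    return (actual_weeks - planned_weeks) / planned_weeks

# A two-week miss on an eight-week project barely registers day to day...
rate = slip_rate(8, 10)
print(f"error rate: {rate:.0%}")                               # 25%

# ...but the same error rate on a 12-month project is hard to miss.
print(f"12 months stretches to {12 * (1 + rate):.0f} months")  # 15
```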
There’s nothing about having a ship date that says it needs to be far off. Everything about having a date and hitting it or slipping can apply to an 8 week sprint or a 3 year trek. Small errors are a bigger part of a short project but small errors can be amplified over a long schedule. Slipping is a potential reality regardless of the length of the schedule.
The key thing from the team’s perspective about a ship date is that there is one and everyone agrees. The date is supported by the evidence of a plan, specifications, and the tools and resources to support the plan. As with almost all of engineering, errors early in the process get magnified as time goes by. So if the schedule is not believable or credible up front, things will only get worse.
On the other hand, a powerful tool for the team is everyone working towards this date. This is especially true for collaboration across multiple parts of the team or across different teams in a very large organization. When everyone has the same date in mind then everyone is doing the same sorts of work at the same time, making the same sorts of choices, using the same sorts of criteria. Agreeing on a ship date is one of the most potent cross-group collaboration tools I know.
Reasons to slip
Even with a great plan, a team on the same page, and a well-known date, stuff can happen. When stuff happens, the schedule pressure grows. What are some of the reasons for slipping?
- Too much work, aka “we picked too much stuff”. The most common reason for slipping is that the team signed up to do more work than could be done. The most obvious solution is to do less stuff. In practice it is almost impossible to do less once you start (have you ever tried to cut the budget on a kitchen remodel once it starts? You cut and cut and end up saving no money while costing a lot of time). The challenge is the inter-connected nature of the work. You might want to cut a feature, but more often than not it is connected to another feature either upstream or downstream.
- Some stuff isn’t working, aka “we picked the wrong architecture”. This causal factor comes from realizing that the approach that is halfway done just won’t work, but to redo things will take more time than is available. Most architecturally oriented developers in this position point to a lack of time up front thinking about the best approach. More agile minded developers assume this is a normal part of “throw away the first version” for implementing new areas. In all cases, there’s not much you can do but stick with what you have or spend the time you don’t have (i.e. slipping).
- Didn’t know what you know now, aka “we picked the wrong stuff”. No matter how long or short a project, you’re learning along the way. You’re learning about how good your ideas were or what your competitors are doing, for example. Sometimes that learning tells you that what you’re doing just won’t fly. The implications for this can run from minimal (if the area is not key) to fairly significant (if the area is a core part of the value). The result in the latter case can be a big impact on the date.
- Change management, aka “we changed too much stuff”. As the project moves forward, things change from the initial plans. Features are being added, removed, or reworked, for example. This is all normal and expected. But at some point you can get into a position where there’s simply been too much change, and the time to get back to a known or pre-determined state is more than the available time.
The specifics of any slip can also be a combination of these, and it should be clear how they are all interrelated. In practice, once a project is off schedule, all of these reasons for slipping begin to surface. Pretty soon it just looks like there’s too much stuff, too much is changing, and too many things aren’t “right”.
That is the nature of slipping. It is no one single thing or one part of a project. The interrelationships across people, code, and goals mean that a slip is almost always a systemic problem. Recognizing the nature of slipping leads to a better understanding of project realities.
Reality
In reality, slips are what they are and you just have to deal with them. In software, as in most other forms of engineering, once you get in the position of missing your date things get pretty deterministic pretty quickly.
The collective memory of most large projects that slipped includes the heroes or the heroic work that saved the project. That can and does happen, but from a reliable or repeatable engineering perspective those events are circumstantial and hard to reproduce project over project. Thus the reality of slipping is that you just have to deal with it.
The most famous description of project scheduling comes from Frederick P. Brooks, who authored “The Mythical Man-Month” in 1975. While his domain was the mainframe, the ideas and even the metrics are just as relevant almost 40 years later. His most famous aphorism, about trying to rescue a late project by adding resources, is:
When a task cannot be partitioned because of sequential constraints, the application of more effort has no effect on schedule. The bearing of a child takes nine months, no matter how many women are assigned.
Software projects are generally poorly partitioned engineering. Much like a remodel of a tiny apartment, you simply can’t have all the different contractors working in the space at once.
There are phases and parts of a large-scale software project that are very amenable to scaling with more resources, particularly testing and code-coverage work. Adding resources to make code changes, however, runs right up against the classic man-month reality. Most experienced folks refer to this as “physics”, implying that these are relatively immutable laws. Of course, as with everything we do, context matters (unlike physics), so there are ways to make things work, and that’s where experience in management, and most importantly experience as a team working together on the code, matters.
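A toy model makes this “physics” concrete. The sketch below is not from Brooks’s book; it simply pairs perfectly divisible work with his observation that communication paths among n people grow as n(n-1)/2, with the constants invented purely for illustration:

```python
def months_to_ship(n_people, work_months=24.0, comm_cost=0.06):
    """Toy schedule: perfectly divisible work plus pairwise
    communication overhead that grows as n*(n-1)/2."""
    partitioned = work_months / n_people
    communication = comm_cost * n_people * (n_people - 1) / 2
    return partitioned + communication

for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} people -> {months_to_ship(n):5.2f} months")
```

Past a certain team size the communication term dominates and the date moves out, not in: the “adding people to a late project makes it later” effect in miniature.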
The triad of software projects can be thought of as features, quality, and schedule. At any given point you’re just trading off among those three. If only it were that easy.
Usually it is easy to add features at the start, unaware of precisely how much the schedule or quality will be impacted. Conversely, changing features at later stages becomes increasingly, and obviously, difficult. From a product management/program management perspective, this is why feature selection and understanding the feature set are so critical, and why this part of the team must be so crisp at the start of a project. In reality, the features of a product are far less adaptable than one might suspect. Products where planned features are not delivered can feel incomplete or somehow less coherent.
It is almost impossible to ever shorten a schedule. And once you start missing dates there is almost no way to “make up for time”. If you miss an intermediate step by two weeks, there’s a good chance the impact will be more than two weeks by the end of the project. The developers/software engineers of a project are the ones for whom managing this work is so critical. Their estimates of how long things will take, and of dependencies across the system, can make or break the understanding of reality.
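Dependencies are also why a two-week miss in the middle rarely stays a two-week miss at the end. Here is a minimal sketch of that effect, using a made-up dependency graph with hypothetical task names and estimates:

```python
from functools import lru_cache

# Hypothetical plan: task -> (estimate in weeks, prerequisite tasks).
tasks = {
    "parser": (3, []),
    "engine": (5, ["parser"]),
    "ui":     (4, ["parser"]),
    "sync":   (2, ["engine"]),
    "polish": (2, ["ui", "sync"]),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest week a task can finish, given its prerequisites."""
    estimate, prereqs = tasks[task]
    return estimate + max((earliest_finish(p) for p in prereqs), default=0)

print("planned:", max(earliest_finish(t) for t in tasks), "weeks")     # 12

# One mid-project task slips by two weeks; the end date moves with it.
tasks["engine"] = (7, ["parser"])
earliest_finish.cache_clear()
print("after slip:", max(earliest_finish(t) for t in tasks), "weeks")  # 14
```

And that is the optimistic case: in practice downstream teams re-plan around the slip, so the end often moves by more than the original miss.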
Quality is the most difficult to manage, which is why test leadership is such a critical part of the management structure of any project. Quality is not something you think about at the end of the project, nor is it particularly malleable. While a great test manager knows quality is not binary at a global level, he/she also knows that, much like error bars in physics, a little bit of sub-par quality across many parts of the project compounds and leads to a highly problematic, or buggy, product. Quality is not just bugs; it also includes scale, performance, reliability, security, and more.
Quality is difficult to manage because it is often where people want to cut corners. A product might work for most cases but the boundary conditions or edge cases show much different results. As we all know, you only get one chance to make a first impression.
On a project of any size there are many moving parts. This leads to the reality that when a project is slipping, it is never one thing—one team, one feature, one discipline. A project that is slipping is a product of all aspects of a project. Views of what is “critical path” will need to be reconciled with reality across the whole project, taking into account many factors. Views from other parts of the organization, the rumor mill, or just opinions of what is holding up the project are highly suspect and often as disruptive to the project as the slip itself. That’s why when faced with a slipping project, the role of management and managing through the slip is so critical.
What to do
When faced with a slip, assuming you don’t try to toss some features off the side, throw more resources at the code, or just settle for lower quality, there are a few things to work on.
First and foremost, it is important to make sure the team is not spending energy finger pointing. As obvious as that sounds, there’s a natural human tendency to avoid having the spotlight at moments like this. One way to accomplish that, improperly, is to shine the light on another part of the project. So the first rule of slipping is “we’re all slipping”. What to do about that might be localized, but it is a team effort.
What else can be done?
- Don’t move the goalposts (quality, features, architecture). The first thing to do is avoid taking drastic actions with hard-to-measure consequences. Saying you’re going to settle for “lower quality” is impossible to measure. Ripping out code that might not work but that you understand has a very different risk profile than the “rewrite”. For the most part, in the face of slipping the best thing to do is keep the goals the same and move the date to accomplish what you set out to do.
- Think globally, act locally. Teams will often take actions that are very local at times of slipping. They look to cut or modify features that don’t seem critical to them but have important upstream or downstream impact, sometimes not well understood on a large project. Or feature changes that might seem small can have a big impact on planned positioning, pricing, partnerships, etc. The approach of making sure everyone is checking/double checking on changes is a way to avoid these “surprises”.
- Everyone focuses on being first, not avoiding being last. When a project has more than a couple of teams contributing and is faced with a tight schedule, there’s a tendency for a team to look around just to make sure they are not the team that is worst off. A great leader I once worked with used to take these moments to remind every part of the project to focus on being first rather than on being “not last”. That’s always good advice, especially when followed in a constructive manner.
- Be calm, carry on. Most of all, slipping is painful, and even though it is all too common in software, the most important thing to do during crunch time is to remain calm and carry on. No one does good work in a panic, and for the most part the quality of decisions and choices degrades when folks are operating under constraints that can’t be met. It is always bad for business, customers, and the team to slip. But if you are slipping you have to work with what you’ve got, since most of the alternatives are usually even less desirable.
Managing a software project is one of the more complex engineering endeavors because of the infinite nature of change, complexity of interactions, and even the “art” that still permeates this relatively new form of engineering. Scheduling is not yet something we have all mastered and slipping is still a part of most large projects. The more that Software Eats the World ($), the more the challenges of software project management will be part of all product and service development.
Given that, this post tried to outline some of the causes, realities, and actions one can take in the face of a slip: learning by slipping.
–Steven Sinofsky
Learning from Competition
Gotcha. Traitor. Snicker. Those were some of the reactions when people discovered that I was using an iPhone. I stand before you accused of using a competitor’s [sic] product and I plead guilty.
Moving beyond the gotcha blogs, there’s an actual reason for using technology products and services other than the ones you make (or that happen to be made by the company where you work/ed). I think everyone knows that, even a thousand tweets later. The approaches many industries take of downplaying or even becoming hostile to the competition are well-documented and studied, and the studies generally conclude that experiencing the competition is a good thing.
Learning from the competition is not just required of all product development folks; it is also a skill worth honing. Let’s look at the ins and outs of using a competitive product.
Why?
Obviously you should use a competitive product. You should know what you’re up against when a consumer (or business) ultimately faces a buying decision. They will weigh a wide array of factors, and you should be aware of those not only for the purposes of sales and marketing but also when you are designing your products.
It is easy to fall into the trap of checklists or cloning competitive products and so that might be why you follow the competition. That’s a weak way to compete. Part of planning your product/service is establishing your unique value proposition—is it features, pricing, implementation or execution for example?
Simply matching a competitor that might be more established is not good enough, so knowing the competition just for the sake of skimming their value proposition won’t work. Generally speaking, adding the cool thing from a competitor to your product’s plan at the last minute is not going to work well for customers and will be readily transparent. That’s checklist product development.
Product development is a complex endeavor. There is no magic. Worse than that, most people making products in a “category” are pulling insights, ideas, and technologies from the same sources. What separates wildly successful products from distant number two products can often be just a few of thousands of choices.
Studying your competitor well gives you a chance to evaluate your choices in an entirely different context. When you make a product choice you are making it in the context of your company, strategy, business model, and people/talents. What if you change some of those? That is what knowing the competition allows you to do, and basically for free (no consultants or top-secret research).
What does it mean to study the competition well? What are some common mistakes made when studying the competition?
Common Mistakes
Even when you study the competition, some commonly employed techniques, used with the best of intentions, leave information on the table.

Worse, it is possible to execute a competitive analysis from a dismissive or “get this over with” posture that throws away what is perhaps the most valuable source of information in forward-looking product design.
While there are many potential challenges, here are a couple of examples of patterns I’ve seen.
- Using the product in a lightweight manner. All too often analyzing the competition itself becomes a checklist work item. Go to the store and play with it for a few minutes. Maybe ask a friend or neighbor what they think. The usage of a competitive product needs to be in depth and over time. You need to use the product like it is your primary product and not switch or fall back to your old way of working. Often this means weeks or more of usage. Even as a reviewer this applies. Walt Mossberg famously took an iPad on a 10-day trip to France—no laptop at all. That’s how to use a product.
- Thinking like yourself, not the competition. When using a competitive product you need to use it as it was intended to be used by its designers. Don’t get the product and use the customization tools to morph it into the familiar. Even if a product has a mode to make it work like the familiar (a competitive bridge they offer), don’t use it. Use native file formats. Use defaults in the UI and functionality. Follow the designed workflow. The key is to let go of your muscle memory and develop new memory.
- Betting competitors act similarly (or even rationally). If you think like a competitor you have to make future decisions like they might. Of course you can’t really do that or really know, and this is where product intuition comes in (and also why blogs predicting product directions are often off the mark). You have to really wrap yourself around the culture, constraints, resources, and more of a competitor. The reality is that your competitor is not going to “fix” their product by turning it into your product. So the question is what a competitor would do in their context, not what you would do if you were designing the follow-on product in your context. This might actually feel irrational to you. One of the classic examples is whether or not the Mac OS should have been licensed to other PC makers. Arguments could be made either way, both then and now. But what is right or assumed in one context simply doesn’t make sense in another. That context can also include a time dimension, where the answer actually changes.
- Assuming the world is static. Even after you’ve reviewed a competitor through usage, you might feel confident because they are missing some features or might have done some things poorly. That’s a static view of the world. Keep in mind that analyzing the competition is a two-way street. If you noticed a weakness, there’s a really good chance the competitor knew about it. When everyone pointed out that a phone was missing copy/paste, it was a mistake to think that was news to the development team or that it would remain a competitive advantage.
Approaches
There’s a reason Patton often made reference to Thucydides’ treatise on the History of the Peloponnesian War. It is a thorough and thoughtful analysis that goes beyond who won which battles and gets inside the minds of the men, the culture, and the thought processes. Competing in business is not war and should not be treated as such, either literally or as a metaphor (the stakes are relatively insignificant, and business is an endless series of skirmishes and battles rather than a drive toward an ‘end’, at the very least). However, the ideas of being thoughtful and of understanding tactics, decision making, resource allocation, and more are important.
There are a few techniques that are often used in conjunction with using the product. The most important thing is of course to use the product, and to adopt it as your primary tool for all the uses it was intended for, in the manner in which it was intended. With that raw data, there are a number of potential tools for sharing the learning:
- Feature comparisons. The most obvious tool is to make a long list of features and compare products. This works well for some folks, especially when handed to customers. It is also the least useful in product design. A feature checklist is only as good as the features you put on it. We all know how easy it is to make a product look feature-rich or feature-poor simply by picking the right set of features to check off. You can even pick odd granularities for whether a feature counts—you could have a “WiFi” check item, or you could make it “WiFi a/b/g/n” and change who won or lost. Trying to score or just count features can also lead to false conclusions; it doubles the error rate of your checklist because you’re doubly assuming your own context.
- SWOT. A common “single slide” approach is to distill the competition into strengths, weaknesses, opportunities, and threats. These are very hard to do well, again because of context, but by including all four you force yourself to confront your own product’s shortcomings and missteps. Personally, I don’t like the use of “threats” because it starts to conjure up war and sports metaphors, but you can think of them as risks that customers won’t choose your product/service. SWOT is often used by the marketing team because you can intermix near-term tactics in the marketplace (opportunities).
- Scenario comparisons. A nice way to take a more end-to-end approach to competitive analysis is to consider more complete scenarios. If you’re testing battery life, don’t just play a movie; play a movie with the radios on and an active mailbox, as an example. As with everything, it is important to pick scenarios that are truly representative of what a product was designed to do and how it was designed to do it, not how your product/service does it. Measuring a scenario comparison could involve clicks/gestures, clock time, resource consumption, and more (a rough sketch of such a harness follows this list).
- Competitive review or blog. My favorite test of really getting in the mind of the competitor is to challenge yourself to review your own product as though you were the competitor. Alternatively, you could write a press release or blog for a competitive product—the product that competes with you. I remember once writing a whole “press kit” for what became Visual C++ as though I were the Borland C++ team. It was great fun. Rather than focus on Windows (3.0!) development, I focused on compiler speed, code size, array of command line options, and more. Those were the things that Borland would focus on. I then ripped into Visual C++ as a Borland person, highlighting what options were missing, the slowness of the tools, and so on. Even though VC++ 1.0 had a Windows dev environment, resource editor, class library and more—all assets relative to Borland.
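As a rough illustration of the scenario-comparison idea above, here is a minimal timing harness in Python. The scenario and step names are hypothetical; the point is that each product runs the workflow its designers intended rather than a script transplanted from the other product:

```python
import time
import tracemalloc

def run_scenario(name, steps):
    """Run an end-to-end scenario (a list of callables) and report
    elapsed clock time, peak Python memory, and step count."""
    tracemalloc.start()
    start = time.perf_counter()
    for step in steps:
        step()
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    print(f"{name}: {elapsed:.2f}s, peak {peak / 1e6:.1f} MB, {len(steps)} steps")

# Hypothetical "email a photo" scenario, scripted per product:
#   product_a = [open_camera, take_photo, share_to_mail, send]
#   product_b = [open_mail, insert_new_photo, send]
#   run_scenario("Product A: email a photo", product_a)
#   run_scenario("Product B: email a photo", product_b)
```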
Of course no matter what your approach, be sure to write down your work and analysis (writing is thinking!) and share it with the team (learning by sharing).
Be Obsessed
Those are a few common pitfalls and approaches to competitive analysis. There are many more. Feel free to share your favorite approaches in the comments.
Finally, studying the competition is the job of everyone on a team. Importantly, the people doing the work need to study the competition themselves. It is not a job for a staff function or for those outside product development. Management studies the competition, not just receives reports on it. Experts in the domains that make up a product should drill into the details of the competition (hardware, software, subsystems, peripherals, APIs, etc.).
Be obsessed with the competition. Always. This has never been truer than in today’s fast-paced and dynamic world of products, where the flow of information is instant and the scale and complexity are greater than ever before.
–Steven
PS: My plan was not to publish so frequently/rapidly, but even in my old MS blog if something came up that was so timely relative to the ongoing dialog, I’d do a quick post. I’m still going to pace myself :-)