I am a gadget freak, often buying new technologies in their first release. My closet is full of such gadgets, from early pen-based computers to brick-sized cell phones to an electronic handwriting-recognition pad received as a gift to test. These early dives into new technologies serve a purpose for me. They keep me at the leading edge of new development as it is productized, even before mass production. They let me preview new devices and technologies before release so that I can write and speak about them in my “Tech Trends” keynotes. And they are always the center of attention with my tech-savvy friends, some of whom are in the habit of asking, “So, what’s new today?” each time we meet.
Cost to the user and to the producer
The cost of such attention-seeking and research is relatively low for me, and the reward of being sought out as a speaker on technology trends is enough in itself.
But I have been on the other side of the early-adopter development process several times, and I can attest that it is great fun, but rarely profitable, to be first with any new product that requires simultaneously evangelizing to the masses within the industry and marketing the new offering. The cost of playing such a dual role is many times that of positioning a new product in a niche already opened by another. And statistically, the first product into a niche is very rarely the one to succeed.
A great example of second-to-market
Apple, for example, was nowhere near the first to introduce a handheld MP3 music player; I had owned several before the first iPod was released. Apple learned that its leverage was in the simultaneous creation of an easy-to-use retail music store for seamlessly downloading songs, podcasts and, later, applications to the device. It did not hurt that Apple always seemed to trump the competition in both product design and user interface.
My “way too early” story
First Hyatt Hotels, then Marriott Corporation called me to their respective headquarters, each under a non-disclosure agreement, to consult with their executives on computer-assisted room pricing, the hotel equivalent of what the airlines called yield management. I became even more excited about the concept as applied to the hotel industry. Both chains had very primitive modules in their respective reservation systems. Marriott called theirs “tier pricing.” If a future date was already booked to 80% of capacity, the system would eliminate any discounted rate more than 10% below the “rack,” or standard, room rate. At 90%, it would stop discounting entirely. These were steps in the right direction, but very primitive ones.
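Restated as code, that tier-pricing rule is nothing more than a pair of occupancy thresholds. Here is a minimal sketch in modern Python; the 80% and 90% thresholds and the 10% figure come from the story above, while the function name and the deep-discount floor for slow nights are invented for illustration:

```python
def minimum_quotable_rate(occupancy_pct: float, rack_rate: float) -> float:
    """Lowest rate the chain would quote for a future date under the
    simple tier-pricing rule described above. The 80/90 thresholds and
    the 10% cap follow the text; the 40%-off floor for slow nights is
    an invented placeholder."""
    if occupancy_pct >= 90:
        return rack_rate            # nearly sold out: stop discounting entirely
    if occupancy_pct >= 80:
        return rack_rate * 0.90     # eliminate any discount deeper than 10% off rack
    return rack_rate * 0.60         # slow night: deep discounts remain available

print(minimum_quotable_rate(85, 200.0))   # 180.0 -- only shallow discounts remain
```

Notice how little judgment is encoded: the rule never looks at events, booking pace, or history, which is exactly the gap the system described below set out to close.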
Seizing a too-early opportunity
So I set about partnering with a small group of MIT graduates to produce specialized decision software for the hotel industry using the LISP programming language, well suited to decision-making because inductive and deductive logic could be coded directly into the software. I partnered with Texas Instruments, maker of a LISP computer, the TI Explorer. We designed and produced special cards for the TI that allowed Apple Macintosh workstations, with their handsome graphical user interface, to serve as the front end.
Then I designed what was at that time a ground-breaking software system. It could analyze masses of data from past guest stays on the same date a year earlier and on other dates sharing the same day of the week; factor in city-wide events on any future date, such as pro football games and conventions; measure the speed at which any night’s future reservations were building; and much more.
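To make those ingredients concrete, here is a hedged sketch of how such signals might be blended into a demand estimate for one future night. Everything here is illustrative: the field names, the weights, and the data class are mine, not the original system’s, and the real system expressed such judgments as rules that managers could tune (I use modern Python rather than the original LISP):

```python
from dataclasses import dataclass

@dataclass
class NightHistory:
    """Signals the system analyzed for one future night (field names invented)."""
    same_date_last_year: int     # rooms sold on this calendar date a year earlier
    same_weekday_average: float  # average rooms sold on this day of the week
    citywide_event: bool         # pro football game, convention, etc.
    pickup_last_week: int        # reservations added in the past seven days ("pace")

def forecast_demand(h: NightHistory) -> float:
    """Blend the historical signals into one rough demand estimate.
    The weights are invented for illustration only."""
    base = 0.5 * h.same_date_last_year + 0.5 * h.same_weekday_average
    if h.citywide_event:
        base *= 1.25                         # events compress supply city-wide
    return base + 2.0 * h.pickup_last_week   # fast pickup signals a strong night

print(forecast_demand(NightHistory(240, 210.0, True, 15)))  # 311.25
```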
How it worked
Each night, the system performed its analysis on advance-reservations data current to the moment and ran that data through a series of rules I wrote, which local hotel management could modify or extend, to make pricing decisions and implement them automatically. The system also coordinated decisions between the reservations department, which accepted individual or “transient” reservations, and the group sales department, which booked groups at discounted rates, allocating available rooms between the two to achieve maximum revenue for any future date.
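A minimal sketch of that nightly cycle follows, again in modern Python with invented names and shapes: a snapshot of advance bookings for each future night, a list of management-editable rules, and one pass that collects the decisions the system would implement:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Snapshot:
    """Advance-booking picture for one future night (fields invented)."""
    date: str
    transient_sold: int
    group_sold: int
    capacity: int

@dataclass
class Rule:
    """One management rule: a test plus the action to take if it fires."""
    name: str
    condition: Callable[[Snapshot], bool]
    action: Callable[[Snapshot], str]

def nightly_run(snapshots: list[Snapshot], rules: list[Rule]) -> list[str]:
    """One pass of the nightly cycle: evaluate every rule against each
    future night and collect the decisions to implement automatically."""
    decisions = []
    for s in snapshots:
        for r in rules:
            if r.condition(s):
                decisions.append(f"{s.date}: {r.action(s)}")
    return decisions

# Example rule in the spirit of the text: when a night is filling fast,
# claw back rooms still held for discounted group blocks.
strong_transient = Rule(
    name="shift group allotment",
    condition=lambda s: (s.transient_sold + s.group_sold) / s.capacity > 0.85,
    action=lambda s: "release 10 rooms from the group block to transient sale",
)

print(nightly_run([Snapshot("2024-10-12", 200, 60, 300)], [strong_transient]))
```

The design choice the story turns on is the last step: the system did not merely recommend these decisions, it implemented them, which is precisely the delegation of authority the industry turned out not to be ready for.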
Our test sites
I found a willing test site in the Royal Sonesta Hotel in Cambridge, Massachusetts, whose management was thrilled to participate in an industry-changing experiment with new technology. The system was priced at $150,000, but we calculated that the average implemented decision for the 300-room property should be worth $5,000, promising payback within an amazing 60 days if all worked as planned. Our company installed the system, integrated it with the Sonesta computer system we had previously provided, trained the staff, and began to measure the results after turning the system on in a live environment. In the meantime, we sold a second system to a large timeshare resort in Orlando at the same price. By agreement, Sonesta withheld payment until completion of the beta-test period.
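The payback arithmetic implied there is worth making explicit. The prices come from the story; the cadence of roughly one paying decision every other night is my inference from the quoted 60-day figure, not a number from the original analysis:

```python
system_price = 150_000       # quoted price of the system
value_per_decision = 5_000   # calculated worth of one average implemented decision

decisions_to_payback = system_price // value_per_decision
print(decisions_to_payback)  # 30 -- thirty paying decisions cover the price, so a
                             # 60-day payback implies about one every other night
```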
The real test: The industry trade show
The industry’s annual technology trade show came up during these tests. The industry press showered the system with attention at the show, generating a wave of publicity for the concept and for our company.
Reality trounces early vision
And the next week we asked for payment from the test property, after sign-offs from all of its managers but the top one. The general manager, who was also an executive of the family chain of hotels, called me in along with several of the second-level managers so enthused about the system and stated, “This is nice, but I could do this work on the back of a napkin.” He shocked us all by refusing to pay for the machine. I challenged him immediately.
“Let’s disconnect the system from influencing the reservation system for one week,” I offered, “and let the machine calculate but not implement its decisions. During that week, you make your decisions each night on the back of your napkin. At the end of the week, we’ll compare the effectiveness of each. If you agree that our system was more likely to positively affect revenues on those dates targeted, you pay for the system. If not, we’ll remove the test system and find another home for it.”
He agreed. The managers added a few new rules to the rule base and watched over the system each evening, hoping the match between this number-crunching wonder and a single manager’s intuitive guesses would be no contest at all.
The moment of truth: Facing the “no decision” competition
A week later, I again traveled across the country to the property to meet with the same team and the general manager. We printed out the week’s decisions, none of which had been implemented, presented our findings, and waited for the GM’s response. “I did not bother,” he stated. “I have no doubt that I could have done this better if I’d taken the time, but I was busy this week.”
The final lesson: Too early to market is a fatal mistake
We could not argue with the man who controlled the money, even if we were right and all the other managers desperately wanted to keep the wonder machine. So we removed the system. After all that great press, I realized that the industry just was not ready for such a leap, giving up pricing authority to a computer, even though at least one major airline had successfully done so. I also offered to repurchase the second machine from the resort in Florida. After all, what company can afford to maintain such a small number of unique systems in the field?
Our mundane, but profitable solution
I turned to Tom, our chief programmer, and directed him to salvage as many features of the knowledge-based (artificial intelligence) system as possible, reprogramming them into our standard reservations module in our BASIC programming language. Tom’s team did just that, preserving perhaps 70 to 80 percent of the functionality, if none of the leading-edge glitz. We priced the reduced feature set at $8,000 and sold many over the years as simple add-ons to the reservation system.
After spending over half a million dollars on the project (and receiving at least that much value in publicity), I learned a lesson: it is satisfying, but rarely profitable, to cater to early adopters.
It is a common occurrence that when the technical developers of a new product think it is just great, they assume the intended customers will, too, and that becomes the main (or only) criterion used in deciding to fully develop and market the product. Another approach is to do market research first. Ask potential customers, “If you had a magic wand, what product would you conjure up to solve some of your business problems?” Or, if a product concept has already been fleshed out, ask potential customers whether they would buy it if it existed.
A friend of mine, Joel, had a similar experience at about the same time. He was selling an automated visual-inspection system. He was able to make a number of sales into factories, but only one unit each. He would get the most positive feedback he could ask for and be told about the other opportunities they had, including parallel production lines. The problem was that his sales model required multiple sales to each customer, and in spite of the rave reviews, he never got a second order. One of the manufacturing engineers Joel worked with won an award from his company for his efforts to bring the vision system into the company, and was flown to a meeting where the chairman of the board gave him the award, but the company never bought one.
Trying to understand why, we dreamed up a bunch of explanations:
In court, it sounds better to say, “Our product is 100% visually inspected,” even though we all knew the human inspectors fatigued a few minutes into their shift and couldn’t see a defect on a bet.
With the machine, the rejection rate increased, decreasing yield and increasing waste. The penalty for shipping defective parts was small, while depressing yield was expensive.
And then there is simple human resistance to change.
Kent
Hi Dave,
In 1975, Intel hired seven of us to be the evangelists who would get “early adopters” to design Intel microprocessors into their products. That program was highly successful. We did a mix of training, helping, selling, supporting, and whatever else it took to get the design wins.
Warmly,
Barry
Particularly if they make an agreement with you regarding the testing and then don’t bother to live up to it.