A new technology arrives in the marketplace.
Many factors determine whether it will survive, and perhaps come to dominate and change our societal behaviors and expectations.
Witness the arrival of Facebook in 2004: at first a means for college students to connect and share, it quickly became a phenomenon, driving major societal changes that continue to reverberate.
The same can be said for Twitter, Snapchat, and a host of other social media platforms that not only provided new means to interact with one another, but which changed how we behave as we move through the world.
I remember a time before Facebook and Twitter when I was not driven to post a picture of my meal before I ate it, or to share the latest meme with family and friends.
In retrospect, as we remember the technology successes and forget the failures (MySpace, anyone?), we tend to think that the success of a world-changing technology rests on the idea, the people, and the nascent need for what it offers.
And a bit of luck.
But I’m going to suggest there’s another driver for technology success that is often overlooked and unappreciated, but which in the history of technology may be the most important factor of all.
To explain why, let’s go back to the mid-1970s, a time when television was one of the two systems of mass communication (radio being the other). Of course, this was a one-to-many, broadcast system of communication, owned by those who could afford the not inconsiderable costs of providing material for the medium.
At that time, as had been the case since the invention of television and its widespread adoption, the viewer was forced either to view the program of interest at the time it was broadcast or to miss it entirely.
No VCRs, no DVRs, no way to save the material for later viewing. And certainly no streaming.
This was the operational model for many years, as affordable technology for recording over-the-air broadcasts in the home did not yet exist.
That began to change in the mid-1970s, when a few companies attempted to apply the same magnetic recording technology that permitted the storage of audio to the storage and replay of video.
The technological hurdles were immense–the data and bandwidth requirements for video storage were orders of magnitude greater than for audio.
But, as we have come to accept as gospel, those problems would fall to continued technology innovation. And they did.
In 1975 Sony introduced a cassette-based video recording system, the Betamax, which, while expensive, was finally within the reach of the non-professional user. JVC released its competing VHS format soon after, and the format war began.
But a major non-technological problem appeared: the studios that produced the broadcast content were not happy with the idea that they might lose some measure of control over when their material was shown, and by whom.
Further, both the Betamax and VHS systems permitted the user to fast-forward past commercials (though it was a manual process). The studios viewed this as an infringement on their right to reap the revenue the advertisements provided, and in 1976 MCA/Universal and Walt Disney filed a copyright infringement suit against Sony.
(Interestingly, the lawsuit did not stop either Sony or JVC from selling their systems; both were undoubtedly confident that no truly transformative technology could be stopped by what they saw as a frivolous move by greedy studios.)
And this is where the story gets interesting. Sony won in the lower court, but the decision was reversed on appeal, and it appeared that home recording of broadcasts might not “take off” after all.
Imagine if the studios had prevailed, either preventing the sale of home-recording systems or controlling it. The concept of time-shifting would likely never have taken hold, or the cost of doing so would have been prohibitive.
Sony then appealed to the US Supreme Court, which agreed to hear the case and held oral argument in January of 1983.
We often think, based on our modern experience, that the Supreme Court would “ride to the rescue” and keep Sony, and end-users, in the clear. At least it might appear that way looking back four decades.
Initially, however, things went against Sony: after the first argument, five Justices were prepared to uphold the decision against it.
If things had ended at that point, we would likely have a much different culture today around what we, as end-users, can do with broadcast material, whether over-the-air or streaming. It’s hard to imagine.
However, through the work of one member of the Court, Justice Stevens, enough other Justices were persuaded to change their minds that the case was set for reargument in October of 1983.
After much consideration, in January 1984 the Court ruled, 5-4, that it was not copyright infringement for home users to record and play–in their own homes and for non-commercial purposes–whatever broadcast material they chose.
After that, there was no looking back. Sales of recorders exploded, and even the studios, in time, found a way to profit in this new market through prerecorded cassettes sold in stores.
The changes were profound–so much so that we do not even question today that this change was inevitable. Clearly it was not.
I share this story to make clear that sometimes it’s not just the coolness of a new technology, or its clear value to the end-user, that determines whether it succeeds or fails–and whether, in the bigger picture, it survives in a form that has enormous societal impact.
Sometimes it’s laws and regulations, and the process by which they are decided in courts, that make all the difference.
And it’s not always easy to predict what the outcome will be.
P.S. If you’re interested in another court case that had far-ranging impacts on the development of technology, check out the “Carterfone case”.