You might not think that a rusted-out '98 Chevy Prizm and Adobe Photoshop have much in common, but you may be surprised. In 1970, economist George Akerlof described the used car market in “The Market for Lemons: Quality Uncertainty and the Market Mechanism”. In that paper, he posited that the informational asymmetry between buyers and sellers of used cars results in predominantly low-quality cars being offered for sale. I propose that this is also true of modern software markets, though the mechanism is a bit different.
To summarize the key points of the paper, I turn to Wikipedia, which offers this helpful explanation that I will examine point by point.
A lemon market will be produced by the following:
1. Asymmetry of information, in which no buyers can accurately assess the value of a product through examination before sale is made and all sellers can more accurately assess the value of a product prior to sale
2. An incentive exists for the seller to pass off a low-quality product as a higher-quality one
3. Sellers have no credible disclosure technology (sellers with a great car have no way to disclose this credibly to buyers)
4. Either a continuum of seller qualities exists or the average seller type is sufficiently low (buyers are sufficiently pessimistic about the seller's quality)
5. Deficiency of effective public quality assurances (by reputation or regulation and/or of effective guarantees/warranties)
Point One: Asymmetry of Information
When consumers look for a new software product, they can compare alternatives in several ways. The most common method appears to be a time-boxed trial of the software. This method allows the consumer to determine that the application is fit for its immediate purpose, but only in a limited way. The user experiments with a few of the common use cases and forms a value judgment based on those experiences. Often the user involved in the trial will be the ultimate end user of the product, and while their satisfaction is important, they usually lack the knowledge to objectively assess the quality of the software itself.
At this point it will be useful to draw a distinction between end-user satisfaction and software quality. User satisfaction can be measured by asking the user to rate the software on a scale of 1 to 10 against alternatives, but immediate satisfaction does not prove that the software is of high quality. For example, a CRM that responds quickly and looks beautiful during the trial phase will likely receive a high rating from the testers, yet that same application might struggle to handle the workload once all the data has been migrated over. An application that saved the end user an hour a day during testing may become a net time sink because it was not engineered for scale.
In this case, it would be possible to eliminate the information asymmetry if competitors disclosed exactly what their software could handle at the outset, e.g., "our software can handle 100 million customers over this timeframe, at this average rate of speed, on the hardware specifications previously defined." But this data is rarely, if ever, available to consumers. Often the company does not even generate these metrics internally for its own review, which is itself a hallmark of poor software quality, in my view.
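To make the point concrete, here is a minimal sketch of the kind of internal benchmark a vendor could run and publish: time a workload at increasing data sizes and report throughput. Everything here is hypothetical illustration; `sort_customers` is a synthetic stand-in workload, not any real product's code.

```python
import time


def timed(operation, n):
    """Wall-clock time for one run of operation(n)."""
    start = time.perf_counter()
    operation(n)
    return time.perf_counter() - start


def benchmark(operation, sizes, repeats=3):
    """Time an operation at increasing input sizes and report
    records-per-second (best of `repeats` runs), the kind of figure
    a vendor could disclose to reduce information asymmetry."""
    results = {}
    for n in sizes:
        best = min(timed(operation, n) for _ in range(repeats))
        results[n] = n / best  # records processed per second
    return results


def sort_customers(n):
    """Hypothetical stand-in workload: sort n synthetic customer records."""
    records = [(i * 2654435761 % n, i) for i in range(n)]
    records.sort()


if __name__ == "__main__":
    for size, rps in benchmark(sort_customers, [10_000, 100_000]).items():
        print(f"{size:>8} records: {rps:,.0f} records/sec")
```

If throughput falls sharply as the input grows, the trial-sized demo is hiding a scaling problem, which is exactly the gap between trial satisfaction and real quality described above.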
If you were to ask the developers at these firms, they would probably tell you (outside of management earshot) that the product is of low quality. In addition, very few closed-source or proprietary solutions offer any form of public bug tracker. In this way, the company has access to a list of flaws in its product that the consumer, at the time of sale, does not.
Point Two: Incentives to Pass Off Low-Quality Software
At the organizational level, once a piece of software has been created and is ready for use by consumers, a great deal of money has already been spent. Often this money was invested by third parties—and those third parties expect to see results. Accordingly, there is a huge incentive to ship a low-quality product to consumers, in order to get further investment to keep the company running. The product can always be improved at a later date.
There is an additional incentive at the level of individual sales. The incentive structure for sales is often based on units sold, or a percentage of the gross revenue of the product. Furthermore, sales staff often do not have a good barometer of the product’s quality—which leads to the next point.
Point Three: Quality Disclosure
In point one, I touched on the quality disclosure mechanisms in place. Another way that quality is implied is via ratings agencies. Various private entities will reach out to software companies for a review of their product, which will later be aggregated and published. This model is typical of enterprise software. Consumer software ratings are often obtained from channels such as the app stores of various platforms. Both models have inherent conflicts of interest as well as flaws that allow them to be gamed.
For the enterprise model, the customers to be contacted are often selected from a list provided by the software company. This creates a clear selection bias where firms will pick their happiest customers—those who have use cases that are the closest to their assumptions.
For the consumer model, fraud is rampant. Firms can be hired to artificially inflate scores for pennies per rating, sometimes using armies of iPhones with automated jigs built for the purpose. Beyond outright fraud, some consumer software asks users what they think of it before asking for a rating; users who respond negatively are never shown the rating prompt, which filters negative experiences out of the public score.
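The pre-rating sentiment gate can be sketched in a few lines. The function and prompt names below are hypothetical, purely to illustrate the selection effect:

```python
def should_prompt_for_store_rating(user_sentiment: str) -> bool:
    """Hypothetical sentiment gate: only users who report being happy
    are ever shown the public app-store rating prompt."""
    return user_sentiment == "positive"


def route_user(user_sentiment: str) -> str:
    """Route feedback so negative experiences never reach the public score."""
    if should_prompt_for_store_rating(user_sentiment):
        return "app_store_rating_prompt"  # public; raises the average
    return "private_feedback_form"        # invisible to other buyers
```

The published average then reflects only the pre-screened happy users, so the rating is a biased estimate of quality even with no fabricated reviews at all.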
Point Four: Pessimism of Quality
In my experience, public perception of the majority of software products is low. While some outliers are heralded for their quality (and that segment seems to be decreasing) they typically operate in large, highly competitive consumer segments. For the majority of software in existence, users have become accustomed to uncaught exceptions, memory leaks, and crashes on a regular basis. Even websites, visited daily by millions, are riddled with obvious and avoidable faults.
Security is another issue. Breaches of large, generally well-regarded companies are a weekly occurrence in the news—including banks, credit-rating agencies, retail stores, and payment processors.
Point Five: Lack of Public Quality Assurances
Generating objective ratings of software quality is an unsolved problem outside of formal verification methods. These are not practical for most software products and—where practical—are extremely time consuming.
Secondly, the government seems unable to purchase high-quality software for itself, let alone effectively regulate it. Overages in public spending on software projects can run into the hundreds of millions of dollars in individual cases.
In this uncertain marketplace, consumers have found some solace in brands. "Nobody was ever fired for buying IBM" is a common refrain. Unfortunately, IBM, like others, has allowed its once-unquestioned software quality to decline. When even prestige brands struggle to offer seamless experiences to their customers, I believe it is not unreasonable for customers to value all software below its inherent worth. After all, if multinational firms with decades of experience can't get it right, what hope does some small startup have?
I don't have a solution to this problem. In the used-auto marketplace, tools like CARFAX have been devised, though a similar mechanism is hard to conceive of for software. Ultimately, when you need to understand the quality of a used car, you take it to a trusted mechanic. In the same spirit, open-source software, where anyone's trusted mechanic can inspect the engine, may be the only credible disclosure mechanism available to those rare firms that endeavor to produce high-quality code and, in turn, high-quality software.