In December 2008 it was discovered that Bernie Madoff may have perpetrated a scheme that defrauded investors of as much as $50B. With a fraud so large, the scandal cut across a wide range of social classes, from the financial aristocracy to the merely comfortable. One of the many questions asked was “how could such a large fraud have escaped detection for so long?” It turns out that people had been trying to blow the whistle on Bernie Madoff for ten years, but the whistle-blowing fell on deaf ears, perhaps because the regulators were simply too impressed with Madoff’s self-described success to do their jobs effectively.
In the wake of that embarrassment, regulators decided they might as well follow up on tips of another fraud that had been reported since at least 1999.
After more than ten years of inaction, the SEC decided to file charges against R. Allen Stanford. It was obviously a difficult decision: is it easier to believe a bunch of competitors who claim that one of the most successful investors in their field is a fraud? Or is it easier to believe a knighted billionaire, generous with his money and with access to his private jet, sponsor and host of some of the best seats in sport? Right up until the end of 2008, it was far easier (and a far better ride) to give deference to the latter and short shrift to the former.
Had it not been for Bernie Madoff, the Stanford fraud of $9B would have been a shocking amount to anybody who puts their trust in government authorities to ensure that the free market is a fair market. Combined with Madoff (and I am sure there will be more), we find $60B of fraud committed by the very people trusted most by the most sophisticated of investors. Aside from those who “lost everything” within this $60B circle, how many of us beyond that circle have lost something because the fraud dragged down our family, our friends, our banks, our 401(k) and pension plans, our insurance companies, etc.? What does this whole series of events and consequences teach us?
I believe the lesson from both of these stories is that we need to pay a lot more attention to the standard disclaimer that “past results are no guarantee of future results,” and that nowhere is this more important than in the disruptive world of technology. Yet the very concept of proprietary vendor lock-in works in direct opposition to free or fair competition. Taken to its extreme, proprietary vendor lock-in can create a situation whereby some accident of history wins a competitive bake-off because the immediate pain of switching is greater than the immediate benefits of switching. Over time this pain-over-pleasure principle creates barriers so enormous that an objectively terrible solution becomes the only choice: one better known for its crashes than for stability or reliability, or one so easily compromised by crackers that it has become the OS of choice for the world’s zombie networks, or one so resource-intensive that even with Moore’s Law, PCs seem slower and slower instead of faster and faster, or, heaven forbid, all three.
Of course I know they are a monopoly. But what choice do we have?
In a free and fair market, we expect competition to sort the bad from the good. We expect the market to work so well that our choices will not be between bad and good, but between good, better, and best (and the only real choice we have to make is how much we want to pay). But that hasn’t worked out in the world of hedge funds. And it hasn’t worked out in the world of software either. Hugely bad choices are prominent in the market because they have benefited from the unwillingness of regulators to act on meaningful claims and reports, and because, unfettered by reality and talked up by charismatic and generous billionaires, they have bent the rules of the game to lock in their customers and marginalize skeptics and dissenters. Lately it has only been when the whole system collapses that anybody starts asking any questions, and ofttimes that’s too late.
The UK government has just taken a major step toward regaining sovereignty over its IT systems and procurement processes. Yesterday it published its Open Source Action Plan. The government has come to recognize that it no longer makes any sense whatsoever to pretend that a proprietary software choice is any better than an open source software choice. It’s about time. As they say:
Open Source has been one of the most significant cultural developments in IT and beyond over the last two decades: it has shown that individuals, working together over the Internet, can create products that rival and sometimes beat those of giant corporations; it has shown how giant corporations themselves, and Governments, can become more innovative, more agile and more cost-effective by building on the fruits of community work; and from its IT base the Open Source movement has given leadership to new thinking about intellectual property rights and the availability of information for re-use by others.
Over the past five years many government departments have shown that Open Source can be best for the taxpayer – in our web services, in the NHS and in other vital public services.
But we need to increase the pace:
1. We want to ensure that we continue to use the best possible solutions for public services at the best value for money; and that we pay a fair price for what we have to buy.
2. We want to share and re-use what the taxpayer has already purchased across the public sector – not just to avoid paying twice, but to reduce risks and to drive common, joined up solutions to the common needs of government.
3. We want to encourage innovation and innovators – inside Government by encouraging open source thinking, and outside Government by helping to develop a vibrant market.
4. We want to give leadership to the IT industry and to the wider economy to benefit from the information we generate and the software we develop in Government.
So we consider that the time is now right to build on our record of fairness and achievement and to take further positive action to ensure that Open Source products are fully and fairly considered throughout government IT; to ensure that we specify our requirements and publish our data in terms of Open Standards; and that we seek the same degree of flexibility in our commercial relationships with proprietary software suppliers as are inherent in the open source world.
This open source strategy addresses these key points. It sets out the steps we need to take across Government, and with our IT suppliers, to take advantage of the benefits of open source.
MP Minister for Digital Engagement
That should go a long way toward restoring true competition to the market. Not only that, but it may create the kind of positive competition whereby multiple approaches can be implemented and evaluated across a modular enterprise-wide architecture.
As billions of bail-out dollars become trillions, it is clear that we need to be realistic about the nature of, and the solutions to, our world-wide software crisis. Today the world spends more than $3T per year on systems that are largely based on vendor lock-in, not on value and not on free and fair competition. What is most shocking about the $3T number is not its sheer size alone, but the fact that fully $1T of it is written off every year when people are forced to abandon their projects before putting them into production. The place to fix that problem is not in any specific piece of software (most of which has 20-30 defects per 1000 source lines of code), but in the fundamental system of competition that is responsible for ensuring that malignant software can be successfully removed in the first place. The UK Government’s decision is a strong step in the direction of properly restoring the right kind of competition in the marketplace. The days of rewarding past performance, especially the performance of amassing billions of dollars based on strategic lock-in, must be put behind us. And we should treat any use of such funds for furthering vendor lock-in as extremely suspect and worthy of an immediate and full investigation.