May 3, 2022•2,234 words
Marx is famous for saying "all that is solid melts into air". Whether that dictum holds under capitalism can be disputed. I think it's far harder to dispute that it holds for software.
Before the COVID times, the NHS got hacked. Specifically, a large number of NHS computers were crippled by malware called WannaCry because the NHS was running Windows XP. XP was deprecated and no longer supported, and thus had unpatched security holes.
Of course the NHS should have upgraded their software. You should upgrade if you have something out of date. This answer comes from the expert world of Software Engineering. It makes sense insofar as more recent software is liable (I stress, liable, not certain) to be more stable, or at the very least supported. XP was not supported, so the damage done was unsurprising.
But think a little: is upgrading viable for a resource-limited organisation? Is it viable for everyone, everywhere, always? Do people even want it, and should they want it, if what they have is sufficient for their needs?
I wish to push this a little deeper and into more uncomfortable territory. Firstly, let me stress: some technical forms are so bad that they should be retired (anything abusing the singleton pattern, for instance). Some things are poorly designed and crappy and need to die. Imagine a car that's released into the world whose engine occasionally just stalls at random. A car like that has to go. Moreover, we cannot seriously get around the problem that security holes are a danger. Once you start using IT systems, it's important to make sure they stay protected.
This raises a genuine question: how much should you rely upon IT systems? This is a very broad question, so let’s focus it a bit. I’ll start with some questions I think a Software Engineer might always ask:
- Do you need to keep the data you’re storing? If you don’t keep it on an electronic system, it can’t be stolen from one.
- Does the device/computer/machine need Internet access? Without that access it's far harder to break into. A similar question is how much things need to be part of a network at all. Isolated machines are also harder to break into (or infect with worms).
Software has its inherent ethical properties, its principles of craft, its points of concern. No one wants to build a system that is technically bad, or that leaks a person's private data, and it's a point of pride to have considered this carefully. Sometimes this means giving someone something with a little less utility for a lot more security. We can, however, ask a question that’s a little more radical and perhaps not allowable from the perspective of sales and business: do you need the system at all?
How this question is answered, and just as importantly who answers it, is quite critical. Last year there were plans (now cancelled) to enable the sharing of GP records and data from the NHS with research companies. The nature of this programme required one to opt out to avoid having one's data shared. This was widely criticised in the UK, being seen at best as an unclear, poorly explained attempt to better enable medical research and at worst a cynical, shortsighted data grab.
The reason I raise this is that such sharing is enabled by the IT infrastructure. The ease with which data can be transferred allows these actions to be taken. A system relies in part upon the goodwill of its owners not to abuse the capacity inherent in it, and is thus vulnerable to a simple political change. Stated like that, people might have been far more hesitant about allowing the digitisation of their records to happen. On the whole I think the benefit of the system, that a doctor anywhere can see a patient's history and thus treat them more effectively and efficiently, is important and outweighs this problem. I do admit that it has this potential to cause issues, and that this is the cost of any centralised system of this form. I don’t know how to resolve such a tension, and am of the opinion there isn’t a viable way to resolve it. To some degree that tension has to be lived with.
However, it's not simply my decision to make - it's everyone's. Yet such things are simply implemented without consultation, or even much forethought.
We're able to do impressive things with technology, but whether that actually amounts to something of benefit to people is a harder question to answer. I think at least an honest question Software Engineers could ask is: do you really need software to improve what you do? Is it necessary? Software incurs its own costs, costs that grow, and doesn't necessarily improve things. Perhaps we should be circumspect in our use of it. That requires a willingness to admit the limits of one's expertise, to say: you may not need or want my help, please think about that. It would probably mean that less stuff gets sold, but I think it is more ethical, more honest and better in the long run for Software Engineering.
So far so good; we at least have a position we can start from. But there's something inherent to code that I think is enabling the worst aspects of Capital: the iterative, rapid adaptability of code, the ease with which it can be shared, the very strengths of code, in fact. Code, with its constant upgrades and maintenance, can sometimes mask a disabling, predatory dependency that traps users in a never-ending reliance upon that system. As an example, think of Apple, who make it incredibly hard to get data off of their products.
The constant upgrading of things, the need to drive "progress", sometimes results in all that is solid melting into air. Code does not necessarily lend itself to a stable existence for the people who ultimately use the tools. How often do useful things get yanked, or someone's learned workflow get completely disrupted? How often is poor design foisted upon a user? How often has perfectly legitimate operation been disrupted by the need to 'update', the imperative of the designer overriding the user (I'm looking at you, PS3)? Code mostly lends itself to continual iteration. This is not quite the same thing as progress.
Here's another way of thinking of this if you're like me and in the world of Software Engineering: today's technical debt is tomorrow's profit. Technical debt is those aspects of a software project that are done cheaply or poorly and that consequently cause problems further down the line in maintaining and updating the software. It's debt: you might need to get something done today, but the person you are tomorrow will pay for it. Good amounts of money are spent continuing to work on something that a prior person did poorly (perhaps that person is yourself). Software is bought, integrated, intermeshed in our lives, and then often shown to be inadequate in some way. Then the real cost appears and the trap is sprung. If you’ve ever been in a large company which madly embraces a new “thing”, only for that thing to be massively detrimental, you know what I’m talking about.
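To make the idea concrete, here is a minimal, hypothetical sketch of how the debt accumulates; the function and customer names are invented for illustration:

```python
# A hypothetical shipping calculator. Each special case was a rushed,
# deadline-driven fix; each one makes the next change a little harder.
def shipping_cost(customer: str, weight_kg: float) -> float:
    if customer == "ACME":        # quick exception added for a big client
        return 0.0
    if customer == "Globex":      # second exception, copied from the first
        return 0.0
    return 4.99 + 1.50 * weight_kg

# The cheaper-to-maintain version the debt eventually forces you to write,
# once someone has paid to untangle the special cases:
FREE_SHIPPING = {"ACME", "Globex"}

def shipping_cost_v2(customer: str, weight_kg: float) -> float:
    if customer in FREE_SHIPPING:
        return 0.0
    return 4.99 + 1.50 * weight_kg

assert shipping_cost("ACME", 3.0) == shipping_cost_v2("ACME", 3.0) == 0.0
assert shipping_cost("Initech", 2.0) == shipping_cost_v2("Initech", 2.0) == 7.99
```

The behaviour is identical; the interest was paid in the time it took someone, later, to discover that two copied-and-pasted branches were really one rule.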
What this creates is a kind of dependence, and it's one that bothers me. One of Winner's principles in Autonomous Technology is "...that technologies be judged according to the degree of dependency they tend to foster, those creating a greater dependency being held to be inferior." (pg. 327) People and institutions buy into software in the hope of achieving their aims better. Of course the nature of capitalism is that businesses need people to keep buying their products. A consequence is that you don't necessarily own the software you use; you lease it, and it might be changed at the whims of the company you buy from.
The unique aspect of software as we've made it is that rapid iteration and the ability to change things allow for a continual churn of products. It unleashes Capitalism in this sense, and I think we're experiencing the consequences of that. Whereas the solidity and stability of objects wasn't guaranteed under Capitalism before software, it was at least a possibility, a feature that could be found in created things (most likely purchased at a premium, to be fair). But I think software, without restraint and thought, will simply undermine that possibility. Ship it, and the ubiquitous networks we've made will always allow us to compensate for our mistakes. Ship it, and we can always change it so that the terms of "ownership" are more favourable to us. Ship it, and become dependent upon us to keep it alive and functional.
It would be too grand to say that software has in fact been overly detrimental to people's lives, that it has continually failed to produce the benefits it's claimed to have. These things need to be examined closely. Doing so requires space for discrimination, in the sense of fair evaluation. This must put aside questions around the cost of systems, the investment involved, the degree of ego or legacy in implementing them. Has this or that thing been of use, been an aid? It also involves understanding who these questions really need to be answered by.
This entails a political answer to these problems, as much as the change in professional practice outlined above. That political answer has to be a challenge to Capital. Despite being a leftist, I don't have a particularly straightforward answer to what that should look like. What does strike me, though, is that the need for this answer has come from something we might have thought un-political in nature. To think that it's just software that is "eating the world" is to mask the real nature of what is going on. Capitalism is eating the world.
: In fact the NHS is still using Windows XP. In the UK, the NHS is under continual strain, slowly being hollowed out and privatised. Parts of it are left to degrade, and its lagging performance is then used to justify that privatisation. Seen in this way, it's at least worth considering that the use of Windows XP is a matter of intended neglect with a political motivation behind it.
: However, how often does this actually get caught and understood? Software engineering and the tech industry get by living somewhat consequence-free (consider, say, criminal convictions being handed out due to accounting errors). One can argue that loss of income is a consequence for certain failures, as consumers become aware of problems; but that fails to really grasp the issue of responsibility. The airline industry is (or at least was) held to a degree of responsibility that is frankly alien to Tech. And the danger lies in sheer ubiquity: airplanes are limited to the sky; software is quickly getting everywhere, even where it doesn't belong.
Let me put it to you a different way, with an example. A few years ago Apple had a security flaw so bad that it should have immediately triggered some form of investigation and prompted talks in government about how to properly legislate and regulate software. The flaw was the result of simply inadequate QA testing, and is unacceptable as the product of a company whose software is used pretty much everywhere. As it was, neither the story nor the magnitude of the error was well understood.
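The text doesn't name the flaw, so as a purely hypothetical illustration of the class of bug in question, here is a Python sketch of a verification routine where a control-flow slip silently skips a security check; the function and its parameters are invented:

```python
def verify_connection(signature_ok: bool, hostname_ok: bool) -> bool:
    """Hypothetical handshake check with a control-flow slip."""
    if not signature_ok:
        return False
    return True            # bug: hostname_ok is never consulted
    return hostname_ok     # unreachable -- the check that was meant to run

# The bug in action: a connection with a forged hostname still "verifies".
assert verify_connection(signature_ok=True, hostname_ok=False) is True
```

A single negative QA test, one that hands the function a bad hostname and demands `False`, is all it would take to catch this kind of slip before shipping. That is what "simply inadequate QA testing" means in practice.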
: One issue was the possibility that data would be given to insurance companies.
: A way of thinking about this is whether the good that is of chief concern is actually, manifestly aided by the technological development (the Aristotelian in me is now showing). This requires you to have adequately grasped the nature of the good in question, which, as an Engineer, it's at least worth considering you might not have.
Here’s an example of that. If I wanted to introduce online voting for Government elections, aside from the usual questions of security and fairness (making sure the system is not exploitable), one would also insist that the old methods of voting be retained. This is because the good in question is democracy, which means that anyone and everyone eligible to vote must have equal access to voting. Online voting might in fact be incredibly convenient and easy for most people, but it comes with assumptions: easy access to a computer, technical proficiency with one, and so on. Voting by paper is simple and requires little to no prior knowledge to explain.
: I mentioned the singleton pattern and technical debt. Software Engineering has established answers of sorts for how to do things and how not to, but this was obviously not the case a short while ago (let's say two decades at least). People tried approaches and later, much later, we found that those approaches were bad and caused all sorts of problems. In some cases, it's unsettling to think who paid the price for that learning. I don't believe it was the tech companies. I object to ordinary people suffering the cost of technological innovation or the establishment of proper practice. I suspect the same kinds of issues will play out as IoT devices become more prominent.
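For readers outside Software Engineering, a minimal sketch of why abusing the singleton pattern bites (the `Database` class here is invented for illustration): every caller silently shares one hidden instance, so state leaks between parts of a program that look independent.

```python
class Database:
    """A hypothetical singleton: the whole program shares one instance."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.records = []
        return cls._instance

def save(record):
    # Every caller silently reaches the same hidden global state.
    Database().records.append(record)

# Two apparently independent "databases" are in fact the same object,
# so unrelated code (and unrelated tests) interfere with each other.
a, b = Database(), Database()
save("order-1")
assert a is b
assert a.records == ["order-1"]
```

The lesson took years to establish: hidden shared state makes code hard to test and hard to change, and the people who paid for that lesson were mostly the users of the systems built this way.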