Automation is often a bad idea
September 29, 2024 • 862 words
I'm several months into managing a project to automate a few business processes. Automation is an area I prefer not to get involved in unless it actually makes life better for people, but this project is supposed to automate processes that currently take several hours of repetitive manual work, and freeing staff to spend that time with clients instead is just as important, because it helps with retention. Sometimes, during the discovery phase of a project like this, we find processes that could be discarded or improved without any automation at all, so there's that.
I do have misgivings about automation in general, though. Corporations are automating pretty much everything they can, often for the wrong reasons. So much so that we often end up having a good moan about it in the pub:
'I was saying to the girl behind the counter, she was doing herself out of a job. And look what happened, with all the branches closing!' my partner said, as he recounted his visit to a bank where staff were pushing him into doing his banking online.
'It's one of the reasons I refuse to do the self-service thing in Asda. They're doing the checkout girls out of a job, and Asda aren't paying me to do the scanning. I haven't filled up at the petrol station there since they made it self-service either. It's really not a good look when a company doesn't want to deal with its own customers,' I chimed in.
Almost everyone feels the same way, it seems. A few months ago, vehicles queued at that petrol station most mornings; these days I rarely see more than a couple there. A large number of people refuse to scan their own items in the store, opting instead to wait much longer at the staffed checkouts. When I need to make a trip into the city, I usually drive, because the automated self-service ticket system for the train is so inconvenient, and it's so easy to get wrongly fined even when the machines work.
When it comes to the technical side of things, there are multiple problems with automation: it generally works as long as the system and environment remain static enough, and breaks when there are minor changes. The more adaptable we make the automation, the less useful it becomes.
For example, if we use Coded UI Test to automate the regression testing of Web applications, the tests will fail if some UI element has slightly different content, or if the element is a pixel or two away from where it was previously, even though there's no actual defect. We could modify the automation to ignore certain UI elements, or use ranges instead of fixed values, but the more we do that, the less useful the automation becomes. And if we have to modify the automation after each test run, the testing is no longer automated.
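To make that concrete, here's a minimal sketch of the trade-off using Python and Selenium rather than Coded UI Test (the page URL, element ID, and expected values are all hypothetical):

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://example.test/login")  # hypothetical page

# Hypothetical element on that page.
banner = driver.find_element(By.ID, "welcome-banner")

# Brittle assertions: these fail on any copy tweak or one-pixel layout
# shift, even though nothing is actually broken.
assert banner.text == "Welcome back, valued customer!"
assert banner.location == {"x": 120, "y": 48}

# Looser assertions: tolerate small drift...
assert "Welcome" in banner.text
assert abs(banner.location["x"] - 120) <= 5
# ...but every tolerance we add is a class of real defect the test can
# no longer detect.

driver.quit()
```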
A similar principle applies to self-service machines. They're programmed for a case where the barcodes are legible, the items are held a certain distance from the scanner, the customer scans each item within a specific number of seconds, the items are placed a certain way in a designated area, an attendant intervenes within a certain timeframe to authorise certain items... The system breaks if anything deviates too much from those conditions, and when the machines break continually, which happens quite frequently, customers get frustrated and go elsewhere.
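The machine's logic amounts to a chain of narrow checks, something like the toy sketch below (every threshold and rule here is hypothetical, but the shape is the point):

```python
# Toy model of a self-service checkout's "happy path": any deviation
# halts the transaction and waits for a human.
MAX_SCAN_GAP_SECS = 20        # customer must scan the next item in time
BAGGING_TOLERANCE_GRAMS = 15  # expected vs. measured weight in bagging area

def next_scan(barcode_readable, secs_since_last_scan,
              expected_weight_g, measured_weight_g, age_restricted):
    if not barcode_readable:
        return "HALT: unreadable barcode, wait for attendant"
    if secs_since_last_scan > MAX_SCAN_GAP_SECS:
        return "HALT: scan timeout, wait for attendant"
    if abs(expected_weight_g - measured_weight_g) > BAGGING_TOLERANCE_GRAMS:
        return "HALT: unexpected item in bagging area"
    if age_restricted:
        return "HALT: wait for attendant to authorise"
    return "OK: scan next item"

# A crumpled label, a slow customer, or a bag resting on the scale each
# trips a halt, even though no one has done anything wrong.
print(next_scan(False, 5, 450, 450, False))
```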
Or worse. Suppose, hypothetically, we decided to automate financial processes: the software might have a defect that introduces a large discrepancy in the accounts, which could then lead to wrongful accusations of fraud. The same risk exists with customer-facing systems.
How thoroughly is the software tested, and what records are kept of the testing? How can the software and the data be verified at each stage of processing? Is there an audit trail? Is there traceability at multiple points in the system? And is all of that information readily available to expert witnesses, should someone end up prosecuted?
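One safeguard those last questions point at is a tamper-evident audit trail. Here's a minimal sketch of the idea (a hypothetical design, not any specific product): each record carries the hash of the one before it, so altering or removing an entry breaks the chain.

```python
import hashlib, json, time

def append_record(trail, event):
    """Append an event, chaining it to the hash of the previous record."""
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    record = {"ts": time.time(), "event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    trail.append(record)

def verify(trail):
    """Recompute every hash; any edit or deletion is detectable."""
    prev_hash = "0" * 64
    for record in trail:
        body = {k: v for k, v in record.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

trail = []
append_record(trail, {"account": "12345678", "delta_pence": -15000})
append_record(trail, {"account": "12345678", "delta_pence": 15000})
assert verify(trail)
```

A chain like this doesn't prove the software computed the right numbers, but it does let an independent expert verify that the records presented in evidence are the records the system actually wrote.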
A lot of the automation push is driven, I think, by the failure of 'AI'. CEOs were promised it could automate away people's jobs, and would be superior to humans at whatever task it was given. Silicon Valley 'thought leaders' scaremongered about AI becoming conscious and taking over the world, like Skynet off The Terminator, or some shit like that. The scaremongering was largely a marketing ploy to oversell the capabilities of 'AI' to investors.
The corporations don't have much to show for it, really, aside from chatbots with much larger dictionaries. It turns out that 'AI' had already been in use, for decades, pretty much everywhere it could be useful - digital cameras, spelling and grammar checking, translation, video game characters, etc. Any computer scientist could have pointed out, from the beginning, that a computer is no less deterministic than a pocket calculator.
We have ChatGPT, which can generate working code, but engineering expertise is still required to define the problem adequately, and the result is still what a junior developer would produce, because it's borrowing from Stack Overflow and GitHub. I certainly wouldn't use ChatGPT to generate a blog post, because nobody wants to read soulless generative crap synthesised piecemeal from other people's work.