Digital Ethics Summit 2025: Trust, Risk, and Blame
December 4, 2025 • 854 words
I attended the TechUK Digital Ethics Summit on 3 December and found it an odd and unsettling experience. It was very informative, and I am grateful to TechUK for giving me, and a couple of our PhD researchers, free tickets. But looking back on the day as a whole, there was a 'progress of sentiments' that needs commenting on.
For the first few sessions there was an interesting focus on assurance: how those deploying AI systems can provide evidence to their internal governance and to their customers that those systems are safe and compliant. There was a general theme that 'assurance is the alignment of ethics and commercial interests' and that, as the Minister for AI and Online Safety put it, assurance enables trust and human agency. There was also a message that global businesses are happy to implement the 'gold standard' of regulation uniformly, and that there is no advantage for jurisdictions in having weaker regulations.
Then the theme of trust began to emerge as the dominant normative concept. However, this was subject to a subtle and unmentioned distortion: the focus was not on making AI systems trustworthy but on making users trust them. Speakers again and again circled round and only just avoided blaming users for not trusting AI. No one quite echoed Mustafa Suleyman's Inverse Ratner, but there was a growing sense that those people and businesses who were not enthusiastically taking up AI were somehow at fault.1
Then there was an unremarked but striking pivot. A speaker from a very large global consultancy recalled that when they first tried to roll out AI systems in their services, their existing governance and risk assessment protocols flagged the rollout as too risky to go ahead. What was the response? 'We had to re-assess our appetite for risk'. This is a firm which audits some of the major global corporations. WTAF?
Then the final panel were asked to close the event with a take-home message for delegates. I am going to focus on what one speaker said, because they were clear and explicit and, given their role, they need to be held publicly accountable for statements like this. But no one on the panel or in the room was shocked or surprised; by that point they seemed to be expressing the general mood of the meeting (it was a session looking forward to 2026).
During the panel discussion, Gopal Ramchurn echoed the earlier theme by saying 'we face a crisis of trust in AI world'. The crisis he was talking about is clearly that of pesky users resisting the inevitable progress of AI because they don't trust it. Then, when the panel gave their closing messages on what delegates should focus on in 2026, Ramchurn said:
Take more risks on use of AI (in your business)
Reflecting on the day as a whole, it was possible to see this narrative emerging. Our rulers, whether in government or global businesses, are deeply committed to the view that amelioration of the human condition, or perhaps just a good future for some,2 requires economic growth, as measured by GDP. And the only story they have about how we can have infinite growth on a finite planet is AI. As the Minister put it in his opening keynote:
Growth requires increased productivity and that, as the OBR has recently said, requires widespread AI adoption.
Against this set of background beliefs and values, people and businesses who do not rush to incorporate AI into their work are harming humanity and can legitimately be blamed for that. There is a moral imperative to use AI! Even if doing so is risky: to your business, your employees, your customers, and, let's be frank, society. You morally ought to take those risks. Of course, you shouldn't do so recklessly; you should do a risk assessment3 and put appropriate mitigations in place. But you are morally obliged to have an appetite for risk.
1. Discussing this afterwards with a friend who works in IT resilience and security for a bank, they told me of a recent presentation to staff which had the slogan: "Don't fear the use of AI in the company, fear the company that doesn't use AI".
2. The speaker from Anthropic said 'there will be new winners and new losers', what he called 'socio-cultural realignment', as if having losers was inevitable.
3. In the morning, when we were still talking about assurance, there was a question from the floor which the panel ignored but which raises interesting issues here: whether risk assessments need a third dimension, alongside likelihood and impact, namely velocity. It would have been nice to hear the final panel answer that. (A sketch of how velocity might figure in a risk score follows below.)