📅 Daily Note: July 11, 2025

Digital and mission-driven government: digital, burdens and networks – Richard Pope’s first essay of three looking at how his Platformland thinking “can provide a unifying role in the successful delivery of the government’s missions”.

In the digital age the answer is more subtle: using technology and digital-age design to systematically eliminate ‘administrative burdens’, one by one.

# – micropost 22941


How is it that I keep seeing these posts where people have made all these cool things with image generation AI, and I only ever get absolutely terrible results?!

# – micropost 22953


Is it worth bothering with LinkedIn articles any more? Seems easier and more engaging to just whack even longer-form content into posts, as long as it fits into the character limit (3,000 characters, or 500 words or so).

# – micropost 22954


James Plunkett: How to save bureaucracy from itself

I’m struck by how common it is these days to hear people working in government say some version of ‘bureaucracy is broken’, ranging from senior civil servants to political appointees.

These are thoughtful people, so their point isn’t that everything in government is broken. They’re just saying that the problem runs deep — that it’s not enough to try harder, or to run things better, because at least part of the problem relates to the logic by which bureaucracy functions.

If that’s right, what do we do about it? A principle I find helpful is the idea from systems theory that when a system fails we need to work at the level of the problem.

# – micropost 22957


Tom Loosemore: behind the scenes of the Universal Credit Reset – really interesting podcast episode.

# – micropost 22960


📅 Daily Note: December 11, 2024

Digitisation, politicisation and the civil service by Martha Lane Fox:

Today’s reality is clear: digital skills are no longer optional extras. Data analysis, digital service design, agile project management, let alone the nuance needed in understanding new AI tools, have become as essential to governance as policy writing and stakeholder management. This shift creates real tensions within our supposedly neutral institutions.

#


AI product management in high stakes domains – Alan Wright shares a bunch of approaches that have worked well for him.

#


Our positions on generative AI – Steve Messer details a sensible set of stances on the ethical and effective use of LLMs and so forth.

AI is more of a concept, but generative AI as a general purpose technology has come to the fore due to recent developments in cloud-based computation and machine learning. Plus, technology is more widespread and available to more people, so more people are talking about generative AI – compared to something even more ubiquitous like HTML.

#


Lloyd has written up how he is using Micro.blog and a custom script to deliver a daily summary of his micro-posting to his WordPress blog.

There’s more than one way to skin this cat!
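For flavour, here's a minimal sketch of that general idea in Python – not Lloyd's actual script, which his post describes – assuming the Micro.blog JSON feed and the WordPress REST API with an application password; the feed URL, site URL and credentials below are placeholders.

```python
# Rough sketch only (not Lloyd's script): collect today's items from a
# Micro.blog JSON feed and publish them as one daily-summary post via the
# WordPress REST API. The URLs and credentials are placeholders.
from datetime import date, datetime

import requests

FEED_URL = "https://example.micro.blog/feed.json"    # hypothetical feed URL
WP_API = "https://example.com/wp-json/wp/v2/posts"   # hypothetical WordPress site
WP_AUTH = ("username", "application-password")       # placeholder credentials


def todays_items(feed_url: str) -> list[dict]:
    """Return the feed items (JSON Feed format) published today."""
    feed = requests.get(feed_url, timeout=30).json()
    today = date.today()
    return [
        item
        for item in feed.get("items", [])
        if datetime.fromisoformat(
            item["date_published"].replace("Z", "+00:00")
        ).date() == today
    ]


def build_summary(items: list[dict]) -> str:
    """Join the day's microposts into a single block of HTML."""
    return "\n<hr/>\n".join(item.get("content_html", "") for item in items)


def publish(content: str) -> None:
    """Create the daily-summary post on the WordPress site."""
    payload = {
        "title": f"Daily micro-posts: {date.today():%d %B %Y}",
        "content": content,
        "status": "publish",
    }
    response = requests.post(WP_API, json=payload, auth=WP_AUTH, timeout=30)
    response.raise_for_status()


if __name__ == "__main__":
    items = todays_items(FEED_URL)
    if items:
        publish(build_summary(items))
```

Run on a daily schedule (cron or similar), something like this would roughly reproduce the effect – Lloyd's write-up has the real details.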

#


📅 Daily note for 30 October 2024

Am thinking again about the structure of my blogging here. I’d much rather that the individual paragraphs in these daily notes existed as posts in their own right, as well as being collected together for the whole day. That way I could publish each item as soon as I type it, rather than waiting until the end of the day. Main inspiration here is Dave Winer⬈, while Coté⬈ does it but keeps the posts separated rather than presenting them as daily collections. #


Richard Pope (again!) on services that work harder⬈. #


Dave Rogers: Toxic Technology⬈. I’d not come across this before (how?!) but Sarah Drummond⬈ linked to it, so thanks to her 🙂 #


Paul Maltby: Why public sector procurement needs a serious rethink to deliver on the promise of AI and tech⬈. #


Sharon Dale⬈ shared TidyCal⬈ on LinkedIn – basically Calendly⬈ but more flexible and a lot cheaper. I have set mine up here⬈. #


📅 Daily note for 8 July 2024

In the middle of a house move, so am working on my laptop rather than my main computer, and am on the sofa – my new desk doesn’t arrive until Wednesday!


The computing revolution: How the next government can transform society with ethics, education and equity in technology – the British Computer Society’s vision for technology under the new government.

It mostly seems to involve more people being chartered… with the British Computer Society 🤷‍♂️


“I Will F**king Piledrive You If You Mention AI Again” – my thoughts exactly. This post has been doing the rounds a lot, but that’s because it’s good!


Bear seems an interesting lightweight blogging platform.

📖 Countering the AI hype

This is a re-publish of a thing that went on LinkedIn, my newsletter, and the Digital Leaders newsletter. I’ve backdated the published date on this post to reflect this.

Summary: all this tech called ‘AI’ is genuinely exciting. But the impact of it is unlikely to be felt for several years. Don’t expect quick results, and don’t expect them to come without a hell of a lot of hard, boring work first.

It’s hard to look at LinkedIn these days without being instantly confronted by AI enthusiasts, almost foaming at the mouth as they share their vision for how the public sector can save millions, if not billions, of pounds by simply using AI.

It sounds so easy! If I were a chief executive, I would be reading this stuff and thinking to myself, ‘why the hell aren’t my people doing this already?’.

In fact, I am hearing from digital and technology practitioners in councils all over the country that this is exactly what is happening: the AI hype is putting pressure on teams to start delivering on some of these promises, and to do so quickly. I find this troubling.

It’s always worth referring to my 5 statements of the bleedin’ obvious when it comes to technology in organisations:

  1. If something sounds like a silver bullet, it probably isn’t one
  2. You can’t build new things on shaky, or non-existent, foundations
  3. There are no short cuts through taking the time to properly learn, understand and plan
  4. There’s no such thing as a free lunch – investment is always necessary at some point and it’s always best to spend sooner, thoughtfully, rather than later, in a panic
  5. Don’t go big early in terms of your expectations: start small, learn what works and scale up from that

How does this apply to using AI in public services? Here’s my take on the whole thing. Feel free to share it with people in your organisation, especially if you think they may have been spending a little too long at the Kool Aid tap:

  • The various technologies referred to as ‘AI’ have huge potential, but nobody really understands what that looks like right now
  • Almost all the actual, working use cases at the moment are neat productivity hacks that mostly make life easier but don’t deliver substantial change or, indeed, benefits
  • Before we can come close to understanding how these technologies can be used at scale, we need to experiment and innovate in small, controlled trials and learn from what works and what doesn’t
  • Taking these technologies beyond handy productivity hacks and into genuinely transformative change will involve a hell of a lot of housekeeping first: accessing and cleaning up data is a big one. Ensuring the other sources the technology learns from (such as web page content) are of sufficient quality is another. Bringing enough people up to the level of confidence and capability needed to execute this work at scale is a third – and there’s a lot more.
  • The environmental impact of these technologies is huge, and many organisations going ham on AI also happen to have declared climate emergencies! How is that square being circled? (Spoiler – it isn’t.)
  • The choice of AI technology partner is incredibly important, and significant market testing will be required before operating at scale. There’s an easy option on the market that is picking up a lot of traction right now, simply because it’s just there. This is not a good reason to use a certain technology provider. Organisations must be very wary of becoming addicted to a service that could see prices rocket overnight. More important, perhaps, is whether you can trust a supplier, or those that supply bits of tech to them, to always do the right thing with your data. There’s always going to be an element of risk here: but at least identify it, and manage it.
  • Lastly, the outputs of these things cannot be taken on trust: they have to be checked for bias, inaccuracies and general quality. Organisations need to have an approach to ensuring checks and balances are in place, otherwise all manner of risks come into play, from the embarrassing to the potentially life-threatening.

This ended up being a lot longer than I first imagined. But I guess that just shows that this is a complex topic with a whole host of things that need to be considered.

Just remember – any message you see claiming that AI is a technology that takes hard work away for minimal investment or effort is at best guesswork and at worst an outright lie.

Related to this post is a set of slides I presented to a conference in Glasgow: