📅 Daily Note: December 11, 2024

Digitisation, politicisation and the civil service by Martha Lane Fox:

Today’s reality is clear: digital skills are no longer optional extras. Data analysis, digital service design, agile project management, let alone the nuance needed in understanding new AI tools, have become as essential to governance as policy writing and stakeholder management. This shift creates real tensions within our supposedly neutral institutions.

#


AI product management in high stakes domains – Alan Wright shares a bunch of approaches that have worked well for him.

#


Our positions on generative AI – Steve Messer details a sensible set of stances on the ethical and effective use of LLMs and so forth.

AI is more of a concept, but generative AI as a general purpose technology has come to the fore due to recent developments in cloud-based computation and machine learning. Plus, technology is more widespread and available to more people, so more people are talking about generative AI – compared to something even more ubiquitous like HTML.

#


Lloyd has written up how he is using Micro.blog and a custom script to deliver a daily summary of his micro-posting to his WordPress blog.

There’s more than one way to skin this cat!
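
For anyone tempted to wire up something similar, here’s a rough sketch of the idea in Python – it isn’t Lloyd’s actual script, and the feed URL, WordPress address and credentials are placeholders – which reads the day’s posts from a Micro.blog site’s JSON Feed and publishes them as a single summary post through the WordPress REST API.

```python
# Rough sketch (not Lloyd's actual script): collect today's posts from a
# Micro.blog-hosted site's JSON Feed and publish them as one daily summary
# post on a WordPress blog via the WordPress REST API.
# The feed URL, WordPress site and credentials below are all placeholders.

from datetime import date
import requests

FEED_URL = "https://example.micro.blog/feed.json"    # hypothetical Micro.blog feed
WP_API = "https://example.com/wp-json/wp/v2/posts"   # hypothetical WordPress site
WP_AUTH = ("wp-username", "application-password")    # WordPress application password


def todays_items(feed_url: str) -> list[dict]:
    """Return JSON Feed items published today."""
    feed = requests.get(feed_url, timeout=30).json()
    today = date.today().isoformat()
    return [item for item in feed.get("items", [])
            if item.get("date_published", "").startswith(today)]


def build_summary(items: list[dict]) -> str:
    """Join each micro-post's HTML into one summary body."""
    return "\n<hr />\n".join(item.get("content_html", "") for item in items)


def publish_summary(html: str) -> None:
    """Create a published WordPress post containing the day's micro-posts."""
    payload = {
        "title": f"Daily note for {date.today():%d %B %Y}",
        "content": html,
        "status": "publish",
    }
    requests.post(WP_API, json=payload, auth=WP_AUTH, timeout=30).raise_for_status()


if __name__ == "__main__":
    items = todays_items(FEED_URL)
    if items:
        publish_summary(build_summary(items))
```

Run on a schedule at the end of each day (cron, or whatever you have handy), it gives the same effect: micro-posts throughout the day, one tidy round-up on the main blog.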

#


📅 Daily note for 30 October 2024

Am thinking again about the structure of my blogging here. I’d much rather that the individual paragraphs in these daily notes existed as posts in their own right, as well as being collected together for the whole day. That way I could publish each item as soon as I type it, rather than waiting till the end of the day. The main inspiration here is Dave Winer⬈, while Coté⬈ does something similar but keeps the posts separate rather than presenting them as daily collections. #


Richard Pope (again!) on services that work harder⬈. #


Dave Rogers: Toxic Technology⬈. Not come across this before (how!?) but Sarah Drummond⬈ linked to it so thanks to her 🙂 #


Paul Maltby: Why public sector procurement needs a serious rethink to deliver on the promise of AI and tech⬈. #


Sharon Dale⬈ shared TidyCal⬈ on LinkedIn – basically Calendly⬈ but more flexible and a lot cheaper. I have set mine up here⬈. #


📅 Daily note for 8 July 2024

In the middle of a house move, so am working on my laptop rather than my main computer, and am on the sofa – my new desk doesn’t arrive until Wednesday!


The computing revolution: How the next government can transform society with ethics, education and equity in technology – the British Computer Society’s vision for technology under the new government.

It mostly seems to involve more people being chartered… with the British Computer Society 🤷‍♂️


“I Will F**king Piledrive You If You Mention AI Again” – my thoughts exactly. This post has been doing the rounds a lot, but that’s because it’s good!


Bear seems an interesting lightweight blogging platform.

📖 Countering the AI hype

This is a re-publish of a thing that went on LinkedIn, my newsletter, and the Digital Leaders newsletter. I’ve backdated the published date on this post to reflect this.

Summary: all this tech called ‘AI’ is genuinely exciting. But the impact of it is unlikely to be felt for several years. Don’t expect quick results, and don’t expect them to come without a hell of a lot of hard, boring work first.

It’s hard to look at LinkedIn these days without being instantly confronted by AI enthusiasts, almost foaming at the mouth as they share their vision for how the public sector can save millions, if not billions, of pounds by simply using AI.

It sounds so easy! As a chief executive I would be reading this stuff and thinking to myself, ‘why the hell aren’t my people doing this already?’.

In fact, I am hearing from digital and technology practitioners in councils all over the country that this is exactly what is happening: the AI hype is putting pressure on teams to start delivering on some of these promises, and to do so quickly. I find this troubling.

It’s always worth referring to my 5 statements of the bleedin’ obvious when it comes to technology in organisations:

  1. If something sounds like a silver bullet, it probably isn’t one
  2. You can’t build new things on shaky, or non-existent, foundations
  3. There are no short cuts through taking the time to properly learn, understand and plan
  4. There’s no such thing as a free lunch – investment is always necessary at some point and it’s always best to spend sooner, thoughtfully, rather than later, in a panic
  5. Don’t go big early in terms of your expectations: start small, learn what works and scale up from that

How does this apply to using AI in public services? Here’s my take on the whole thing. Feel free to share it with people in your organisation, especially if you think they may have been spending a little too long at the Kool Aid tap:

  • The various technologies referred to as ‘AI’ have huge potential, but nobody really understands what that looks like right now
  • Almost all the actual, working use cases at the moment are neat productivity hacks that mostly make life easier but don’t deliver substantial change or, indeed, benefits
  • Before we can come close to understanding how these technologies can be used at scale, we need to experiment and innovate in small, controlled trials and learn from what works and what doesn’t
  • Taking the use of these technologies beyond handy productivity hacks and into the genuinely transformative change arena will require a hell of a lot of housekeeping first: accessing and cleaning up data being a big one. Ensuring the other sources the technology learns from (such as web page content) are of sufficient quality is another. Bringing enough people up to the level of confidence and capability needed to execute this work at scale is a third – and there’s a lot more.
  • The environmental impact of these technologies is huge, and many organisations going ham on AI also happen to have declared climate emergencies! How is that square being circled? (Spoiler – it isn’t.)
  • The choice of AI technology partner is incredibly important and significant market testing will be required before operating at scale. There’s an easy option on the market that is picking up a lot of traction right now, because it’s just there. This is not a good reason to use a certain technology provider. Organisations must be very wary of becoming addicted to a service that could see prices rocket overnight. Perhaps more important still is whether you can trust a supplier, or those that supply bits of tech to them, to always do the right thing with your data. There’s always going to be an element of risk here: but at least identify it, and manage it.
  • Lastly, the quality of the outputs of these things cannot be taken on trust, and has to be checked for bias, inaccuracies and general standards. Organisations need to have an approach to ensuring checks and balances are in place, otherwise all manner of risks come into play, from the embarrassing to the potentially life-threatening.

This ended up being a lot longer than I first imagined. But I guess that just shows that this is a complex topic with a whole host of things that need to be considered.

Just remember – any message you see claiming that AI is a technology that takes hard work away for minimal investment or effort is at best just guesswork and at worst an outright lie.

Related to this post is a set of slides I presented at a conference in Glasgow.

📅 Daily note for 22 January 2024

I am running a six-week online course about making a success of digital in your organisation. You can find out more and book on the SensibleTech website.

Neil Lawrence’s GovCamp write up (Medium, meh).


AI, data, and public services from Jerry Fishenden:

But technology alone can’t solve complex political, social, and economic problems. And that includes AI. Its evangelists conveniently overlook significant problems with accountability and discrimination, the inherent tendency of some AI models to hallucinate and falsify, and an eye-watering environmental impact. And then add into this toxic mix the inaccurate and derivative nature of systems like ChatGPT…

…Along with the need for a less hyperbolic and more scientific approach to AI itself, the current state of government data isn’t exactly ideal for implementing AI given it relies on access to high quality, accurate data and metadata. But the National Audit Office reports that government “data quality is poor” and “a lack of standards across government has led to inconsistent ways of recording the same data.”


“User Centred IT: Why ‘best practice’ isn’t good enough in the domain of IT” (via NeillyNeil).

“Sharing our learning from SDinGov 2023” – some lovely nuggets in here from the service transformation team at Essex County Council.

The stuff Jukesie uses.