Daily note for 19 January 2024

A minor innovation in these daily notes – pulling out the occasional quote from some of the links, and then using a horizontal line to provide some separation. Also using the lines to make it clear when a multiple-paragraph comment from me is over.

Like this!:

“I Made This”:

In its current state, generative AI breaks the value chain between creators and consumers. We don’t have to reconnect it in exactly the same way it was connected before, but we also can’t just leave it dangling. The historical practice of conferring ownership based on the act of creation still seems sound, but that means we must be able to unambiguously identify that act. And if the same act (absent any prior legal arrangements) confers ownership in one context but not in another, then perhaps it’s not the best candidate.


“Designing service at scale” – loads of good reflection and advice in here.

Cool? No. Useful? Probably!

Daily note for 22 November 2023

Raindrop is very good for social bookmarking it turns out. Mine are here.

As well as Neilly Neil’s welcome return to blogging, Lloyd is also publishing stuff on a more regular basis. This can only be a good thing. Tuesday’s was a good one, I thought.

Some awesome advice here on how to write a blog post.

Anne McCrossan is great at lots of things and one of those things is data. Found this post from her about data as a utility really interesting.

OpenAI’s Misalignment and Microsoft’s Gain – Ben Thompson’s take on the ongoing OpenAI kerfuffle. All this stuff just makes me nervous about the whole AI thing. Potentially game-changing, yes, but currently stewarded by bozos.

Daily note for 16 November 2023

Ouch, nearly a week since my last note on here.

I’ve been having a quiet week this week and it has done me a lot of good. Slowed down the pace a bit, spent a (little) bit more time outside, made some space to work on some things that are starting to come to fruition.

The main example of that is the Local Government Digital Quality Framework, which is my attempt at coming up with a scalable framework for councils to be able to figure out where they are at with digital design, data and technology. Most importantly, it also helps them decide where they want to get to, and how.

I’ll write a dedicated post about it though, as there’s a fair bit to say.

Was feeling sad about the dying art of social bookmarking, reading this by Harold Jarche. In the comments someone recommended Raindrop.io, which looks neat and I am going to have a play.

Am finding my Google-powered emails are struggling to get through some organisations’ spam filters all of a sudden. Shane and Steph recommended taking a look at DKIM records and things like that, so I did.
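For anyone in the same boat: deliverability checks like these usually come down to a couple of DNS TXT records, SPF and DKIM. A minimal sketch of what they look like, zone-file style (the domain, selector and truncated key below are all hypothetical, not my actual records):

```
; SPF: lists which servers may send mail for the domain.
; "include:_spf.google.com" authorises Google-hosted email; "~all" soft-fails the rest.
example.com.                    TXT  "v=spf1 include:_spf.google.com ~all"

; DKIM: publishes the signing public key at <selector>._domainkey.<domain>.
; Google Workspace's default selector is "google".
google._domainkey.example.com.  TXT  "v=DKIM1; k=rsa; p=MIGfMA0G..."
```

You can inspect a domain's existing records with `dig +short TXT example.com` (and the equivalent for the `_domainkey` name) before changing anything.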

The different ‘flavours’ of service design – by Emma Parnell (subscribed!).

The Future of the Blogosphere – “Yet, despite its very different political-economic DNA, the blogosphere has become enshittified as clearly as Facebook, Google, or Amazon. Not just at the level of aging software, but at the level of the aging people who inhabit it, maintain it, and continue to churn out content on it, though at a rapidly decelerating rate.” Ouch.

Trustworthy AI in Government + Public Services — A self assessment tool from Oxford Insights.

Daily note for 10 October 2023

Props to Doug for pointing out this free course called Mastering Systems Thinking. Am giving it a go!

Stockport Council published Towards a digital solution to reduce delays in transferring patients to social care.

Hurrah for Adele Gilpin and the West Northants digital folk for working in the open on their new blog!

Dan Hon writes in his newsletter about the imminent enshittification of Substack. This is not news I want to hear. I replied on LinkedIn:

I guess as well as Quora the other comparison is with Medium, which started out offering an amazing user experience for writers and an ok one for readers, but now seems to want people to log in just to read content.

The problem at the moment is that the experience for writers on every email platform I have tried recently has been so awful, it’s pushing people towards Substack, despite the fact that there are these warning signs for readers.

I’ve been coping with the slow death of Twitter by making more use of my blog, and maybe I ought to start archiving newsletters on there too, just to keep an open web version always available.

So expect to see a slew of posts on here soon, copied and pasted from my newsletters 😀

AI isn’t a drill, and your users don’t want holes

The Tyranny of the Marginal User – this is excellent:

What’s wrong with such a metric? A product that many users want to use is a good product, right? Sort of. Since most software products charge a flat per-user fee (often zero, because ads), and economic incentives operate on the margin, a company with a billion-user product doesn’t actually care about its billion existing users. It cares about the marginal user – the billion-plus-first user – and it focuses all its energy on making sure that marginal user doesn’t stop using the app. Yes, if you neglect the existing users’ experience for long enough they will leave, but in practice apps are sticky and by the time your loyal users leave everyone on the team will have long been promoted.

Brief notes on why I am cautious on AI/LLMs

I was asked the other day for my quick view on the current buzz around AI and large language models, machine learning etc.

Pasting here for posterity!

I think my slightly cautious view on LLMs etc is based on two things:

First, it’s being latched onto by people as a way of leap-frogging over doing hard work. Like it will solve a load of problems without anyone having to put any effort into it. It won’t. And it also won’t stop you having to do all the other hard work that needs to be done. People’s expectations need managing around it.

Also, related to this: organisations with Word documents on their websites, or staff rekeying data from one system to another, should stop farting about thinking they can do AI and instead get the basics right first.

Second, it’s a very new technology with huge ethical implications, and nobody knows what they are doing. It’s a bit of a wild west out there: a lot of the companies behind this tech, like OpenAI, who run ChatGPT, are under no obligation to do the right thing, and are run and owned by some pretty shady individuals and corporations. Where are the controls? How do we know how the information we put into these things is recycled into the machine, and churned out to other users?

None of this means don’t use it, and none of it says that LLMs etc aren’t very exciting and potentially game changing. But the idea that we could, say, unleash LLM-powered chatbots on our website without first writing decent content for them to learn from, and without assurances on what happens to what our customers type into them, is both nonsensical and dangerous.