I was asked the other day for my quick view on the current buzz around AI, large language models, machine learning and so on.
Pasting here for posterity!
I think my slightly cautious view on LLMs etc is based on two things:
First, people are latching onto it as a way of leap-frogging the hard work, as if it will solve a load of problems without anyone having to put any effort in. It won’t. And it won’t stop you having to do all the other hard work that needs doing either. People’s expectations need managing around it.
Related to this: organisations with Word documents on their websites, or staff rekeying data from one system to another, should stop farting about thinking they can do AI and instead get the basics right first.
Second, it’s a very new technology with huge ethical implications, and nobody knows what they are doing. It’s a bit of a wild west out there. Many of the companies behind this tech, like OpenAI, which runs ChatGPT, are under no obligation to do the right thing, and are run and owned by some pretty shady individuals and corporations. Where are the controls? How do we know how the information we put into these things is recycled into the machine and churned out to other users?
None of this means don’t use it, and none of it means that LLMs etc aren’t very exciting and potentially game-changing. But the idea that we could, say, unleash LLM-powered chatbots on our website without first writing decent content for them to learn from, and without assurances about what happens to whatever our customers type into them, is both nonsensical and dangerous.