Open source and government

Another post I have been sitting on and chewing over for a little while…

Charles Arthur in the Guardian highlighted an interesting area of discussion in the use of open source in government a little while ago. He reports on the views of Liam Maxwell, the councillor responsible for IT policy at the Royal Borough of Windsor & Maidenhead, who’d like to see a move away from proprietary software such as Microsoft Office within local authorities.

Cllr Maxwell would like to see the Cabinet Office mandate the use of the Open Document file format at all levels of government, in place of the file formats used by Microsoft’s products and by other systems in use in the public sector.

Cllr Maxwell states:

If one council goes to a service provider such as Capita and asks for a change to its Revenues and Benefits system so it works with OpenOffice and ODF instead of Microsoft Office, Capita will tell them to go away. But if government mandates it, then Capita or any of these other companies that do this work for councils could get it done in six months. It’s a dysfunctional market because it’s set by standards which are set at the centre.

A bit of background for the non-dorks out there. The Open Document Format (ODF) is a non-proprietary file standard for use in office productivity suites, which include things like word processors, spreadsheets and slideshow presentations.
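For the extra-dorky: the ‘non-proprietary’ bit is quite literal. An ODF file is just a ZIP archive of plain XML that any standard tool can unpick. A quick Python sketch (with a drastically simplified content.xml – real ODF documents carry namespaces, a manifest and more):

```python
import io
import zipfile

# An ODF document is a ZIP archive of XML files. Build a minimal
# .odt-style package in memory to illustrate the layout.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as odt:
    # The mimetype entry tells consuming apps what kind of document this is
    odt.writestr("mimetype", "application/vnd.oasis.opendocument.text")
    # content.xml holds the document body as plain XML (simplified here)
    odt.writestr(
        "content.xml",
        "<office:document-content>Hello, council</office:document-content>",
    )

# Any ZIP-aware tool can open the package and read the XML inside
with zipfile.ZipFile(buf) as odt:
    names = odt.namelist()
    body = odt.read("content.xml").decode()

print(names)  # ['mimetype', 'content.xml']
print(body)
```

No vendor lock-in there: lose the office suite and you can still get your words back with a ZIP utility and a text editor.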

The flagship software to use ODF is OpenOffice.org, as alluded to by Cllr Maxwell. OpenOffice.org was developed predominantly by Sun Microsystems as an open source office suite, which then fed into their proprietary offering, StarOffice.

Now, I am a fan of free and open source software and I try to use it wherever I can. But there is so much misunderstanding out there about the benefits – especially around cost – that I do worry about whether people’s minds are filled with free-as-in-beer.

Here are some of the issues with this particular proposal. I do want to make clear that none of these are insurmountable, nor am I in the business of spreading fear, uncertainty and doubt. I’m certainly no apologist for Microsoft, their software or their business practices. I want government to make better use of open source, just that it needs to do so with its eyes open.

The idea of the Cabinet Office mandating use of ODF sounds good, but it is naive to think this would happen for free – there’s no way those big boys would do that much work and not make customers pay for it somewhere down the line.

Then there is training – the idea that the majority of council workers could use OpenOffice as well as they use MS Office right away is rather optimistic. In my experience, folks can’t even cope with upgrades between versions of Word, let alone a whole new system! The costs need to be added in: training, writing documentation, loss of productivity while people figure out how to do stuff, or what they can’t do any more that they used to, and so on.

Next up with OpenOffice is the Oracle issue – they’ve already made significant changes to OpenSolaris since they bought Sun and there is no guarantee they won’t do the same to OpenOffice. Part of the pro-open source argument is sustainability, but if the sponsoring corporation (which owns the IP and drives development) doesn’t want to know then it would be very hard in practice for the community to get things up and running again.

(Actually, we kind of know what is happening here, as a separate organisation appears to have been formed to manage a fork of OpenOffice.org called LibreOffice. Confusion abounds!)

Next, support. Where is the organisation that can provide support to large organisations when it comes to switching over office suites? It would drown a council ICT department and I can’t think off the top of my head of any company providing this sort of service at scale for it to be outsourced to.

Finally – do we even know if ODF is better than the current alternatives? Where’s the benefit for the switch?

Now what I have written sounds like a massive anti-open source rant, but it isn’t. It’s just highlighting some of the issues. I suspect, for example, that the total cost of ownership of an open source ICT solution – certainly on the desktop – would be roughly the same as the Microsoft (or whoever) one, especially when you take into account select agreements etc.

The arguments in favour of open source need to be on the basis that the software is better, more reliable and stable, quicker and more feature-rich, and that it works for the government context – adapted for the sector in a cost effective, maintainable and supportable manner.

This brings in a number of issues: business models for suppliers, procurement, understanding of licensing, copyright and IP, and having actual coding knowledge within organisations.

Learning Pool is also a good example of taking open source, contextualising it, then implementing, supporting and maintaining it. We were recently asked to come up with a few bullet points outlining our approach and experiences, which I drafted up as:

  • There are cost savings to be made with open source, but only when the vendor can provide a genuinely comprehensive service that includes implementation and support as well as code. Otherwise the total cost of ownership can spiral.
  • The argument for open source must be based on better, not cheaper, software. We benefit from hundreds of people tracking bugs, developing plugins and testing betas which helps give our product the edge over proprietary rivals.
  • The flexibility of cloud based applications saves significant amounts of time and therefore money in providing upgrades and new features to customers – who don’t have the bother of installing patches etc.
  • Building sharing and collaboration between our customers into the business model has achieved far greater cost savings than either the open source foundation of our software, or the cloud based delivery of it. The fact that we don’t just tolerate, but rather encourage, our customers to share and redistribute resources means government is redesigning fewer wheels every day.

Having said that, we use the LAMP stack, which is pretty much a won argument for open source in many ways; it’s other technology, especially on the desktop, where the debate needs to be refined and informed.

Discussions around open source use in government have to be based on pragmatism: is the OSS solution as good as the competition? Is it compatible with other systems? What are the training overheads? What are the support, maintenance and development arrangements?

The truth is that replacing enterprise IT systems with open source alternatives is a lot more complicated than deciding to build a new website in WordPress. I quickly Googled for ‘open source ERP’ (ERP is Enterprise Resource Planning, those big internal systems made by people like SAP and Oracle, that run HR, finance, CRM and everything else) this afternoon, and the top result was something called Openbravo. I tweeted about it, and none of my contacts – even the open source IT analyst folks – had even heard of it.

It’s probably not surprising that people procuring this stuff run into the arms of the traditional vendors and system integrators.

Hackers for government (and a dollop of open source)

A hacker

A lovely story of sharing, reusing and creative hacking in government today. There’s a whole post to be written on hacker culture and why government needs people who are able to program computers on the payroll. You just can’t outsource this stuff. The first chapter of this book explains it far better than I ever could, as Andrea DiMaio puts it:

Innovative and courageous developers are what is needed to turn open government from theory to reality, freeing it from the slavery of external consultants, activists and lobbyists. People who work for government, share its mission, comply with its code of conduct, and yet bring a fresh viewpoint to make information alive, to effectively connect with colleagues in non-government organizations, to create a sense of community and transform government from the inside.

Anyway, whilst he was still at BIS, Steph Gray produced a nice little script to publicly publish various stats and metrics for the department’s website. A great example of having someone around who has both ideas and the ability to hack something together that puts them into action.
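I don’t know exactly how Steph’s script worked under the hood, but the idea is simple enough to sketch. Something along these lines – with made-up numbers standing in for whatever analytics source the real thing drew on – would publish the stats in a machine-readable form:

```python
import json

# Hypothetical sketch: in reality the raw numbers would come from an
# analytics API, not a hard-coded dict.
def publish_stats(raw_stats):
    """Turn raw page-view counts into a public, machine-readable summary."""
    total = sum(raw_stats.values())
    # Pick the three busiest pages for the headline summary
    top_pages = sorted(raw_stats.items(), key=lambda kv: kv[1], reverse=True)[:3]
    return json.dumps({
        "total_views": total,
        "top_pages": [{"path": p, "views": v} for p, v in top_pages],
    }, indent=2)

sample = {"/home": 12000, "/consultations": 3400, "/contact": 800, "/news": 5600}
print(publish_stats(sample))
```

The point is less the code than the output: once the numbers are published as JSON rather than buried in a PDF, anyone else can build on them – which is exactly what happened next.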

This was picked up during an exchange on Twitter by Stuart Harrison – webmaster at large for Lichfield District Council and another member of the league of extraordinary government hackers. Stuart asked nicely and was granted permission by Steph to take the code and improve it – never really an issue because the code was published under an open licence that encouraged re-use.

So Stuart did exactly that, and produced a page for his council that reports live web statistics. Even better, he then shared his code with everyone using a service called GitHub.

Two things come out of this very nice story.

Firstly, the importance as mentioned above of having people able to code working within government. Suppose Steph had had this idea but had neither the skills himself nor access to them within his team to implement it. He would have had to write a business case, and a formal specification, and then tender for the work… it would never have happened, frankly.

Leading on from that, the second point is around the efficacy of sharing code under open source licenses. Steph would probably admit to not being the world’s most proficient hacker, but the important thing is that he was good enough to get the thing working. By then sharing his code, it was available for others to come in and improve it.

The focus on open source software and its use in government is often based around cost. In actual fact open source solutions can be every bit as expensive as proprietary ones, because the cost is not just in the licensing but in the hosting, the support and all the rest of it.

The real advantage in open source is access to the code, so people can understand and improve the software. But this advantage can only be realised if there are people within government who can do the understanding and improving.

After all, what’s the point of encouraging the use of open source software if the real benefit of open source is inaccessible? Having access to the code is pointless if you have to hire a consultant to do stuff with it for you every time.

So three cheers to Steph and Stuart for this little collaboration and lovely story of the benefits of sharing and hacking. Let’s make sure there can be more of them in the future by encouraging the art of computer programming, and of being open with the results.

Photo credit: Joshua Delaughter

Update on the Knowledge Hub

Knowledge Hub

I spent an enjoyable afternoon at the advisory group for the Knowledge Hub (KHub) last Tuesday (sorry for the delay in writing this up…). Steve Dale chaired the day which featured a number of updates about the project, in terms of procurement and project management; technology platform and supplier; and communications and engagement.

Remember – the Knowledge Hub is the next generation of the Communities of Practice. Think of it as CoPs with an open API, plus some extra functionality.

The Knowledge Hub is going to be built by an outfit called PFIKS – who I must admit I had never heard of before. Their approach is heavily open source based and apparently they have about 80% of the Knowledge Hub requirements already working within their platform.

I’ve come away with a load of thoughts about this, most of which I have managed to summarise below.

1. Open platform

One of the strongest improvements that the Knowledge Hub will bring as compared to the current Communities of Practice platform is the fact that it will be open. This means that developers will be able to make use of APIs to use Knowledge Hub content and data to power other services and sites.

One compelling example is that of intranets – a suggestion was made that it would be possible to embed Knowledge Hub content in a council intranet – without the user knowing where the information came from originally. Later in this post I’ll talk about the engagement challenges on this project, but perhaps creative use of the API will enable some of these issues to be sidestepped.
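To make that concrete – and bearing in mind the API doesn’t exist yet, so the payload shape and every name below is invented for illustration – the intranet scenario might look something like this:

```python
import json

# Entirely hypothetical: the Knowledge Hub API is not yet published, so
# this field structure is a guess at what such a payload could look like.
def render_for_intranet(api_response):
    """Turn a (pretend) KHub API payload into an HTML fragment that an
    intranet page could drop in, with no visible KHub branding."""
    items = json.loads(api_response)["items"]
    rows = "".join(f"<li>{item['title']}</li>" for item in items)
    return f"<ul class='khub-embed'>{rows}</ul>"

# A canned stand-in for what the API might return
fake_response = json.dumps({"items": [
    {"title": "Shared services case study"},
    {"title": "Adult social care benchmarking"},
]})
print(render_for_intranet(fake_response))
```

The user just sees a list of useful resources on their own intranet; the plumbing back to the Knowledge Hub is invisible, which is rather the point.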

Another aspect of this is the Knowledge Hub app store. I’m not quite sure whether this will be available within the first release, but it should come along pretty soon afterwards – it’s something Steve Dale seems pretty excited about. Developers will be able to create apps which make use of content and data stored within the Knowledge Hub to do cool stuff. I’m guessing it will be a two way thing, so content etc externally stored can be pulled into the Knowledge Hub and mashed up with other content.

It’s certainly something for Learning Pool, and I guess other suppliers to local gov, to consider – how can our tools and content interact with the Knowledge Hub?

2. Open source

The open source approach is to integrate various components into a stable, cohesive platform. This appears to be based on the Liferay publishing platform, with other bits added in to provide extra functionality – such as DimDim, for example, for online meetings and webinars; and Flowplayer for embedding video.

On the backend, the open source technology being used includes the Apache Solr search platform which is then extended with Nutch; and Carrot2, which clusters collections of documents – such as the results of a search query – into thematic categories. I think it is fair to say that the search bit of the KHub should be awesome.
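As a rough illustration of what sits on top of that stack: Solr exposes a standard HTTP ‘select’ endpoint for queries, so building a search is largely a matter of constructing the right URL. The host and core names below are placeholders, since the KHub’s actual setup is unknown:

```python
from urllib.parse import urlencode

# Sketch only: the KHub's real Solr endpoint, core name and schema are
# unknown, so everything here is a placeholder.
def solr_query_url(base, core, text, rows=10):
    """Build a standard Solr select URL for a free-text search."""
    params = urlencode({"q": text, "rows": rows, "wt": "json"})
    return f"{base}/solr/{core}/select?{params}"

url = solr_query_url("http://search.example.gov.uk", "khub", "adult social care")
print(url)
```

Carrot2 would then sit on the results of a query like this, grouping the hits into themed clusters rather than a flat ranked list.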

What is also cool is that PFIKS publish their code to integrate all this stuff as open source as well – so not only are they using open source, they are also contributing back into the community. This is good.

Open Source, as I have written earlier, is not as simple a thing to understand as it might first appear. There are numerous complications around licensing and business models that have to be considered before a project commences. It certainly isn’t the case that by using open source tools you can just rely on the community to do stuff for you for free – which seems a common misunderstanding.

Still, from the early exchanges, it appears that PFIKS get open source and are taking an active involvement in the developer communities that contribute to their platform. Hopefully the Knowledge Hub will end up being a great example of collaboration between government, a supplier, and the open source community.

3. Data

One of the original purposes of the Knowledge Hub was that it would be a tool to help local authorities share their data. This was a couple of years ago, when Steve first started talking about the project, when data.gov.uk didn’t exist and the thought of publishing all purchases over £500 would have been anathema.

It would appear that the data side of things is taking a bit of a back seat at the moment, with the revamp of the communities taking centre stage. My understanding up until this point was that the Knowledge Hub would act as a repository for local government data to be stored and published. It would appear from some of the responses at the meeting that isn’t going to be the case now.

This is, in many ways, probably a good thing, as authorities like Lincoln, Warwickshire and Lichfield (amongst others) are proving that publishing data isn’t actually that hard.

However, all those authorities are those with really talented, and data-savvy people working on their web and ICT stuff. Are all councils that lucky? Perhaps not.

Hadley Beeman’s proposed project seems to be one that pretty much does what I thought the Knowledge Hub might do, and so again, maybe a good reason for the KHub not to do it.

When a question was asked about data hosting on KHub, the response was that it could be possible on a time-limited basis. In other words (I think), you could upload some data, mash it up with something else on the KHub, then pull it out again. Does that make sense? I thought it did, but now I have typed it up it seems kind of stupid. I must have got it wrong.

4. Engagement

You could count the number of people who actually came from real local authorities on one hand at the meeting, which for an advisory group is slightly worrying – not least because this was the big ‘reveal’ when we found out what the solution was going to be and who the supplier was. Actually – maybe that’s not of huge interest to the sector?

Anyway, it’s fair to say that there hasn’t been a huge level of interest from the user side of things throughout this project. Again, maybe that’s fair enough – perhaps in this age of austerity, folk at the coal face need to be concentrating on less abstract things. But now the KHub is becoming a reality I think it will become increasingly important to get people from the sector involved in what is going on to ensure it meets their needs and suits the way they work. By the sound of the work around the ‘knowledge ecology’ that Ingrid is working on, plenty of effort is going to be put in this direction.

It will also be vital for the Knowledge Hub to have some high quality content to attract people into the site when it first launches, to encourage engagement across the sector.

For all the talk of open APIs and the Knowledge Hub being a platform as much as a website, it still figures that for it to work, people need to actually take a look at it now and again. To drag eyeballs in, there needs to be some great content sat there waiting for people to find and be delighted by.

Much of this could be achieved by the transfer of the vast majority of the existing content on the Communities of Practice. There’s an absolute tonne of great content on there, and because of the way the CoPs are designed, quite a lot of it is locked away in communities that a lot of people don’t have access to. By transferring all the content across and making it more findable, the whole platform will be refreshed.

5. Fragmentation

The issue of fragmentation occurred to me as the day went on, and in many ways it touches on all of the points above. For while the Knowledge Hub both pulls in content from elsewhere and makes its own content available for other sites, there are still going to be outposts here and there which just don’t talk a language the KHub understands or indeed any language at all.

It’ll be great for dorks like me to automatically ping my stuff into the Knowledge Hub, whether posts from this blog, or my Delicious bookmarks, shared Google Reader items, or videos I like. But those sites which publish stuff without feeds or open formats will be left out in the cold.

One striking example of this is the Knowledge Forums on the LG Improvement and Development website, which have continued despite the existence of the functionally richer Communities of Practice. My instinct would always have been to close these forums and port them to the CoPs, to reduce both the fragmentation of content and the confusion for potential users.

What about the content and resources on the rest of the LG Improvement and Development website – will that continue to exist outside of the rest of the platform, or will it be brought inside the KHub?

There are plenty of other examples of good content existing in formats which can’t easily be reused in the KHub, and for it to be the window on local government improvement, it’s going to need to drag this stuff in. Maybe a technology like ScraperWiki could help?