Hackers for government (and a dollop of open source)


A lovely story of sharing, reusing and creative hacking in government today. There’s a whole post to be written on hacker culture and why government needs people on the payroll who are able to program computers. You just can’t outsource this stuff. The first chapter of this book explains it far better than I ever could, as Andrea DiMaio puts it:

Innovative and courageous developers are what is needed to turn open government from theory to reality, freeing it from the slavery of external consultants, activists and lobbyists. People who work for government, share its mission, comply with its code of conduct, and yet bring a fresh viewpoint to make information alive, to effectively connect with colleagues in non-government organizations, to create a sense of community and transform government from the inside.

Anyway, whilst he was still at BIS, Steph Gray produced a nice little script to publicly publish various stats and metrics for the department’s website. A great example of having someone around who has both ideas and the ability to hack something together that puts them into action.

This was picked up during an exchange on Twitter by Stuart Harrison – webmaster at large for Lichfield District Council and another member of the league of extraordinary government hackers. Stuart asked nicely and was granted permission by Steph to take the code and improve it – never really an issue because the code was published under an open licence that encouraged re-use.

So Stuart did exactly that, and produced a page for his council that reports live web statistics. Even better, he then shared his code with everyone using a service called GitHub.

Two things come out of this very nice story.

Firstly, the importance, as mentioned above, of having people able to code working within government. Imagine Steph had had this idea but neither the skills himself nor access to them within his team to put it into action. He would have had to write a business case and a formal specification, and then tender for the work… frankly, it would never have happened.

Leading on from that, the second point is around the efficacy of sharing code under open source licenses. Steph would probably admit to not being the world’s most proficient hacker, but the important thing is that he was good enough to get the thing working. By then sharing his code, it was available for others to come in and improve it.

The focus on open source software and its use in government is often based around cost. In actual fact open source solutions can be every bit as expensive as proprietary ones, because the cost is not just in the licensing but in the hosting, the support and all the rest of it.

The real advantage in open source is access to the code, so people can understand and improve the software. But this advantage can only be realised if there are people within government who can do the understanding and improving.

After all, what’s the point of encouraging the use of open source software if the real benefit of open source is inaccessible? Having access to the code is pointless if you have to hire a consultant to do stuff with it for you every time.

So three cheers to Steph and Stuart for this little collaboration and lovely story of the benefits of sharing and hacking. Let’s make sure there can be more of them in the future by encouraging the art of computer programming, and of being open with the results.

Photo credit: Joshua Delaughter

Government and mobile apps

A really thorough and thought-provoking post from Public Strategist on the whole ‘should government develop iPhone apps?’ debate:

If government is in the business of service at all, it should be efficient, up to date, and sensitive to the needs and preferences of the users of the services. That doesn’t mean chasing every technological fad, but it does mean it was right for government to have web sites well before web access was anything close to ubiquitous, and exactly the same arguments now apply to the next generation of devices. It also doesn’t mean that because government communication is good, all government communications are good – and similarly, the argument that it may be good for government to create apps, does not at all mean that every app is a good one, still less that it is good only because it was created by government.

In the comments to that post, Steph Gray makes some equally astute points:

Of course Government should be developing smartphone apps (though probably not iPhone exclusively) as part of communication strategies to reach mobile audiences, and building on the ‘start simple’ approaches above. Frankly, it’s embarrassing to still be spending such large sums on direct mail to businesses, for example.

But government shouldn’t be a monopoly provider, crowding out commercial or voluntary alternatives, and it should probably focus on the areas with strong social benefit but limited commercial opportunity.

My view on this comes in two main flavours:

  1. It’s probably best to concentrate on making sure existing web properties render nicely on a decent range of mobile platforms, rather than focusing on native apps
  2. It’s also very difficult to justify developing what are still pretty niche native smartphone applications, which have the reputation (perhaps undeserved) of being the playthings of tech-obsessed media types

That isn’t to say there is no place for the native mobile application in government, just that government probably shouldn’t be building them itself.

Another problem arises if we take local government as an example – which, if we’re honest, is where most of the services that would be useful in a mobile application are based. If every council were to create an application, whether on iPhone, Android, Blackberry or whatever, this could lead to considerable fragmentation and irritation for the user. Quite a few of us use services in different areas – transport, education, waste and so on – and having a different app for each of them could get annoying.

The answer, I suspect, is in open data. In fact, mobile apps are one of the areas where the whole open data thing really makes sense to me (as explained here, I’m no data buff).

Here’s how it could work. Some people sit down and think about the services that would really benefit from having a mobile interface. Then the bits of government that want to be involved make sure they publish the relevant up-to-date data in a usable format.

It would then be up to the commercial sector to do something with that data. The obvious solution would be for someone to produce a platform that pulls in all the data, then spits it out as per user preference within one application – so I can have waste collection dates for Cambridge and bus timetables for Peterborough (say) all in the same app. The supplier then makes their money by selling the application, advertising, or some service arrangement. The important thing though is that government isn’t spending money on the development of, or having to sell, the thing.
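To make the idea concrete, here is a minimal sketch in Python of that aggregation step – the council names, services and data are all invented for illustration, and a real platform would be fetching published feeds rather than hard-coded dictionaries:

```python
# A toy version of the "one app, many councils" idea: each council
# publishes its service data in a common format, and the app merges
# just the feeds a given user has chosen. All data here is made up.

def merge_feeds(preferences, feeds):
    """Combine the chosen services from each council into one view."""
    view = {}
    for council, services in preferences.items():
        for service in services:
            view[(council, service)] = feeds[council][service]
    return view

# Imaginary published data from two councils
feeds = {
    "Cambridge": {"waste": "Collections every Tuesday"},
    "Peterborough": {"buses": "Route 1: every 20 minutes"},
}

# One user's preferences: waste dates from one council, buses from another
preferences = {"Cambridge": ["waste"], "Peterborough": ["buses"]}

print(merge_feeds(preferences, feeds))
```

The hard part, of course, is not the merging but getting every council to publish in the same format in the first place.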

I’m sure there are a lot of holes in this – not least in terms of how the app developer is ever going to make any money out of it, with your average app developer making just a few hundred pounds a year. However, if it is something citizens feel they want, or need, then perhaps the market will help decide how information is best delivered to people.

The need for data literacy

My attention was caught the other day by an article in The Register: “Data.gov.uk chief admits transparency concerns”

The head of the government’s website for the release of public sector data has said it is a challenge to ensure that users can understand the statistics.

Cabinet Office official Richard Stirling, who leads the team that runs Data.gov.uk, said that if he was at the Office for National Statistics he would have concerns about statistical releases and people making assumptions “that aren’t quite valid”.

The article was based on a podcast interview with Richard, and in typical journalistic style, took one part of his message and ignored the rest. To get the full picture, listen to the original audio.

Bearing all this in mind, though, I do think this is an important issue which probably needs to be explored more thoroughly than it has been so far.

To use myself as an example: I’m a geek, and I like computers, the internet *and* I find government interesting. I suspect this puts me into a very small percentage of the population. But even then – other than thinking open government data is almost certainly a good thing, and being able to reel off all the arguments around transparency and improving services – I don’t really understand or know how this happens. I am completely data illiterate.

This takes two forms. Firstly, knowing what data is, what format it is in and what can be done with it. Essentially a techie thing – fine, the data is there, but how do I do anything with it? This is probably the least important problem, because producing apps and mashups probably isn’t something that everybody needs to be able to do.

The second form is more important, though, and that is based in statistical awareness, an understanding of how data can be manipulated, and a grasp of the context within which the original data was published.

In other words, if I come across some nifty app using open government data, how do I know what biases the developer had? Who – if anyone – paid them to do this? How can I check that the results it produces are correct?

Because even though the original data is published openly, and I can check that, the chances are I will not understand the relationship between that and the nifty app in question.

There’s always the argument that it was ever thus – not that it is a particularly good argument – but when statistical analysis appears in a newspaper, for example, most people are aware of the biases of those publications.

Don’t get me wrong – releasing data is important. But the technical challenges are of course the easy bit, whether sticking a CSV file on a web page or creating an API. What I am talking about is ensuring a reasonable level of data literacy for people at the receiving end.
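To show just how easy that easy bit is: here is a sketch of consuming the sort of “spending over £500” CSV a council might stick on a web page, using nothing but Python’s standard library. The column names and figures are invented; in practice you would fetch the file from the council’s website rather than embed it as a string.

```python
import csv
import io

# An invented sample of a council "spending over £500" CSV release.
raw = """supplier,amount
Acme Paving Ltd,1200.50
Midland Print,640.00
"""

# DictReader turns each line into a dict keyed by the header row
rows = list(csv.DictReader(io.StringIO(raw)))
total = sum(float(r["amount"]) for r in rows)
print(total)  # 1840.5
```

Ten lines to read it – which rather underlines the point that the publishing technology is not where the difficulty lies.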

Hadley Beeman’s project could well be something that could answer some of these issues, by providing a space for data to be stored, converted to a common format and appropriately annotated (assuming I have understood it correctly!).

Another possibility is a book being written in Canada, or the Straight Statistics site which seems full of good information (thanks to Simon for the tip).

But none of these seems to scratch the data literacy itch, really. We need interesting, well written, engaging content to help people get to a level where they can understand the process and context of open data. Might it even involve e-learning? It could do.

Update on the Knowledge Hub


I spent an enjoyable afternoon at the advisory group for the Knowledge Hub (KHub) last Tuesday (sorry for the delay in writing this up…). Steve Dale chaired the day which featured a number of updates about the project, in terms of procurement and project management; technology platform and supplier; and communications and engagement.

Remember – the Knowledge Hub is the next generation of the Communities of Practice. Think of it as CoPs with an open API, plus some extra functionality.

The Knowledge Hub is going to be built by an outfit called PFIKS – who I must admit I had never heard of before. Their approach is heavily open source based and apparently they have about 80% of the Knowledge Hub requirements already working within their platform.

I’ve come away with a load of thoughts about this, most of which I have managed to summarise below.

1. Open platform

One of the strongest improvements the Knowledge Hub will bring, as compared to the current Communities of Practice platform, is the fact that it will be open. This means that developers will be able to make use of APIs to use Knowledge Hub content and data to power other services and sites.

One compelling example is that of intranets – a suggestion was made that it would be possible to embed Knowledge Hub content in a council intranet – without the user knowing where the information came from originally. Later in this post I’ll talk about the engagement challenges on this project, but perhaps creative use of the API will enable some of these issues to be sidestepped.

Another aspect of this is the Knowledge Hub app store. I’m not quite sure whether this will be available within the first release, but it should come along pretty soon afterwards – it’s something Steve Dale seems pretty excited about. Developers will be able to create apps which make use of content and data stored within the Knowledge Hub to do cool stuff. I’m guessing it will be a two way thing, so content etc externally stored can be pulled into the Knowledge Hub and mashed up with other content.

It’s certainly something for Learning Pool, and I guess other suppliers to local gov, to consider – how can our tools and content interact with the Knowledge Hub?

2. Open source

The open source approach is to integrate various components into a stable, cohesive platform. This appears to be based on the Liferay publishing platform, with other bits added in to provide extra functionality – such as DimDim, for example, for online meetings and webinars; and Flowplayer for embedding video.

On the backend, the open source technology being used includes the Apache Solr search platform which is then extended with Nutch; and Carrot2, which clusters collections of documents – such as the results of a search query – into thematic categories. I think it is fair to say that the search bit of the KHub should be awesome.
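For the curious, Solr exposes its search through a plain HTTP “select” handler, so a KHub search is, underneath, just a query URL like the one built below. The host, core name and field names here are my assumptions for illustration – the actual KHub schema wasn’t shared at the meeting:

```python
from urllib.parse import urlencode

# Build the kind of query URL a Solr-backed search would issue.
# The base URL, core name ("khub") and fields are illustrative only.
def solr_query_url(base, text, rows=10):
    params = {
        "q": f"title:{text} OR body:{text}",  # search two assumed fields
        "rows": rows,                          # how many results to return
        "wt": "json",                          # response format
    }
    return f"{base}/select?{urlencode(params)}"

url = solr_query_url("http://localhost:8983/solr/khub", "localism", rows=5)
print(url)
```

Nutch handles the crawling that feeds documents into the index, and Carrot2 then clusters whatever this kind of query returns into its thematic groupings.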

What is also cool is that PFIKS publish their code to integrate all this stuff as open source as well – so not only are they using open source, they are also contributing back into the community. This is good.

Open source, as I have written before, is not as simple a thing to understand as it might first appear. There are numerous complications around licensing and business models that have to be considered before a project commences. It certainly isn’t the case that by using open source tools you can just rely on the community to do stuff for you for free – which seems a common misunderstanding.

Still, from the early exchanges, it appears that PFIKS get open source and are taking an active involvement in the developer communities that contribute to their platform. Hopefully the Knowledge Hub will end up as being a great example of collaboration between government, a supplier, and the open source community.

3. Data

One of the original purposes of the Knowledge Hub was that it would be a tool to help local authorities share their data. This was a couple of years ago, when Steve first started talking about the project, when data.gov.uk didn’t exist and the thought of publishing all purchases over £500 would have been anathema.

It would appear that the data side of things is taking a bit of a back seat at the moment, with the revamp of the communities taking centre stage. My understanding up until this point was that the Knowledge Hub would act as a repository for local government data to be stored and published. It would appear from some of the responses at the meeting that isn’t going to be the case now.

This is, in many ways, probably a good thing, as authorities like Lincoln, Warwickshire and Lichfield (amongst others) are proving that publishing data isn’t actually that hard.

However, all those authorities are those with really talented, and data-savvy people working on their web and ICT stuff. Are all councils that lucky? Perhaps not.

Hadley Beeman’s proposed project seems to be one that pretty much does what I thought the Knowledge Hub might do, and so again, maybe a good reason for the KHub not to do it.

When a question was asked about data hosting on KHub, the response was that it could be possible on a time-limited basis. In other words (I think), you could upload some data, mash it up with something else on the KHub, then pull it out again. Does that make sense? I thought it did, but now I have typed it up it seems kind of stupid. I must have got it wrong.

4. Engagement

You could count on one hand the number of people at the meeting who actually came from real local authorities, which for an advisory group is slightly worrying – not least because this was the big ‘reveal’ when we found out what the solution was going to be and who the supplier was. Then again, maybe that’s not of huge interest to the sector?

Anyway, it’s fair to say that there hasn’t been a huge level of interest from the user side of things throughout this project. Again, maybe that’s fair enough – perhaps in this age of austerity, folk at the coal face need to be concentrating on less abstract things. But now the KHub is becoming a reality I think it will become increasingly important to get people from the sector involved in what is going on to ensure it meets their needs and suits the way they work. By the sound of the work around the ‘knowledge ecology’ that Ingrid is working on, plenty of effort is going to be put in this direction.

It will also be vital for the Knowledge Hub to have some high quality content to attract people into the site when it first launches, to encourage engagement across the sector.

For all the talk of open APIs and the Knowledge Hub being a platform as much as a website, it still figures that for it to work, people need to actually take a look at it now and again. To drag eyeballs in, there needs to be some great content sat there waiting for people to find and be delighted by.

Much of this could be achieved by the transfer of the vast majority of the existing content on the Communities of Practice. There’s an absolute tonne of great content on there, and because of the way the CoPs are designed, quite a lot of it is locked away in communities that a lot of people don’t have access to. By transferring all the content across and making it more findable, the whole platform will be refreshed.

5. Fragmentation

The issue of fragmentation occurred to me as the day went on, and in many ways it touches on all of the points above. For while the Knowledge Hub both pulls in content from elsewhere and makes its own content available for other sites, there are still going to be outposts here and there which just don’t talk a language the KHub understands or indeed any language at all.

It’ll be great for dorks like me to automatically ping my stuff into the Knowledge Hub, whether posts from this blog, or my Delicious bookmarks, shared Google Reader items, or videos I like. But those sites which publish stuff without feeds, or in formats the KHub can’t read, will stay stranded outside it.

One striking example of this is the Knowledge Forums on the LG Improvement and Development website, which have continued despite the existence of the functionally richer Communities of Practice. My instinct would always have been to close these forums and port their content to the CoPs, to reduce both the fragmentation of content and the confusion for potential users.

What about the content and resources on the rest of the LG Improvement and Development website – will that continue to exist outside of the rest of the platform, or will it be brought inside the KHub?

There are plenty of other examples of good content existing in formats which can’t easily be reused in the KHub, and for it to be the window on local government improvement, it’s going to need to drag this stuff in. Maybe a technology like ScraperWiki could help?
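The scraping approach is less exotic than it sounds. Here is a toy version of what a ScraperWiki-style scraper does – pulling post titles out of a forum page’s HTML so the content can be re-published somewhere reusable. The markup below is invented; a real forum’s HTML would need its own matching rules:

```python
from html.parser import HTMLParser

# Extract the text of elements marked up as post titles from a page.
class TitleScraper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.in_title = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        # The class name "post-title" is an invented example
        if tag == "h2" and ("class", "post-title") in attrs:
            self.in_title = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_title = False

    def handle_data(self, data):
        if self.in_title:
            self.titles.append(data.strip())

page = '<h2 class="post-title">Shared services</h2><p>body</p>'
scraper = TitleScraper()
scraper.feed(page)
print(scraper.titles)  # ['Shared services']
```

Fragile, certainly – scrapers break whenever the page layout changes – but a pragmatic way to drag locked-away content into somewhere it can actually be found.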

AVdebate

I’m rather interested in the referendum we are going to get next May in the UK about changing our voting system.

It occurs to me that it isn’t an issue I have a particularly strong understanding of, and I’m sure that’s the case for a few other people as well.

So, with the help of friends like Anthony and Catherine, I’ve kicked off AVdebate – which will be an online space for constructive, deliberative debate and learning about voting reform, which will hopefully help folk make up their minds.

There’s almost nothing to see at the moment, and things are really in the early planning stage. But if you would like to get involved, or just keep up to date, sign up to the Google Group. You can also follow the #avdebate hashtag on Twitter.

More soon.