Is Search Broken?

Tom Foremski over at Silicon Valley Watcher points out the things that annoy him about search:

– There are many publishers who try to make sure their headlines catch the attention of the search engines rather than the attention of readers. The same is true for content: editors increasingly optimize it for the search engines rather than the readers.

– Why should I have to tag my content, and tag it according to the specific formats that Technorati and other search engines recommend? Aren’t they supposed to do that?

– Google relies on a tremendous amount of user-helped search. Websites are encouraged to create site maps and leave the XML file on their server so that the GOOGbot can find its way around.

– The search engines ask web site owners to mask off parts of their sites that are not relevant, such as the comment sections, with no-follow and no-index tags.

– Web sites are encouraged to upload their content into the Google Base database. Nice: it doesn’t even need to send out a robot to index the site.

– Every time I publish something, I send out notification “pings” to dozens of search engines and aggregators. Again, they don’t have to send out their robots to check if there is new content.

– Google asks users to create collections of sites within specific topics so that other users can use them to find specific types of information.

– The popularity of blogs is partly based on the fact that they find lots of relevant links around a particular subject. Blogs are clear examples of people-powered search services.
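To make the site-map point above concrete: a sitemap is just a plain XML file following the sitemaps.org protocol, left on the server for the crawler to fetch. A minimal sketch, with a placeholder URL and dates:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.com/some-post</loc>
    <lastmod>2007-01-15</lastmod>
    <changefreq>weekly</changefreq>
  </url>
</urlset>
```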
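Likewise, the “no-follow and no-index tags” used to mask off comment sections are, in practice, a page-level robots meta tag plus a rel attribute on individual links. A sketch with placeholder markup:

```html
<!-- Page level: ask robots not to index this page or follow its links -->
<meta name="robots" content="noindex, nofollow">

<!-- Link level: rel="nofollow" on a single link, as commonly
     applied to links left in comment sections -->
<a href="http://example.com/commenter-site" rel="nofollow">a commenter’s link</a>
```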
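And the notification “pings” are typically XML-RPC calls to a weblogUpdates.ping endpoint. A rough Python sketch that builds the payload with the standard library; the blog name, URL, and endpoint are placeholders, not real services:

```python
import xmlrpc.client

# Build the standard weblogUpdates.ping request body, which takes
# the blog's name and its URL as parameters.
payload = xmlrpc.client.dumps(
    ("My Blog", "http://example.com/"),
    methodname="weblogUpdates.ping",
)
print(payload)

# Actually sending the ping would look something like this
# (endpoint URL is a placeholder):
# server = xmlrpc.client.ServerProxy("http://rpc.example.com/ping")
# server.weblogUpdates.ping("My Blog", "http://example.com/")
```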

It’s my view that web search has come as far as it can based on algorithms and sheer grunt alone. There needs to be a human element in terms of whether or not a result is actually a) relevant and b) useful to the searcher.

This is the thinking behind the Search Wikia project being run by Jimmy Wales of Wikipedia and Wikia. I wrote a little about this on my personal blog here and here.

It’s also why I am working on a human-generated ‘search engine’. The aim will be for people to submit links they have found useful, tag and categorise them, and allow others to vote them as useful. This database of links will then be searchable, producing fewer results, but ones which have been recommended by others. I think it is going to be really useful, but it will need the commitment of other people to make it work.
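As a sketch of how such a link database might hang together — the names, fields, and functions here are purely illustrative, not any actual implementation:

```python
from dataclasses import dataclass, field

# Hypothetical model: a submitted link with tags and a vote count.
@dataclass
class Link:
    url: str
    title: str
    tags: set = field(default_factory=set)
    votes: int = 0

links = []

def submit(url, title, tags):
    """Add a user-submitted link to the database."""
    link = Link(url, title, set(tags))
    links.append(link)
    return link

def vote(link):
    """Record that another user found this link useful."""
    link.votes += 1

def search(tag):
    """Return links matching a tag, most-recommended first."""
    return sorted((l for l in links if tag in l.tags),
                  key=lambda l: l.votes, reverse=True)

# Example usage with placeholder data:
a = submit("http://example.com/a", "Council blogging guide", ["local-gov"])
b = submit("http://example.com/b", "LGSearch tips", ["local-gov", "search"])
vote(b)
vote(b)
vote(a)
results = search("local-gov")  # b first (2 votes), then a (1 vote)
```

The point of the vote-ordered search is exactly the trade-off described above: fewer results, but each one backed by a human recommendation.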

Watch this space.

Feedable

Feedable is a nice online news aggregator. As well as having your own feeds, it also provides information on hot topics of conversation in a variety of subjects.

Thanks to Steve Rubel for pointing it out.


FeedDemon/NewsGator Problem


Hmmm. FeedDemon has started playing up all of a sudden. It claimed not to be able to connect to my NewsGator account a couple of times, despite the credentials being correct, and refused to do anything.

So, I removed the sync between the two and gave it another go. Now all sorts of old feeds and posts are popping up from nowhere. Most annoying.

Screencasting

Thanks to Steve Dale for pointing me in the direction of Blueberry Software and their BB Flashback range of screencasting tools. Steve has just finished a screencast of the IDeA’s Communities of Practice platform using the full-blown version of BB Flashback.

At £99, that’s too steep for me. But I am trialling the ‘Express’ version, which will cost £20, and it seems to do everything I want it to. I’m putting together some videos showing how the various bits of LGSearch work. My only issue at the moment is how to display and host the video. I’m experimenting with self-hosted .swf files and uploading AVIs to YouTube.

New Wiki Section

I’ve started a new page on the LGNewMedia wiki, called the Local Government Blogger Directory. I’m working from a pretty liberal definition of what ‘local government’ means so feel free to chuck in anything that comes even close to the LGNewMedia sphere of interest!