Anti-social bookmarking (Bye delicious, Hello TiddlyWiki)

2011-01-22

The announcement by Yahoo! that it will shut down the social bookmarking site delicious (or maybe not) raises the question of what to do with my bookmarks. Since the “social” part didn’t interest me but the tagging and central access did, I’m exporting it to a private TiddlyWiki, a browser-based wiki implemented in JavaScript that saves itself as a single local file.

Here is what I’ve done to create the local wiki and import the links from delicious. It’s very simple:

  1. Save a copy of the TiddlyWiki
  2. Export the bookmarks as XML using the developer API
    curl -u <username> https://api.del.icio.us/v1/posts/all > delicious.xml
    (It will prompt for a password)
  3. Transform the XML bookmarks into XHTML “tiddlers” using an XSL stylesheet. I use xsltproc and twdelicious-url.xsl, my variation (with the link in the tiddler body) of this stylesheet from Paul S. Downey.
    xsltproc --output delicious.tiddlers.xhtml --novalid twdelicious-url.xsl delicious.xml
  4. Load the wiki in a browser and import the tiddlers from the “Import” link in the “backstage” menu.

I now have a local copy, which is good, but no access to it from other locations. The options are to upload it and keep it in sync, or to run a server-side implementation of the TiddlyWiki. I chose to check the file into a git repository and sync that repo with the one on my web server, but Ben Gillies has instructions for running the Python TiddlyWeb reference implementation as a CGI, so the bar is fairly low if I change my mind.


Comment [2]

2011-03-05 04:34 , dalker

The idea to go back to TiddlyWiki is great!

I tried to do this, but sadly it’s presently impossible to use the XML-generating API from delicious (it’s down, plus I have a delicious account authenticated via Yahoo, itself via Google, so this is complicated). Do you know of any way to generate the initial XML starting from the simple HTML file delicious creates as a standard export/backup feature?

2011-03-05 12:40 , Ross


Checking the API URL, it’s currently working for me. It does throttle if you hit it too many times in short succession.

You’re right that the API is lacking. I don’t know of a way to get another user’s public bookmarks with it, or to pull them without authentication.

If you have the html, you can easily convert it to XML. I’d try HTML Tidy and xsltproc or just script something up in Perl or Python.
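As a sketch of the “script something up in Python” route: assuming the delicious HTML export is a Netscape-style bookmark file where each link is an <A> element carrying HREF, ADD_DATE, and TAGS attributes, something like the following would emit XML in roughly the shape the posts API returns. The function name html_to_posts_xml is mine, and note that ADD_DATE is epoch seconds rather than the API’s ISO timestamps, so the XSL stylesheet may need a small adjustment.

```python
from html.parser import HTMLParser
from xml.sax.saxutils import quoteattr

class BookmarkParser(HTMLParser):
    """Collect <A> links from a Netscape-style bookmark export."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._current = None

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            a = dict(attrs)  # attribute names arrive lowercased
            self._current = {"href": a.get("href", ""),
                             "time": a.get("add_date", ""),
                             "tag": a.get("tags", ""),
                             "desc": ""}

    def handle_data(self, data):
        if self._current is not None:
            self._current["desc"] += data  # link text becomes the description

    def handle_endtag(self, tag):
        if tag == "a" and self._current is not None:
            self.links.append(self._current)
            self._current = None

def html_to_posts_xml(html_text):
    """Re-emit the links as <posts><post .../></posts>, mimicking the API shape."""
    parser = BookmarkParser()
    parser.feed(html_text)
    posts = ["<post href=%s description=%s tag=%s time=%s />" % (
                 quoteattr(l["href"]), quoteattr(l["desc"]),
                 quoteattr(l["tag"]), quoteattr(l["time"]))
             for l in parser.links]
    return "<posts>\n" + "\n".join(posts) + "\n</posts>\n"
```

From there the xsltproc step above should work unchanged, modulo the timestamp format.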

If you don’t have the URLs, there are a few things you can do. If there are only a few, you could abuse the RSS feed (<USER>?count=<NUM>). It seems to return a maximum of 100. If there are many, you’ll have to grab a page at a time. The URLs for the pages are simple, <USER>/?page=<NUM>, so you might write a little script to slurp the links as HTML and dump them.
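A little slurp script along those lines might look like this. The base URL is a placeholder you’d fill in yourself, and the quick-and-dirty regex just grabs every outbound http link on a page, so expect to filter out site chrome by hand.

```python
import re
import urllib.request

def page_links(html_text):
    """Pull absolute http(s) hrefs out of one page of rendered bookmarks."""
    return re.findall(r'href="(http[^"]+)"', html_text)

def slurp(base_url, pages):
    """Fetch <base_url>/?page=1..N and collect the links found on each page."""
    seen = []
    for n in range(1, pages + 1):
        with urllib.request.urlopen("%s/?page=%d" % (base_url, n)) as resp:
            seen.extend(page_links(resp.read().decode("utf-8", "replace")))
    return seen
```

Dump the result one URL per line and you’re back to something the HTML-to-XML conversion above can chew on.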

- Ross

Commenting is closed for this article.