Fallback scraper?

Issue #1316 open
Former user created an issue

Hi, would it be possible to have a "fallback" scraper that kicks in when you're trying to add something from a page that isn't supported?

It could just look for a DOI and then get the info from some database (Pubget or similar) instead. At the moment it seems that when you try to add something by DOI, it just calls the journal page, so if that page isn't supported you are stuck. It would be a big improvement if it fell back to some database instead.
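Just to illustrate what I mean, something along these lines (only a sketch; I'm using CrossRef's public DOI lookup here as an example of "some database", the exact service and field names don't matter):

```python
# Sketch of a DOI-based fallback: find a DOI in the page text,
# then ask a bibliographic database (CrossRef here) for the metadata.
import re
import json
from urllib.request import urlopen, Request

DOI_PATTERN = re.compile(r'10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def fallback_scrape(page_html):
    """Return basic metadata for the first DOI found on the page, or None."""
    match = DOI_PATTERN.search(page_html)
    if not match:
        return None  # no DOI on the page, nothing to fall back to
    doi = match.group(0).rstrip('.,;')  # trim trailing punctuation
    req = Request("https://api.crossref.org/works/" + doi,
                  headers={"User-Agent": "fallback-scraper-sketch"})
    with urlopen(req) as resp:
        record = json.load(resp)["message"]
    return {
        "doi": doi,
        "title": record.get("title", [""])[0],
        "authors": [(a.get("given", "") + " " + a.get("family", "")).strip()
                    for a in record.get("author", [])],
        "year": record.get("issued", {}).get("date-parts", [[None]])[0][0],
        "journal": record.get("container-title", [""])[0],
    }
```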

I believe CiteULike handles some journals in this way when it can't scrape them. CiteULike's big selling point is that it handles almost all journals, so you rarely have to add things by hand.
