Facebook Sharing improvement - Can we have the Issue title in the shared link?

Issue #263 resolved
udemi repo owner created an issue

If I share one of my issues on Facebook, the link just shows the standard udemi text. Instead, can we have the title of the issue (and maybe even some more info) in the link text?

So I shared this link on Facebook: http://udemi.org/issue/view/13

Below is the image I got:

Why is this important? Because if we can get people to share specific issues on Facebook and get likes on them, we have a viral feature for udemi :)

Comments (16)

  1. David Marrs

    OK, I think I know what is happening: rather than getting the page title details from the parameters we send to Facebook in their http://www.facebook.com/sharer.php URL (along with the URL of the Udemi issue/policy/action page), they take the URL we send over and scrape it for Open Graph ('og') tags, as described at https://developers.facebook.com/docs/sharing/webmasters. Hopefully they run JavaScript when they scrape the page, so we can update the tags inside Angular.
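
    As an illustration, this is roughly how the share link gets built today (a sketch, not the exact Udemi code). The 'u' parameter is the only input Facebook really uses: it fetches that URL and scrapes it for tags.

    ```javascript
    // Sketch: building the sharer.php link. Facebook fetches the page at
    // 'u' and scrapes it for og tags; any extra title/description
    // parameters appended to sharer.php get ignored, which matches the
    // behaviour described above.
    function buildFacebookShareUrl(pageUrl) {
      return 'http://www.facebook.com/sharer.php?u=' + encodeURIComponent(pageUrl);
    }
    ```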

    To test this I'll need to get the new code onto dev.udemi.org first.

    I'll pick this up again on Saturday evening.

  2. David Marrs

    Right, I've revisited my old work on the tour pop-ups (which prevented every page request from being redirected to /welcome), brought it up to date, merged it into the develop branch and deployed it to dev.udemi.org. Unfortunately that has not fixed the issue.

    I think the Open Graph ('og') tags need to be set before the base HTML is sent to the browser/robot (Facebook doesn't run JavaScript on the page before reading the tags). We'll need to look into what we can do about this.
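
    For reference, this is the sort of thing the crawler needs to find in the raw HTML response, sketched as a small helper. The field names here are assumptions for illustration, not the real Udemi schema.

    ```javascript
    // Sketch: build the og meta tags Facebook's crawler needs to see in
    // the raw HTML (it does not execute JavaScript before reading them).
    function renderOgHead(issue) {
      // Escape values so titles containing &, quotes or angle brackets
      // don't break the attribute values.
      const esc = (s) => s.replace(/&/g, '&amp;').replace(/"/g, '&quot;')
                          .replace(/</g, '&lt;').replace(/>/g, '&gt;');
      return [
        `<meta property="og:title" content="${esc(issue.title)}" />`,
        `<meta property="og:description" content="${esc(issue.description)}" />`,
        `<meta property="og:url" content="http://udemi.org/issue/view/${issue.id}" />`,
      ].join('\n');
    }
    ```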

  3. udemi reporter

    Thank you @dwmarrs. Just a quick note: before we deploy anything to udemi.org (prod.udemi.org) we always need to test the new changes thoroughly, as we have users from Monday onwards.

  4. David Marrs

    It looks like we have a few options:

    A) we use a service like https://prerender.io/, http://www.brombone.com/ or http://getseojs.com/ to cache versions of each page after all the JavaScript has fired, and deliver those to crawlers like Facebook's. But they cost money...

    B) we serve alternate static versions of the pages up to crawlers ourselves.

    C) we use something like phantom.js to build up a cache of prerendered html files (representing the DOM after all the JS fires) ourselves, and serve those up to crawlers.

    Personally, I've been thinking lately that, as interesting as it's been to work on, we've made a bit of a rod for our own back by having the UI as a single page app.

    Perhaps it would be easier to maintain and debug in future if we served up individual HTML files for individual URLs, querying the Udemi API server-side with Node/Express for the main parts of a page that make up the meta tags (e.g. a policy's name and description), but still making the API calls for comments etc. from the client side. Or we could load the policy map page server-side, with the 'Policies' submenu item highlighted server-side, but still update all the policies under the map and the statistics to the right of the map with client-side API calls.
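
    A very rough sketch of that idea, using only Node's core http module. The /policy/view route, the field names and the stubbed API call are assumptions for illustration, not the real Udemi API.

    ```javascript
    // Sketch of the hybrid idea: render the parts of the page that feed
    // the meta tags on the server, and let the browser fetch the rest
    // (comments, statistics) from the API afterwards.
    const http = require('http');

    // Stand-in for a server-side call to the Udemi API (e.g. a GET for a
    // policy by id). A real version would make an HTTP request here.
    function fetchPolicy(id) {
      return Promise.resolve({ id, name: 'Example policy', description: 'Example description' });
    }

    const server = http.createServer(async (req, res) => {
      const match = req.url.match(/^\/policy\/view\/(\d+)$/);
      if (!match) { res.writeHead(404); return res.end(); }
      const policy = await fetchPolicy(match[1]);
      res.writeHead(200, { 'Content-Type': 'text/html' });
      res.end(`<!DOCTYPE html>
    <html>
    <head>
      <meta property="og:title" content="${policy.name}" />
      <meta property="og:description" content="${policy.description}" />
    </head>
    <body>
      <!-- Comments, related actions etc. are still fetched client-side. -->
      <script>/* client-side API calls build up the rest of the page */</script>
    </body>
    </html>`);
    });
    ```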

    However, we should still keep the code we have for the single page angular app, and use it to build a 'Progressive web app': a site that can load full screen from an icon on the tablet/phone home screen - my understanding is that a site does need to be single page in order to remain at full screen (at least that's how it used to be when I last tried something like this years ago). See https://developers.google.com/web/fundamentals/getting-started/your-first-progressive-web-app/?hl=en

    We just need to break down the existing JavaScript into pieces that can be used in both the desktop version (individual full HTML files sent from the server) and the mobile version (the single-page Angular progressive web app).

    I'm not saying this will be quick, or easy, but I think it could make life easier in future.

    I dunno, I may not have explained this very well: what do you think @enricosoft? I know you recently came up with an estimate of what it would take to rebuild the UI after the prototype, and you've been talking about moving it over to be more component-based for quite a while.

  5. Enrico Piccini

    @dwmarrs Hi, I think we should continue to use Angular to build a web app that we can reuse with Ionic. The only problem we have is with the Facebook share, because the Facebook crawler doesn't run JavaScript. In that case the simple solution is to create a "proxy" page with Node.js; this would be the only page we develop with a back-end language.

  6. Enrico Piccini

    ANALYSIS:

    URL to share on social networks ---> http://udemi.org/issue/view/14

    The social sharer will share a URL like this ---> http://udemi.org/sharer/?id=14&title=ESCAPED-TITLE&description=ESCAPED-DESCR&slug=SLUG (the URL rewriting for this sharer will be managed directly by Node.js, not by NG routes)

    The Facebook crawler at this point will try to get the OG tags from the link above.

    If the UserAgent string contains 'facebookexternalhit', we render the HTML with the OG meta tags filled in from the querystring values. If the UserAgent is not Facebook's, we redirect the user with an HTTP 302 to the correct URL (http://udemi.org/issue/view/14).

    This "proxy" page must be created with a back-end language, and in this case we will use Node.js to do the job.
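
    A minimal sketch of that logic as a plain function, so it's easy to test outside whatever Node framework gets picked. The parameter names follow the URL above; the title/description are assumed to arrive already escaped, as the sharer URL indicates.

    ```javascript
    // Sketch of the "proxy" page: crawlers get og tags, humans get a 302.
    function handleSharer(userAgent, query) {
      if (/facebookexternalhit/i.test(userAgent || '')) {
        // Facebook's crawler: serve the og tags, filled from the querystring.
        return {
          status: 200,
          body: `<!DOCTYPE html><html><head>
    <meta property="og:title" content="${query.title}" />
    <meta property="og:description" content="${query.description}" />
    <meta property="og:url" content="http://udemi.org/issue/view/${query.id}" />
    </head><body></body></html>`,
        };
      }
      // Everyone else: redirect to the real Angular page.
      return { status: 302, location: `http://udemi.org/issue/view/${query.id}` };
    }
    ```

    In Express, for example, this would sit behind a /sharer/ route that passes `req.get('User-Agent')` and `req.query` straight through.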

  7. David Marrs

    OK, cool, that sounds like option B of the three I listed: glad you're studying Node.js!

    I totally agree with your comment about ionic, but I do think we've just made things a bit harder for ourselves by going single page on the desktop. :-(

  8. Enrico Piccini

    @dwmarrs Without the single page we wouldn't be able to do everything from the front-end using only the API; we would have needed back-end support (like Node or Django). The problem was that we migrated from existing code (written for Django) and then introduced AngularJS (our code wasn't designed for Angular) along with the idea of using only APIs (splitting the front-end from the back-end), so we were forced into this choice... but I think it is the right way...

    For the new front-end project we will have to study all the flows better and do a good analysis before starting development.

  9. David Marrs

    @enricosoft yeah, yeah I remember that far back :-)

    Splitting into API and UI made/makes sense, it was & is a good idea.

    What I'm thinking is this: at the moment Node.js is a tiny backend for the UI, mostly just there to serve up index.html, and the JavaScript running on index.html makes all the queries to the API. It might have been simpler to make the JavaScript running on the (UI) backend a bit bigger and the JavaScript running on index.html a bit smaller, i.e. have some of the API queries made from the (UI) server, returning separate HTML pages to the browser, which then make any additional API queries needed to build up the rest of the page. For example: make the API query for a policy on the Node side, to get its name and description and put those in the meta tags, and then make the API queries for comments, related actions (assuming that was its own API query anyway) etc. - the sort of things you wouldn't bother to put into meta tags - in the browser.

    But then, that might have put us back at the same problem we had with Django never just working when we pulled down any fresh changes from the repo (only with node instead) :-)

    Also, I don't know if we need a single page website to make a Windows 10 app anyway (as I was talking about the desktop) ;-)

    One day it'll be perfect!!! Ha!
