Single Page Application SEO Optimizations

Client-side rendered sites are not, on their own, fully crawlable by search engines, and this was true even before the recent burst of JS framework technologies. In 2009 Google provided developers with an AJAX crawling scheme, but as cited here, it is no longer supported and is officially deprecated.

Here is the deal: modern JS frameworks such as React, Angular, Ember, Vue, and Aurelia have mixed support for crawling techniques. Some very promising frameworks are losing popularity because they do not cover all aspects of web content delivery. The web development heaven they promise is not as easy as it looks in a single hello-world demonstration, and markets that depend on web marketing remain bound to server-side scripting languages.

There are some solutions that let web crawlers render and extract information from your single page applications, but most of them require extra work.

A valuable link on crawling settings: “Robots meta tag and X-Robots-Tag HTTP header specifications”.
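For example, to keep a page out of the index you can either add a robots meta tag to the HTML head:

<meta name="robots" content="noindex, nofollow">

or send the equivalent HTTP response header:

X-Robots-Tag: noindex, nofollow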

1st solution

Use a pre-renderer and serve the HTML content of your pages from your own service. Note that not all JS frameworks provide this functionality; Angular and Angular 2 are the only ones with fully supported pre-rendering engines.
Alternatively, you can build a robust caching service by rendering all your pages with a headless browser such as PhantomJS and storing the HTML output to serve whenever a bot requests it. The caveats are that the output is not guaranteed to match what a real browser produces, and you add the overhead of refreshing the static content to your deployment procedure.
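As a minimal sketch of that caching approach (assuming PhantomJS is installed; the file names, URLs and the 2-second render delay are illustrative), a script like the following loads a route, waits for the framework to render, and writes the resulting HTML to disk:

// render.js - run as: phantomjs render.js https://yoursite.com/#/contact contact.html
var page = require('webpage').create();
var system = require('system');
var fs = require('fs');
var url = system.args[1];
var outFile = system.args[2];

page.open(url, function (status) {
  if (status !== 'success') {
    console.log('Failed to load ' + url);
    phantom.exit(1);
  }
  // Give the client-side framework time to render the view before capturing it.
  window.setTimeout(function () {
    fs.write(outFile, page.content, 'w');
    phantom.exit(0);
  }, 2000);
});

You would run this for every route at deploy time and store the output, which is exactly the refresh overhead mentioned above.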

2nd solution

Use a hosted service such as Prerender.io, BromBone, or SnapSearch, and redirect (or proxy) requests to the service URL whenever you detect that a crawler is indexing your page.
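A rough sketch of that redirection, assuming an Express server sits in front of the SPA (the bot list and the service endpoint format below are assumptions for illustration; in practice these services ship their own middleware):

var express = require('express');
var https = require('https');
var app = express();

// Crude crawler detection by User-Agent; hosted middleware is more thorough.
var BOTS = /googlebot|bingbot|yandex|baiduspider|facebookexternalhit|twitterbot/i;
var SERVICE = 'https://service.example.com/'; // assumed pre-render endpoint

app.use(function (req, res, next) {
  var ua = req.headers['user-agent'] || '';
  if (!BOTS.test(ua)) return next(); // normal users get the SPA as usual
  // Fetch the pre-rendered HTML for this URL and stream it back to the crawler.
  https.get(SERVICE + 'https://yoursite.com' + req.originalUrl, function (proxied) {
    res.status(proxied.statusCode || 200);
    proxied.pipe(res);
  }).on('error', function () { next(); }); // fall back to the SPA on failure
});

app.use(express.static('public')); // the regular SPA assets
app.listen(3000);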

3rd solution

Do nothing. It is somewhat vague to claim that Google now crawls SPA websites. There used to be a special meta tag that told the crawler a page uses #-based URL routing.

Previously, adding
<meta name="fragment" content="!">
told the crawler to fetch an "escaped fragment" version of the page. Under the same scheme, a hash-bang URL was transformed as follows:
https://yoursite.com/#!/contact to
https://yoursite.com/?_escaped_fragment_=/contact
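A minimal sketch of serving those escaped-fragment requests, assuming an Express front end and a snapshots/ directory of pre-rendered pages (both names are illustrative, not part of any standard):

var express = require('express');
var app = express();

// A real browser never sends _escaped_fragment_, so it falls through to the SPA.
app.get('/', function (req, res, next) {
  var fragment = req.query._escaped_fragment_;
  if (fragment === undefined) return next();
  // Crawler request: map the fragment to a pre-rendered snapshot file.
  var name = (!fragment || fragment === '/') ? 'index'
    : fragment.replace(/^\//, '').replace(/\//g, '-');
  res.sendFile(name + '.html', { root: 'snapshots' });
});

app.use(express.static('public'));
app.listen(3000);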

Assuming that Google now crawls content after the #, you can cross your fingers and hope that crawlers (not just Google's) will get more mature in the future. Try explaining that to the marketing department.

4th solution

Use the href attribute value to point at the static page (or the pre-rendering service) and use the onclick handler for actual users, as below:
<a href="/contact.html" onclick="window.location='/#/contact'; return false;">Contact</a>
This assumes you have a pre-renderer or a pre-render service set up.
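The same idea also works without inline handlers; here is a sketch where data-spa-route is a made-up attribute for this example:

// Crawlers follow the href to the static snapshot; real users stay in the SPA.
document.addEventListener('click', function (e) {
  var link = e.target.closest('a[data-spa-route]');
  if (!link) return;
  e.preventDefault(); // skip the static page
  window.location.hash = link.getAttribute('data-spa-route'); // e.g. '/contact'
});

with markup such as:

<a href="/contact.html" data-spa-route="/contact">Contact</a>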

