Convincing businesses, marketers and developers of the real importance of SEO can be a tricky journey.
Despite the wide acceptance and love that SEO enjoys today, there are (and will always be) businesses that want to see instant results.
Try making a specific case for technical SEO in this scenario, and you will find no takers. The deep-rooted, simplistic understanding of SEO is so single-minded – keywords, keywords and more keywords – that technical SEO often has to take a backseat.
It may not be the most glamorous thing out there, but technical SEO can add to your bottom line year in and year out.
The fact that we have to start from scratch should be enough to tell you that technical SEO has always been incredibly underrated.
Businesses hire a team of developers and just expect them to build a website that’s ready to battle it out against the competition. But the truth is, not a lot of developers follow the best web standards. They are paid to build you a beautiful and functional website – and that’s all they will do for you. Just like you and I, they are in a race against time with multiple leads to care for.
In the end, many websites never get the technical SEO treatment they deserve. Much of this misfiring can be avoided if you know what technical SEO is. In the simplest of words, technical SEO makes life easier for search engine crawlers by creating a well-defined environment. In this environment, the bots can easily find what they are looking for, see what you have to offer, index pages without any hiccups and render them to the end-user as and when needed.
If you know and realise the importance of general SEO, it shouldn’t be hard to extend the same line of thought to the specifics of technical SEO.
But SEO has its limitations. The results are rarely quick and straightforward. Moreover, the measurement metrics (the KPIs) are standalone entities that you have to actively convert into ‘money metrics’ such as ROI and cost per event.
Because of this long-winded process, it's easy to think that SEO has little or no direct impact on the bottom line – and this is where things start unravelling. Businesses, especially young and small ones, want to see their campaigns produce fabulous results in no time. This, quite predictably, drives them away from SEO and towards PPC.
This is not to say that PPC doesn’t work. It can definitely yield good returns if you know what you’re doing. But, over a long enough timeline, technical SEO will always have your bottom line covered.
I will support this argument by walking through the most prominent ways technical SEO affects your bottom line.
SEO of every kind – local, content, general or technical – is ultimately trying to put your website and your business in front of people.
As far as technical SEO is concerned, it has a direct impact on visibility. Quite ironically, that impact is largely invisible.
There are two ways to look at the visibility of your website – the visibility to search engines and the visibility to users. The latter follows the former, and the former is completely dependent on the health of technical SEO.
Crawlability is the ability of various elements on your website to be crawled by search engine bots.
Indexation follows crawlability. The elements that are crawled are then indexed by bots according to their best understanding of what those elements are.
What do you think happens when the important content resources on your website are just thin air for crawlers?
They move on!
Essentially, your website loses all the SEO value from the elements that crawlers can't see, understand and index. A good example of this is outdated CSS code that improves the UX but never really gets crawled.
Crawlability and indexation have an enormous impact on overall SEO success. Crawling is the very first interaction your website has with search engines, and if that goes badly, your site will always rank lower than it deserves.
Lower rankings mean thinner organic traffic and, eventually, lost profits.
Fixing crawlability and indexation issues takes several days if your website is large and/or carrying SEO baggage. You can find the advanced crawl errors report in your Google Search Console. A good starting point is to redo the robots.txt file from scratch and see if the crawling issues persist.
If they do, you can again turn to Google Search Console and analyse the error codes generated by the bots. By suspending problematic elements, minifying CSS and converting error-prone JS to simple HTML, you can instantly improve crawlability. While this works, I recommend giving your dev team a heads-up so they avoid including similar non-crawlable elements in the future.
Remember, this is a temporary, basic fix. To dig deeper into what’s stopping your pages from being crawled (and to stop losing profits in the long run), a detailed technical SEO audit is necessary. Click here to know more about how we, at HQ SEO, do this.
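If you are redoing robots.txt from scratch, a minimal, permissive starting point looks something like this (the disallowed paths and sitemap URL below are placeholders – adapt them to your own site):

```txt
# Allow all well-behaved crawlers by default
User-agent: *
# Block only the sections you genuinely don't want crawled
Disallow: /admin/
Disallow: /cart/

# Point crawlers at your sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Start permissive and add Disallow rules sparingly – an over-aggressive robots.txt is one of the most common reasons pages quietly vanish from the index.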
Sitemap errors are more common, and their impact on technical SEO more fundamental, than you might think.
Once your website has a sitemap that Google recognises, that sitemap remains central to how crawling takes place. A well-formatted sitemap lets you optimise your crawl budget and helps Google index your pages faster.
Broken or poorly formatted sitemaps can cause frequent crawl errors that, over time, drastically hurt the trust Google has in your website. This automatically depletes your crawl budget. In turn, when you publish an important content marketing piece or a great lead magnet, it will take forever for Google to index it (even if you submit it manually, there are no guarantees!). This directly impacts the number of leads generated.
Fixing sitemap errors is relatively easy, but it involves – again – starting from scratch. Follow this article to learn how to go about it.
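For reference, a well-formed XML sitemap follows the sitemaps.org protocol. The URLs and dates below are placeholders; the structure is what matters:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2020-01-15</lastmod>
  </url>
  <url>
    <loc>https://www.example.com/services/</loc>
    <lastmod>2020-01-10</lastmod>
  </url>
</urlset>
```

Keep the lastmod dates honest (crawlers learn to ignore sitemaps that claim everything changed today), and resubmit the file through Google Search Console once it validates.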
Canonicalisation is an important part of technical SEO. While discussing our mini technical SEO audit, I mentioned that using canonical elements wisely is the key to maintaining consistency across the website and optimising the crawl budget.
The basic idea behind canonicalisation is to pick the best option from a list of similar resources. You could canonicalise content, URLs, scripts and even meta tags, for that matter. For now, we will stick to content.
Duplicate content is a menace to everyone involved in the game.
Nobody has enough time on their hands to read the same page or process the same information over and over again. Search engine crawlers are no exception. If you ask them to crawl near-identical pages multiple times, you're simply wasting your crawl budget (and we know that's bad).
Moreover, if you have some really great content that's produced and published in multiple versions (print-friendly versions, archive pages, comment threads – it happens a lot), chances are your backlinks will be split across all these URLs.
What this means is that your primary content URL ends up receiving less backlink juice than it could and should.
Large e-commerce websites are prone to generating thousands (if not millions) of near-duplicate URLs, thanks to sorting and faceting options. Consider a simple example of a website selling t-shirts, with facets such as gender and colour and sorting options such as price and popularity.
This simple arrangement (arguably implemented for a better UX) can create over a thousand near-duplicate URLs, and you don't want all of them indexed. Pick the query modifiers you want to keep (gender and colour, in this case) and add canonical tags to all other URLs to limit the indexing.
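As an illustration (the domain and parameter names here are hypothetical), each sorted variant of a faceted page would declare the parameter-trimmed version as its canonical URL:

```html
<!-- On a faceted URL such as:
     https://www.example.com/t-shirts/?gender=men&colour=red&sort=price-asc -->
<!-- keep the meaningful modifiers (gender, colour) and drop the sort parameter -->
<link rel="canonical" href="https://www.example.com/t-shirts/?gender=men&colour=red" />
```

With this in place, crawlers consolidate the signals from every sort order onto one URL instead of splitting them across dozens of near-duplicates.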
Google has some brilliant ideas for the future of the internet and weeding out duplicate content sits at the top of the list. Having your website host multiple content pieces that resemble each other tells Google that your website is either a bunch of spam or run by amateurs. Either way, Google will see no value in sending people over.
Duplicate content thus acts as an invisible bottleneck that keeps squeezing your organic traffic. No organic traffic = lost profits.
Fixing duplicate content is a tedious task if you have just recently discovered that you host hundreds of such URLs. The easiest way to get there is to collect all the duplicate URLs, sort them by parameters and add relevant canonical tags to them. You can use Google Search Console and Bing Webmaster Tools to get this done.
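If you're working from a URL dump (exported from Search Console or a crawler), a small script can do the collect-and-sort step for you by grouping URLs that differ only in "noise" parameters. This is a hedged sketch – the NOISE_PARAMS list is an assumption you'd tune to your own site:

```python
from urllib.parse import urlsplit, parse_qsl, urlencode, urlunsplit
from collections import defaultdict

# Parameters that create near-duplicates rather than distinct content.
# This list is an assumption -- adjust it to match your site's URLs.
NOISE_PARAMS = {"sort", "order", "page", "utm_source", "utm_medium"}

def canonical_form(url):
    """Strip noise parameters and sort the rest, so equivalent URLs compare equal."""
    scheme, netloc, path, query, _ = urlsplit(url)
    kept = sorted((k, v) for k, v in parse_qsl(query) if k not in NOISE_PARAMS)
    return urlunsplit((scheme, netloc, path.rstrip("/") or "/", urlencode(kept), ""))

def group_duplicates(urls):
    """Map each canonical form to the raw URLs that collapse into it (duplicates only)."""
    groups = defaultdict(list)
    for url in urls:
        groups[canonical_form(url)].append(url)
    return {k: v for k, v in groups.items() if len(v) > 1}

urls = [
    "https://example.com/t-shirts/?colour=red&sort=price",
    "https://example.com/t-shirts/?sort=popularity&colour=red",
    "https://example.com/t-shirts/?colour=blue",
]
print(group_duplicates(urls))
```

Each group in the output is a set of URLs that should share one canonical tag – the dictionary key is a reasonable candidate for the canonical target.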
The speed at which your website loads is important on the crawler front as well as the user front.
Having a slow-loading website means that crawlers can't access all the pages in your sitemap fast enough. In such situations, the bots have two options – either drop the crawl rate (which eats into your crawl budget) or index/render the URLs only partially.
Neither of these two options is good for you.
This is easier to understand. Nobody likes bulky websites that insist on loading dozens of scripts, with loads of cookies thrown in for good measure. Google knows it all too well.
Here’s an interesting study carried out by Pingdom.
As you can see, the optimum load time for an average content page is somewhere between 2 and 3 seconds. At 2 seconds, the bounce rate is 9% (acceptable). Add just 2 seconds to it, and the bounce rate jumps to over 20%. If your website doesn't load within the first 5 seconds, nearly 50% of your visitors will bounce – probably to your competitor's website. A similar pattern can be observed for pageviews.
Speaking of code bloat, Forbes.com is the first prominent name that comes to mind. We all want to see our names up there, but the website continues to draw flak from popular blogs and, not to forget, angry Redditors. The reason? Slow-loading ads, autoplay videos and – wait for it – a mandatory quote-of-the-day interstitial that adds to the load time.
Partially rendered pages and non-indexed content have a direct impact on organic traffic. Even if you manage to get crawlers to index a bulky page, users will still bounce off it.
Fixing slow-loading pages is a job best left to your dev team. Common culprits are slow servers/DNS, unresponsive JS, unnecessary CSS, bloated code and poorly embedded media files.
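Two of the quickest wins against script and media bloat can be sketched in plain HTML (the file names here are placeholders): deferring non-critical JavaScript so it doesn't block rendering, and lazy-loading below-the-fold images.

```html
<!-- Defer non-critical JavaScript: the page renders first, the script runs after -->
<script src="analytics.js" defer></script>

<!-- Lazy-load below-the-fold images (natively supported by modern browsers);
     explicit dimensions prevent layout shifts while the image loads -->
<img src="product-photo.jpg" loading="lazy" alt="Product photo" width="800" height="600">
```

Neither change alters what the visitor ultimately sees – they only change when the heavy resources are fetched, which is exactly what the load-time metrics above measure.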
Going mobile first isn’t a new thing anymore. It is also not something you should be having second thoughts about. If you run a business website, it needs to be optimised for the best mobile experience – there’s no choice.
The simple reason is – Google is going mobile first too.
Data Source: Frontburner Marketing
If your website isn’t mobile optimised (crawlers check for this), it will get pushed so far down the SERPs that you can, quite literally, lose all of your mobile traffic for important keywords.
When a smartphone-wielding 30-something (the prime consumer demographic) sees your website loading poorly on their phone, they know what to think of your business right away. This further inflates your bounce rate, and high bounce rates will in turn impact your desktop traffic – it's a vicious cycle.
Again, this is a job for your dev team. If you're developing your website from scratch, be sure to make every element responsive, and keep using tools like this to verify the mobile view in real time. You can also consider enabling AMP – a lightweight, bloat-free way of rendering heavy pages on mobile browsers.
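At the most basic level, a responsive page starts with a viewport meta tag and a few media queries. The breakpoint and class names below are illustrative, not a prescription:

```html
<!-- Tell mobile browsers to use the device width instead of a zoomed-out desktop view -->
<meta name="viewport" content="width=device-width, initial-scale=1">

<style>
  /* Illustrative breakpoint: simplify the layout on narrow screens */
  @media (max-width: 600px) {
    .sidebar { display: none; }
    .content { width: 100%; }
  }
</style>
```

Without the viewport tag, even perfectly written media queries never fire on phones – the browser pretends to be a desktop – so this one line is worth checking before anything else.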
A full-scale technical SEO audit is necessary to unearth scripts and CSS elements that may be stopping your website from being mobile friendly. To claim your free proposal for an audit, write to us here.
I have, on purpose, maintained a negative tone while discussing the impact technical SEO has on your bottom line. This was to drive home the point that business websites simply can't ignore technical SEO and expect on-page SEO to keep bringing the organic traffic in.
Just as bad technical SEO can hurt your eventual sales and profits, good technical SEO puts your website at a vantage point from which your on-page SEO efforts can maximise organic traffic.
If your website suffers from continually decreasing organic traffic despite having good on-page SEO, you most certainly need an end-to-end technical SEO audit. Drop us a line here to get in touch with us. You can also request a free proposal by filling in the required information in the form below.