
Technical SEO Checklist: Part 1

February 18, 2019 - Chris Garner

Introduction to Technical SEO

Technical SEO looks at all the elements that sit behind a website. These aren’t the parts a typical user will see, but they will feel the benefit of them through their overall experience.

There are many elements to technical SEO optimisation. Some have a greater impact than others, but ticking each one off the list will help ensure your website is technically sound. This will work alongside your content, help you move up the rankings and increase your visibility in your chosen area of expertise.

In part one of our technical SEO checklist, we look at some of the fundamental elements that will get you started. These include sitemaps, the robots.txt file, internal linking, migrating to HTTPS and submitting changes through Google Search Console.

 


Check your Sitemap

What is a Sitemap?

Simply put, a sitemap is a list of the pages that exist on a website, organised hierarchically or in order of importance. It is usually written in a format known as XML (eXtensible Markup Language).

If you look at a sitemap, you will see the homepage at the top, with high-level pages following in order of importance. This is the structure of the site as search engines see it.
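
As a rough illustration, a minimal XML sitemap (using the placeholder domain example.co.uk and made-up dates) might look something like this

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.co.uk/</loc>
    <lastmod>2019-02-18</lastmod>
  </url>
  <url>
    <loc>https://www.example.co.uk/services/</loc>
    <lastmod>2019-02-01</lastmod>
  </url>
</urlset>

Each <url> entry holds the location of a page, and optional tags such as <lastmod> tell crawlers when that page last changed.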

Why do you need a sitemap?

The primary function of the sitemap is to aid with content discovery and provide a means for search engines to understand the overall structure of a website.

If you don’t have a sitemap, it doesn’t mean your content won’t be discovered; it will just slow the process down and could mean important pages are missed.

For WordPress sites, there are a number of plugins, including Yoast and All-In-One SEO Pack, that can automatically generate a sitemap for you.

Review your sitemap

When you take a look at your sitemap, there are a few points to check to make sure it is optimised for search engines to crawl.

Keep it error free

You will need to make sure it’s free from errors, that it doesn’t include any redirects, and that the URLs are clear for indexing, i.e. they don’t carry any noindex or nofollow directives.

Does it include new pages?

Does it include any new pages you have created? Have any old pages that no longer exist been removed?

Only include what you need to

It’s best to only include content in your sitemap that is genuinely useful. Try to keep author and tag archives out of the sitemap, as they add little value in the grand scheme of things.

Submit it to Search Console

If you haven’t already, add your sitemap to GSC (Google Search Console) and Bing Webmaster Tools as a priority. This will help speed up the process of content discovery and indexing.

Referencing the sitemap in the robots.txt file is also best practice.
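
For example, adding a single line like the one below (using the same placeholder domain as the examples that follow) points crawlers straight at your sitemap

Sitemap: https://www.example.co.uk/sitemap.xml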

 


Check your robots.txt file

What is a robots.txt file?

robots.txt is a small but mighty text file that sits in the root directory of a website. Its sole function is to instruct user-agents, web crawlers or robots on whether they can access a website and its content.

The robots.txt file is also case-sensitive, so robots.txt does not equal Robots.txt or robots.TXT. The same rule applies to the contents of the file, so any directories, URLs, etc. need to match the case used on the site.

Why do you need a robots.txt file?

Basically, this can be the life or death of a website. In its simplest form, it can be the difference between a website being crawled and its content indexed… or not.

The robots.txt file is flexible and versatile. It can allow a whole site to be crawled, or block (or allow) specific user-agents, directories, sub-directories and URL patterns through wildcard matching.
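
As a hypothetical illustration (the crawler name and directory here are made up), a more targeted robots.txt might block one crawler entirely, keep everyone out of an admin area and use a wildcard to block URLs containing a query string

User-agent: BadBot
Disallow: /

User-agent: *
Disallow: /admin/
Disallow: /*?

Sitemap: https://www.example.co.uk/sitemap.xml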

It is also worth noting that malicious crawlers (or spam bots) will ignore the rules set out in a robots.txt file, so be careful with what you publish to the web.

What does a robots.txt file look like?

Depending on whether or not you want your website indexed by search engines, your robots.txt file should look like one of the following examples.

Robots.txt Disallow

User-agent: *
Disallow: /
Sitemap: https://www.example.co.uk/sitemap.xml

The * next to “User-agent” is a wildcard and means that any robot or crawler should respect the rules that follow.
On the next line, the / next to “Disallow” stops any of those robots from crawling, and ultimately indexing, your website’s content.

Robots.txt Allow

User-agent: *
Disallow:
Sitemap: https://www.example.co.uk/sitemap.xml

Like above, the “user-agent” rule applies to all robots wanting to crawl your website.
Unlike the above example, all crawlers are allowed to access your website, view and index your content. This is because the / next to “Disallow” has been removed.

This one little change can make a big difference to your website in search, which is why it can be the life or death of a site.

This is the simplest robots.txt file you should have on your site. If you don’t have a robots.txt file, it will be assumed by search engines and other crawlers that they can access your site.

 


Internal Linking Strategy

What is internal linking?

Internal links are links between pages or documents that sit on the same domain as the website. This also includes content on a sub-domain, as long as it is a sub-domain of the main domain.

These can be found in a number of places. Primarily, internal links sit in the header and footer navigation menus of a website. You can also find links within the body content of a page. These are handy for linking together content that might not be important enough to sit in a navigation menu, but is still useful to users (and search engines).

Why is internal linking important?

Having internal links helps for a number of reasons. From a user perspective, they provide additional information on the topic being read. This can expand on a point or offer another perspective the reader may not have considered.

For search engines, internal links help build a picture of how content is linked together. The more information there is around a topic, the more authoritative a site will appear, especially when supported by user metrics such as time on site, bounce rate and pages viewed.

Internal linking also helps define the architecture and the overall hierarchy of content and should form part of the content strategy.

As mentioned above, a lot of internal linking will be done automatically. This happens naturally through the site navigation as you build out the content on your site.

In addition to this, you can add links within the content of a page to other pages that are related.

Take our SEO page on the Fifteen website, for example. This page covers a range of sub-topics based around the broad topic of SEO. As a result, additional pages have been created which provide more information on each area, covering Local, International and Content Marketing.

In this case, SEO is a content pillar. This pillar is supported by pages that relate to the topic of SEO and expand on each point. As a result, there is a direct relationship between each page.

How do you build internal links?

When it comes to adding links to your content, you need to add the following into the HTML (Hyper Text Mark-up Language).

A simple HTML link will look like this

Absolute URL

<a href="https://www.fifteendesign .co.uk/digital-marketing/seo/">SEO</a>
An absolute URL includes the website protocol (https://) and the website’s domain (www.fifteendesign.co.uk). This is a requirement when linking to content off your site or on a sub-domain of your own site, but it is optional when linking to content on your own site.

Relative URL

<a href="/digital-marketing/seo/">SEO</a>
A relative URL does not include the website protocol or the domain. When this type of link is clicked, the assumption is that the page exists on the same domain. Using a relative URL keeps your code a little cleaner, and if you migrate domains your links will still work, assuming your site structure does not change.

Adding a title attribute to your link

In addition to the link, it is best practice to include a “title” attribute. This is a brief description of the content on the page being linked to. It helps search engines and assists with site accessibility for those who use screen readers. It can also help anyone who hovers over a link decide whether or not they want to click it.
<a href="/digital-marketing/seo/" title="Overview of SEO">SEO</a>


Migrate to HTTPS

What is HTTPS?

HTTPS stands for Hyper Text Transfer Protocol Secure, which is an extension of HTTP. It offers secure end-to-end encryption of data transferred between a website and your browser (and back again). This is done through either an SSL (Secure Sockets Layer) or TLS (Transport Layer Security) certificate.

Why is migrating to HTTPS important?

There are a number of benefits to having a website served over HTTPS, but here are our top three.

Increased Trust

Having a secure website reassures potential customers that any data they submit will be kept safe, and shows you are a responsible business.

Conversion Rate Boost

Whether you’re a lead-generation or e-commerce business, potential customers are more likely to complete their desired action knowing that your site, and their personal data, is secure.

Increases in Organic Traffic

Search engines prefer a site to be secure, and although the effect is only small, a secure website has the potential to see a ranking boost over a competitor whose site is not.
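
Once an SSL/TLS certificate is installed, the usual final step is to redirect all HTTP traffic to HTTPS. As a rough sketch, assuming an Apache server with mod_rewrite enabled (other servers will need their own equivalent), a set of .htaccess rules along these lines forces the secure version of every URL

RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

The 301 status tells search engines the move is permanent, so the authority of the old HTTP URLs is passed across to their HTTPS counterparts.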

 


Submitting changes to Google

We’re focusing on Google here due to it having the largest market share of all search engines and being the largest driver of organic traffic for most websites.

This does not mean other search engines should be ignored. The likes of Bing also provide results for other search engines, including (but not limited to) Yahoo! and DuckDuckGo. Each is a different avenue for users to find and discover your website’s content.

How do you submit changes to Google?

To submit changes you have made on any of your pages to Google, you will need to ensure your site is verified in GSC (Google Search Console). Assuming this is something you have already done, simply paste the complete URL into the box at the top and hit “Enter”.
[Screenshot: Submit URL to Google Index]

It will take a few seconds for Google to check the page and see if it can be submitted to the index.

Assuming there are no problems, simply click “REQUEST INDEXING”.
[Screenshot: Request URL Index by Google]
This process can take a couple of minutes, but once done, your changes should usually be re-crawled and cached within a couple of hours at most.

Why should you submit changes to Google?

Submitting pages to Google, whether new or heavily updated, is a good thing to do, as it ensures the correct content is being served in the SERPs (Search Engine Results Pages).

If the content is out of date, a user who clicks the link in the SERPs and finds something different to what they were expecting is likely to bounce off your site. In Google’s eyes, this makes the content look irrelevant to the user’s search, so you are likely to lose rankings.

From a user’s point of view, it could also damage a brand’s reputation, as they will lose trust in your site as a source of reliable information.

So, rather than waiting for Google to re-crawl your pages, it’s best to submit them yourself to keep your content up to date.

Continue reading part 2 of our technical SEO checklist to discover more points you need to cover off, including how to check for broken links, why you need to add redirects, how structured data adds further value and why you should use a clean URL structure for your site.

If you need any assistance with your SEO strategy, from Local to technical, outreach and link-building, get in touch with Fifteen’s experts today.
