
10 Point Technical SEO Checklist

February 18, 2019

Introduction to Technical SEO

Technical SEO looks at all the elements that sit behind a website. These aren’t typically the parts a user will see directly, but they will feel the benefit of them through their overall experience of the site.

There are many elements to technical SEO optimisation. Some have a greater impact than others, but ticking each one off the list will help ensure your website is technically sound. This will work alongside your content, help you move up the rankings and increase your visibility in your chosen area of expertise.

In part one of our technical SEO checklist, we look at some of the fundamental elements that will get you started. These are covered point by point below.


Check your Sitemap

What is a Sitemap?

Simply put, a sitemap is a list of pages that exist on a website organised in a hierarchical order or order of importance. These are usually written in a format known as XML (eXtensible Markup Language).

If you look at a sitemap, you will see the homepage at the top, with high level pages following on in the order of importance. This is the structure of the site as search engines see it.
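
As a rough illustration (the URLs below are placeholders, not real pages), a very small XML sitemap might look something like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Homepage first, with the highest priority -->
  <url>
    <loc>https://www.example.co.uk/</loc>
    <lastmod>2019-02-01</lastmod>
    <priority>1.0</priority>
  </url>
  <!-- High-level pages follow in order of importance -->
  <url>
    <loc>https://www.example.co.uk/services/</loc>
    <lastmod>2019-02-01</lastmod>
    <priority>0.8</priority>
  </url>
</urlset>

Each <url> entry holds the page address (<loc>) and, optionally, the date it was last modified and a relative priority.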

Why do you need a sitemap?

The primary function of the sitemap is to aid with content discovery and provide a means for search engines to understand the overall structure of a website.

If you don’t have a sitemap, it doesn’t mean your content won’t be discovered, but it will slow the process down and could mean important pages are missed.

For WordPress sites, there are a number of plugins, including Yoast SEO and All-In-One SEO Pack, that can automatically generate a sitemap for you.

Review your sitemap

When you take a look at your sitemap, there are a few points to check to make sure it is optimised for search engines to crawl:

Keep it error free

You will need to make sure it’s free from errors, that it doesn’t include any redirects, and that the URLs are clear for indexing; that is, they don’t carry any noindex or nofollow directives.

Does it include new pages?

Does it include any new pages you have created? Have any old pages that no longer exist been removed?

Only include what you need to

It’s best to only include content which is useful in your sitemap. Try to keep author and tag archives out of the sitemap, as they aren’t useful in the grand scheme of things.

Submit it to Search Console

If you haven’t already, add your sitemap to GSC (Google Search Console) and Bing Webmaster Tools as a priority. This will help speed up the process of content discovery and indexing.

Referencing the sitemap in your robots.txt file is also best practice.


Check your robots.txt file

What is a robots.txt file?

robots.txt is a small but mighty text file that sits in the root directory of a website. Its sole function is to instruct user-agents, web crawlers or robots on whether they can access a website and its content.

The robots.txt file is also case-sensitive, so robots.txt does not equal Robots.txt or robots.TXT. The same rule applies to the contents of the robots.txt file, so any directories, URLs and other paths need to match the site exactly.

Why do you need a robots.txt file?

Basically, this can be the life or death of a website. In its simplest form, it can be the difference between a website being crawled and its content indexed… or not.

The robots.txt file is flexible and versatile. It offers enough control to allow a whole site to be crawled, through to blocking (or allowing) certain user-agents, directories, sub-directories or URL patterns through wildcard matching.

It is also worth noting that malicious bots (or spam bots) will ignore the rules set out in a robots.txt file, so be careful with what you publish to the web.

What does a robots.txt file look like?

Depending on whether you want your website indexed by search engines or not, your robots.txt file should look like this:

Robots.txt Disallow

User-agent: *
Disallow: /
Sitemap: https://www.example.co.uk/sitemap.xml

The * next to “User-agent” is a wildcard and means that any robot or crawler should respect the rules that follow.
On the next line, the / next to “Disallow” stops any of those robots from crawling, and ultimately indexing, your website’s content.

Robots.txt Allow

User-agent: *
Disallow:
Sitemap: https://www.example.co.uk/sitemap.xml

Like above, the “User-agent” rule applies to all robots wanting to crawl your website.
Unlike the above example, all crawlers are allowed to access your website, and to view and index your content. This is because the / next to “Disallow” has been removed.

This one little change can make a big difference to your website in search, which is why it can be the life or death of a site.

This is the simplest robots.txt file you should have on your site. If you don’t have a robots.txt file, it will be assumed by search engines and other crawlers that they can access your site.
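
As an example of that flexibility, a hypothetical WordPress site might block its admin area while still allowing the one admin file that some front-end features rely on:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Sitemap: https://www.example.co.uk/sitemap.xml

Google and Bing both understand the Allow directive, which carves an exception out of the broader Disallow rule.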

 


Internal Linking Strategy

What is internal linking?

Internal links are links between pages or documents that sit on the same domain. This also includes content which sits on a sub-domain of the main domain.

These can be found in a number of places. Primarily, internal links sit in the header and footer navigation menus of a website. You can also find links within the body content of a page. These are useful for linking together content that might not be important enough to sit in a navigation menu, but is still useful to users (and search engines).

Why is internal linking important?

Having internal links helps for a number of reasons. From a user perspective, they provide additional information on the topic being read. This can expand on a point or provide another perspective the reader may not have considered.

For search engines, internal links help build a picture of how content is linked together. The more information there is around a topic, the more authoritative a site will appear, especially when backed up by user metrics such as time on site, bounce rate and pages viewed.

Internal linking also helps define the architecture and the overall hierarchy of content and should form part of the content strategy.

As mentioned above, a lot of internal linking will be done automatically. This happens naturally as you build out the content on your site and is done through the site navigation.

In addition to this, you can add links within the content of a page to other related pages.

Take, for example, our SEO page on the Fifteen website. This page covers a range of sub-topics based around the broad topic of SEO. As a result, additional pages have been created which provide more information on each area, covering Local SEO, International SEO and Content Marketing.

In this case, SEO is a content pillar. This pillar is supported by pages that relate to the topic of SEO and expand on each point. As a result, there is a direct relationship between each page.

How do you build internal links?

When it comes to adding links to your content, you need to add the following HTML (HyperText Markup Language).

A simple HTML link will look like this:

Absolute URL

<a href="https://www.fifteendesign .co.uk/digital-marketing/seo/">SEO</a>
An absolute URL includes the website protocol (https://) and the website’s domain (www.fifteendesign.co.uk). This is a requirement when linking to content off your site or on a sub-domain of your own site, but is optional when linking to content on your own site.

Relative URL

<a href="/digital-marketing/seo/">SEO</a>
A relative URL does not include the website protocol or the domain. When this type of link is clicked, the assumption is that the page exists on the same domain. Using a relative URL keeps your code a little cleaner and, if you migrate domains, your links will still work, assuming your site structure does not change.

Adding a title attribute to your link

In addition to the link text, it is best practice to include a “title” attribute. This is a brief description of the content on the page being linked to. It helps search engines and assists with site accessibility for those who use screen readers. It can also help users who hover over a link to decide whether they want to click it or not.
<a href="/digital-marketing/seo/" title="Overview of SEO">SEO</a>


Migrate to HTTPS

What is HTTPS?

HTTPS stands for HyperText Transfer Protocol Secure, which is an extension of HTTP. It offers secure end-to-end encryption of data transferred between a website and your browser (and back again). This is done through an SSL (Secure Sockets Layer) or TLS (Transport Layer Security) certificate.

Why is migrating to HTTPS important?

There are a number of benefits to having a website served over HTTPS, but here are our top three:

Increased Trust

Having a secure website reassures potential customers that any submitted data will be kept secure and shows you are a responsible business.

Conversion Rate Boost

Whether you’re a lead-generation or e-commerce business, potential customers are more likely to complete their desired action knowing that your site is secure and their personal data is protected.

Increases in Organic Traffic

Whilst the effect is only small, search engines prefer a site to be secure. So, if your website is secure, there is the potential that you will see a ranking boost over a competitor whose website is insecure.
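
Once your certificate is installed, every HTTP URL should permanently redirect to its HTTPS equivalent. As a rough sketch, on an Apache server this can be done in the .htaccess file (other servers and hosting control panels have their own equivalents):

# Redirect all insecure (HTTP) requests to HTTPS with a permanent 301
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]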

 


Submitting changes to Google

We’re focusing on Google here because it has the largest market share of all the search engines and is the largest driver of organic traffic for most websites.

This does not mean other search engines should be ignored. The likes of Bing also provide results for other search engines, including (but not limited to) Yahoo! and DuckDuckGo. Each is a different avenue for users to find and discover your website’s content.

How do you submit changes to Google?

To submit changes you have made on any of your pages to Google, you will need to ensure your site is verified in GSC (Google Search Console). Assuming this is something you have already done, simply paste the complete URL into the inspection box at the top and hit “Enter”.
[Screenshot: Submit URL to Google Index]

It will take a few seconds for Google to retrieve the page and check whether it can be submitted to the index.

Assuming there are no problems, simply click “REQUEST INDEXING“.
[Screenshot: Request URL Index by Google]
This process can take a couple of minutes, but once done, your changes should be re-crawled and cached within a couple of hours (usually, at most).

Why should you submit changes to Google?

Submitting pages to Google, whether new or heavily updated, is a good thing to do, as it ensures the correct content is being served in the SERPs (Search Engine Results Pages).

If the content is out of date, a user who clicks the link in the SERPs and finds that the page is different to what they were expecting is likely to bounce off your site. In Google’s eyes, this makes it look like the content isn’t relevant to the user’s search, so you are likely to lose rankings.

From a user’s point of view, it could also be damaging to a brand’s reputation, as they will lose trust in your site as a source of reliable information.

So, rather than waiting for Google to re-crawl your pages, it’s best to submit them yourself to keep your content up to date.


Check for broken links

What are broken links?

Broken links are links that no longer point to a working page. They can be internal links, pointing to content that sits on your own website, or external links, pointing to content on other websites.

Search engines see links, both internal and external, as a “vote of quality”. Too many broken links can negatively affect a site’s performance from both a user and a search engine perspective.

Why do you need to fix broken links?

Resolving broken links can enhance a user’s experience of a website. They don’t necessarily need to know that content has been moved (or removed); ultimately, they are looking for information or wanting to convert.

Fixing a broken link will usually mean adding a redirect; this is covered in the next point.

How do I find broken links on my website?

There are a number of tools which can be used to find broken links on a website. Google Search Console (GSC) or Bing Webmaster Tools, if linked to your site, are a good place to start. These are both free tools and let you see how search engines see your site. They also provide a wealth of other information.

Other free tools exist, but you could also look at paid options. If you are looking for additional insight that covers more than just broken links, SEMrush, Ahrefs and Moz are some of the more popular tools.


Check your redirects

What are redirects?

A redirect is a simple function that sends a user from one URL (Uniform Resource Locator) to another. These are usually set up on a website where pages have been removed or changed so that the user (or search engine) does not hit a dead-end – typically a 404 page.

Types of redirects

There are three types of redirect to be aware of (a short configuration sketch follows this list):
301 – A permanent redirect. The page the user is being redirected from no longer exists and will not be coming back.
302 – A 302 now means “Found”. The page is not currently available, but will come back. The user is redirected to the new page, but search engines will keep the old URL indexed.
307 – A temporary redirect. This is the spiritual successor to the 302 response code, but in essence there is no real difference from a user’s perspective.
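
As a sketch of how these are set up in practice, on an Apache server a redirect can be declared in the .htaccess file (the paths and domain below are made up for illustration; most CMSs and SEO plugins offer an equivalent interface):

# Permanent redirect: the old page has gone for good
Redirect 301 /old-service/ https://www.example.co.uk/services/new-service/

# Temporary redirect: the old page will be coming back
Redirect 302 /summer-sale/ https://www.example.co.uk/offers/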

Why do you need redirects?

Redirects are needed to ensure a user’s journey is not interrupted and they can find the desired content on your website.

If no redirects are employed and content has moved (or been removed), a user is likely to hit a 404 “page not found”. 404s occur naturally, but too many 404s created by content disappearing and not being replaced in some way are detrimental to a site’s performance. They will cause users to leave the site, also known as a “bounce”.

You will need to ensure the content a user or search engine is being redirected to is relevant to the page that is no longer accessible.

If this is a “service” page, for example, the user should be directed to a similar “service” page. If this is not possible, they should be directed to a category landing page: a page which summarises all available services, so the user can then make an informed decision as to where to go next.

Review your redirects

When you look at adding redirects to a website, you will also need to check for redirect chains. These need to be eliminated where possible, because link equity starts to be lost with each “hop” after the first redirect.

What are redirect chains?

These happen when multiple redirects are needed to get from one URL to another: one URL redirects to a second page, then another redirect is added from the second page to a third page, and so on.

Here is an example
Page A > Page B > Page C

In this instance, it is best to remove the redirect from “Page A” to “Page B” to break the “chain”, and create a new redirect from “Page A” directly to “Page C”. The existing redirect from “Page B” to “Page C” stays in place.

The update should now look like this
Page A > Page C
Page B > Page C

This ensures any authority from Page A is passed directly to Page C and isn’t lost through the extra hop.
The authority built by Page B will also be passed on to Page C.
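
Continuing the Apache .htaccess sketch from above (placeholder paths and domain again), untangling the chain simply means pointing both old URLs straight at the final destination:

# Before: the chain (Page A > Page B > Page C)
# Redirect 301 /page-a/ https://www.example.co.uk/page-b/
# Redirect 301 /page-b/ https://www.example.co.uk/page-c/

# After: both old URLs redirect directly to Page C
Redirect 301 /page-a/ https://www.example.co.uk/page-c/
Redirect 301 /page-b/ https://www.example.co.uk/page-c/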


Check your structured data

What is structured data?

Structured data comes in a number of forms. These include JSON-LD (JavaScript Object Notation for Linked Data), Microdata and RDFa (Resource Description Framework in Attributes), all of which utilise Schema.org markup.

JSON-LD is the format recommended by both Google and Bing for reading this data.

Structured data adds extra information to a page or website. This can include opening hours, pricing and review ratings, and is used by search engines to provide additional information in search results. This can help users make a more informed choice before they click through to your website.

Extra tip: when creating structured data markup, the code should be added directly to the HTML of your website. Injecting it through Google Tag Manager (GTM), for example, is not recommended, as it may not be picked up when the page is crawled.
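
To illustrate, JSON-LD sits inside a script tag in the page’s HTML, like this (a trimmed-down version of the Organization example shown further below):

<script type="application/ld+json">
{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Fifteen",
  "url": "https://www.fifteendesign.co.uk/"
}
</script>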

Why do you need structured data?

Providing as much information as possible about your business, the content on your website (articles or blogs) and the products/services on offer helps users make an informed choice before they click through to your website.

The important thing to remember is that it can also help with off-site actions too. The structured data on your website can help verify the information in your Google My Business (GMB) listing, for example, providing a link between your GMB page and your website.

You can also increase your presence on the search results page by appearing in the lucrative “Position 0”, by providing answers to user queries. This increases your overall visibility and trust, and can also help build your authority.

Note: there is no guarantee that implementing structured data on your website will improve your search presence, but it will increase your chances. There are a number of factors which determine how and what extra information is shown.

What does structured data look like?

Organization Schema Example

As JSON-LD is the recommended format, the example below shows “Organization” mark-up:

{
"@context": "http://schema.org",
"@type": "Organization",
"name": "Fifteen",
"alternateName": "Fifteen Design",
"url": "https://www.fifteendesign.co.uk/",
"logo": "https://52df751a.delivery.rocketcdn.me/wp-content/themes/fifteendesign/assets/img/fifteen-logo-square.png"
}

This can be enhanced further with contact numbers for the business as well as different departments.
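
As a rough sketch of that enhancement (the phone number and contact type below are placeholders, not real details), the Organization mark-up above could be extended with the contactPoint property:

{
  "@context": "http://schema.org",
  "@type": "Organization",
  "name": "Fifteen",
  "url": "https://www.fifteendesign.co.uk/",
  "contactPoint": [{
    "@type": "ContactPoint",
    "telephone": "+44-115-000-0000",
    "contactType": "customer service"
  }]
}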

LocalBusiness Schema Example

Here is an example of the “LocalBusiness” schema mark-up for Fifteen. This will help users wanting to find the business address and is likely to trigger the GMB listing, based on the user’s query.

{
"@context": "http://schema.org",
"@type": "LocalBusiness",
"name": "Fifteen",
"image": "https://52df751a.delivery.rocketcdn.me/wp-content/themes/fifteendesign/assets/img/fifteen-logo-square.png",
"@id": "https://www.fifteendesign.co.uk/",
"url": "https://www.fifteendesign.co.uk/",
"telephone": "0115 828 1835",
"priceRange": "££-£££",
"address": {
"@type": "PostalAddress",
"streetAddress": "Media House",
"addressLocality": "Nottingham",
"postalCode": "NG9 2RS",
"addressCountry": "GB"
},
"geo": {
"@type": "GeoCoordinates",
"latitude": 52.92794,
"longitude": -1.1978960000000143
},
"openingHoursSpecification": [{
"@type": "OpeningHoursSpecification",
"dayOfWeek": [
"Monday",
"Tuesday",
"Wednesday",
"Thursday"
],
"opens": "09:00",
"closes": "17:30"
},{
"@type": "OpeningHoursSpecification",
"dayOfWeek": "Friday",
"opens": "09:00",
"closes": "17:00"
}],
"sameAs": [
"https://www.facebook.com/FifteenDesign",
"https://twitter.com/fifteenagency",
"https://www.instagram.com/fifteenagency/",
"https://www.youtube.com/user/fifteendesign",
"https://www.linkedin.com/company/fifteen-design"
]
}

This can be enhanced to include a map, alternate business names (so Fifteen could appear for other user queries), multiple business locations and more. Schema structured data can be as flexible as you need it to be.


Create a clean URL structure

What is a clean URL structure?

Clean URLs have many different names, including semantic URLs, RESTful URLs, user-friendly URLs and search engine-friendly URLs, but their function is the same.

They take a more descriptive, easy-to-remember form which is free from parameters, query strings and file extensions, unlike a dirty URL.

This doesn’t refer to user-friendly files like PDFs, Word documents or Excel files, but rather to the extensions of the content files used to build the webpage.

The “slug” is the readable part of the URL that comes after the website’s domain.

Clean URL example

https://www.fifteendesign.co.uk/digital-marketing/seo/
This is descriptive and provides the user (and search engine) with an understanding of where the content sits on the website, as well as the topic of the content on the page.

This also has added advantages from a usability and accessibility point of view.

Dirty URL example

https://www.fifteendesign.co.uk/products?category=12&pid=25
From a user’s perspective, they do not know what content this page holds. Ultimately, both URLs take the user to the same content.

It’s not easily remembered, and it’s not user friendly.

From a search engines point of view, the page will be crawled and indexed just like any other. When a user searches for a relevant query, the page will be displayed.

All things being equal with a competitor, a clean URL is likely to be ranked higher than a dirty URL.

Why are clean URLs important for SEO?

There are a number of reasons to use a clean URL structure for your website, but here are our top 5:

  1. Includes keywords related to the page
  2. The shorter, more descriptive a URL is, the better
  3. Products/services are categorised easily
  4. Build topical authority
  5. Future proof your site


Submitting changes to Bing

Having covered “Submitting changes to Google” earlier, we’re now looking at how to submit changes to Bing.

As briefly mentioned earlier, Bing also provides results for other search engines, including (but not limited to) Yahoo!, AOL, DuckDuckGo and Ecosia. Each is a different avenue for users to find and discover your website’s content, which is why Bing should not be ignored.

How do you submit changes to Bing?

To submit changes you have made on any of your pages to Bing, you will need to ensure your site is verified in Bing Webmaster Tools. This can be done in a similar way to Google, by adding a single line of HTML to your website.

Assuming your website is already verified, navigate to
Configure My Site > Submit URLs

You can add multiple URLs at a time here. Simply paste each URL you want to submit on a new line.
[Screenshot: Submit Multiple URLs to Bing]

Once submitted, Bing will prioritise the crawl and the new pages should appear within Bing’s results pages shortly.
[Screenshot: Submitted URLs to Bing]

Why should you submit changes to Bing?

Submitting pages to Bing, whether new or heavily updated, is a good thing to do, as it ensures the correct content is being served in the SERPs (Search Engine Results Pages).

If the content is out of date, a user who clicks the link in the SERPs and finds that the page is different to what they were expecting is likely to bounce off your site. In Bing’s eyes, this makes it look like the content isn’t relevant to the user’s search, so you are likely to lose rankings.

From a user’s point of view, it could also be damaging to a brand’s reputation, as they will lose trust in your site as a source of reliable information.

So, rather than waiting for Bing to recrawl your pages, it’s best to submit them yourself to keep your content up to date.

If you need any assistance with your SEO strategy, from Local to technical, outreach and link-building, get in touch with Fifteen’s experts today.
