
Top 9 SEO Issues and How to Avoid or Fix Them


SEO is a complex discipline that requires careful consideration of various factors and procedures to be implemented correctly. There are many ways to go wrong and hurt the organic potential of a website.

In this post, we are going to cover the most common SEO issues, how to avoid falling victim to them and, if you are already struggling with them, how to fix them successfully.

 First, let’s start with the basics.

What Are SEO Issues?

An SEO issue is anything on a website that could hurt its performance in the search engines. To keep a site properly optimized and ranking well in Google (and other search engines), we should be aware of a few common mistakes.

Broken Internal & External Links

The more pages a website has, the higher the chance of broken links. As a site keeps growing and producing more content, unnoticed 404 pages become more likely. While it is good to develop and add new features and landing pages, we always have to pay attention to issues with internal and external links.

We, as users, do not fancy reaching a page that doesn’t work, do we? This interrupts our flow and often results in leaving the website straight away.

page not found 404
this is how a 404 (page not found) error could look

Visitors could perceive the webpage as untrustworthy. As we know, Google is really good at picking up on users’ perception of a website or page, so if users are unhappy, search engines will be unhappy too.

What’s more, broken pages waste precious crawl budget that could otherwise be used purposefully. We don’t want the bots spending time and resources on pages that are not accessible to users.

The good news is that we can easily identify broken internal & external links thanks to different SEO tools. Naturally, if we have a smaller website with just a few pages, we will probably know them by heart and it won’t be that difficult to make sure that everything is working well.

However, as our websites grow, checking this manually becomes impractical.

Tip: run scheduled scans once a week or once a month, and if you identify any broken links, dig deeper and fix them accordingly.

SEOcrawl tool: how the crawler analyzes a page for SEO issues
this is how our own SEOcrawl crawler analyzes a page for the most common SEO issues, including 404 status codes

Duplicate Content

Duplicate content is one of the oldest and most common issues known amongst digital marketers. The main concern is that when we serve similar pages, search engines, including Google, may struggle to identify and rank the correct URLs.

As a result, we may suffer traffic loss or simply not get the full benefit from our websites.

As SEO professionals, we need to ensure that our content is unique, and to make the life of search engines easier, we should avoid a few common pitfalls.

Duplicate content often occurs when different versions of the same page are available to users and bots. For example, having both the http and https versions of a website load without the proper redirects is a common problem.

In order to avoid this potential issue, we need to have the correct http to https redirects set. We can easily test that by typing http://oursitename.com in the browser. If our https protocol is enabled and correctly set, the browser should redirect us to https://oursitename.com.
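Just as a sketch, assuming the site runs on Apache with mod_rewrite enabled (your server setup may differ), the redirect could be configured like this:

# send every http request to the https version with a permanent (301) redirect
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^(.*)$ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]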

an example of a website that doesn’t redirect its insecure http version to https
this is the message users get from a website that doesn’t redirect its http insecure version to https

In the same way, the non-www version of a website should redirect to the www version if that’s the main version of our website (and vice versa).

non-www to www version of a website redirect for chess.com
an example of a website which 301 redirects its non-www version to the www version
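Again assuming an Apache setup with mod_rewrite (just a sketch, adapt it to your own server and to whichever version is your main one), the non-www to www redirect could look like this:

# send non-www requests to the www host with a 301 redirect
RewriteEngine On
RewriteCond %{HTTP_HOST} ^oursitename\.com$ [NC]
RewriteRule ^(.*)$ https://www.oursitename.com/$1 [L,R=301]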

Parameters in URLs are another common trap that causes duplicate content. Content management systems often add sorting parameters (for size, color, model, etc.) that can result in numerous pages with the same content.

parameters in URL example from Amazon
an example from Amazon with various parameters in the URL

Still, this is not something to worry about, provided that we implement proper canonical tags and noindex attributes when needed.

Note: canonical tags are a popular way to tell Google which URL from a set of similar ones to index and count as the main one. Another option is to use a noindex attribute when parameters create different URLs with the same or similar content.
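For illustration (the URLs below are placeholders), a parameterized page could point to its clean version with a canonical tag in the head section:

<!-- on https://www.example.com/shoes/?color=red&size=42 -->
<link rel="canonical" href="https://www.example.com/shoes/" />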

Title Tag Fails

Title tags are amongst the most important on-site SEO elements. They inform search engines what the main topic of a page is. Title tags also appear at the top of each organic listing in the search results, which makes them a key element and often a deciding factor in whether users click on a specific result.
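For reference, the title tag lives in the head section of a page and is as simple as this (the text below is just an example):

<title>Top 9 SEO Issues and How to Avoid or Fix Them | SEOcrawl</title>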

Taking the time to set them correctly is a crucial SEO task. However, it sometimes gets neglected, which results in low click-through rates.

The main issues with title tags are:

  • missing the title tags altogether

In this case, Google will set a title tag based on its understanding of what our page is about. Usually, it copes with this task well, but still, it’s a missed SEO opportunity.

 It’s good to set title tags ourselves, especially for our most important pages.

  • too long/short title tags

Using short title tags is a missed opportunity to attract potential users and get them to click on our results. The usual practice is to aim for between 55 and 65 characters, which is roughly what gets shown in the search results.

Conversely, title tags that are too long (over 65 characters) may get truncated and not shown in full. This is another missed opportunity to show our whole message to the online world.

truncated title and description in Google search results
an example of truncated titles & descriptions in the search results

As we can see here, both the title and the meta description are truncated and thus do not provide the best user experience.

  • duplicate title tags

It is common practice for e-commerce websites to have identical title tags, and unfortunately this happens with other types of sites too. Duplicate title tags make it harder for webpages to stand out and differentiate themselves from similar pages.

Screaming Frog feature to find duplicate title tags
duplicate title tags discovery feature in Screaming Frog

Robots.txt Issues

Robots.txt is a relatively simple but useful tool that gives important information and instructions to search engine crawlers. It is located in the root directory of a website and uses a plain text format.

It can prevent certain sections of our website from being crawled, so bots don’t have to waste precious resources. However, there are some potential mistakes we should be aware of.

Granting Access to Staging and Dev Sites or Admin Panels

There are several ways to stop search engines from reaching test or under-development versions of your domain. One of them is to use a command in your robots.txt file, although there are more effective options (e.g. HTTP authentication).

One of the most common blocking instructions for WP sites is to exclude the wp-admin panel folder. This is how it looks:

User-agent: *
Disallow: /wp-admin/

User-agent: * means that the instruction applies to all bots (Googlebot, Bingbot, etc.), and the second line tells them we want to stop them from crawling the /wp-admin/ folder and everything inside it.
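As a side note, many WordPress sites also keep admin-ajax.php crawlable because some front-end features rely on it; the robots.txt that WordPress generates by default adds an extra Allow line for it:

User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php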

Blocking Important URLs from Crawling

On the other hand, a similar command can accidentally block important folders on our website from being accessed by bots. For example, a common mistake could be:

User-agent: *
Disallow: /example-important-directory/

Or sometimes we might even have this:

User-agent: *
Disallow: /

which basically disallows the whole website for all bots. This is usually used during initial tests, before “opening” the website to the world. However, it sometimes gets overlooked, and developers or SEOs forget to remove it when the website becomes available to the public, including search engines and users.

Not Including a Link to the Sitemap File

Robots.txt is a great way to make it easier for search engines to find the sitemap file of a website. Although it is not a huge error if we miss it (especially for smaller websites), it’s still a quick and useful thing to do.

Sitemap file address included in the robots.txt file
this is how the sitemap link is usually included in the robots.txt file
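In plain text, it is just one extra line anywhere in the file (the URL below is a placeholder):

Sitemap: https://www.example.com/sitemap.xml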

Meta Robots Tag Disasters

The meta robots tag is one of the most important SEO directives. It’s an effective way for site owners to inform search engines that a certain page should not be followed or indexed.

There are various use cases and configurations, but the most popular (and often dangerous) is the noindex tag. It “lives” in the head section of the HTML and looks like this:

<meta name="robots" content="noindex,follow" /> 

Basically, it tells search engines not to include our content in the search results, while still allowing them to follow the links on that page. Preventing search engines from indexing certain content can solve various potential issues. For example:

  • pages with thin content that do not provide any real value for users
  • checkout pages on eCommerce websites
  • URLs containing sensitive information
  • dev/staging pages not ready to be launched for the public

The most common issue with the noindex directive is forgetting to remove it from an important page (or the whole website) when it is ready to be officially launched to the online world. Let’s say developers have been working on it for a long time, testing various things, and then someone simply forgets to remove it once it has been launched.

Undoubtedly, this is one of the first (and simplest) checks to do if you wonder why a certain website or specific section is not bringing any organic traffic.

You just need to open the source code and search (Ctrl+F) for the “robots” tag. If you notice the “noindex” directive, then you’re in trouble! The good news, though, is that you now know the reason and how to fix it easily.

meta robots noindex in the source code
meta robots noindex in the source code of a website

Canonicals Go Wrong

The canonical tag is a powerful weapon in the arsenal of SEOs. It is often used to avoid potential SEO problems with similar content existing on different URLs. 

For example, it is very common in e-commerce stores, where different parameters in the URLs could potentially cause duplicate content issues.

With the canonical, we simply tell search engines which is the “main” / “original” page, so that all the other versions do not create problems. Moreover, Google will know which page to prioritize and show in the search results.

There are a couple of issues that may occur here. One of them, as already mentioned, is not having a canonical set when you have different URLs with the same content.

missing canonical reported by the SEOcrawl crawler
this is how the SEOcrawl crawler shows when there is a canonical issue (missing in this case)

If a canonical is set, here are the most common dangers to be aware of:

  • canonical URL points to a URL with noindex tag
  • canonical URL points to a URL that is returning a 4xx or 5xx status code
  • canonical URL points to the non-secure http version of a page (when we also have the secure version available)
  • not self-referencing canonical (the so-called canonicalized URL)

Note: this could be fine if it is intentional, although in most cases we would want self-referencing canonicals (see the example after this list)

  • canonical tag is empty or points to an invalid page
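To make the self-referencing point concrete, a healthy page simply points its canonical to its own address (the URL below is a placeholder):

<!-- on https://www.example.com/blue-shoes/ -->
<link rel="canonical" href="https://www.example.com/blue-shoes/" />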

Hreflang Issues

Hreflang annotations are link references in the HTML code of a page that let us specify alternative URLs assigned to a certain language or region. They are especially important for websites operating in different countries and serving content in different languages.

SEOcrawler tool shows hreflang inspection and insights for seoalive.com domain
The SEOcrawler shows Hreflang data and insights

The main idea behind hreflang references is to make sure we show the correct website version to users according to their country/language.

For example, for Spanish visitors we want to provide the /es version of a website/page; for German visitors it should be /de, and so on.

Essentially, we are informing Google which page and in which language it should be shown to the users, depending on their language settings and location.

Hreflang annotations look like this:

<link rel="alternate" href="https://www.example.com/es/" hreflang="es" />

The most common hreflang issues include:

  • missing a return link

When page X references page Y with an hreflang tag, page Y must reference page X back. Basically, the same set of hreflang annotations should appear on every page it references, as shown in the example after this list.

  • detected language does not match specified language

Sometimes the language specified in the hreflang tags will differ from the actual language of the page content.

  • wrong ISO codes

 A popular mistake is using “en-UK” instead of “en-GB” when targeting English-speaking visitors in the United Kingdom. The syntax is very important, too. Even though many websites use underscores to specify languages in their URLs, only dashes work for hreflangs.

  • missing a self-referencing tag

Adding a self-referencing hreflang tag is a must for ensuring that international sites are set up correctly and are easy for search engines to understand.

  • using relative URLs instead of absolute

This is another common mistake with hreflangs. We should avoid relative addresses that provide only a path and always use the full absolute URL.

Correct:

<link rel="alternate" href="https://www.example.com/es/spanish-post" hreflang="es" /> 

Incorrect:

<link rel="alternate" href="es/spanish-post" hreflang="es" />

Here is one useful tool for identifying hreflang issues: https://technicalseo.com/tools/hreflang/

JavaScript Dangers

Although Google confirms that JavaScript can be used without causing SEO issues, we should be careful with it. Developers often use JS to load important content and links, and this can put us in a situation where search engines are unable to crawl and understand the content correctly.

Therefore, it’s recommended to spend extra time inspecting our websites to check whether all the important information shows up properly.

For example, bad JS implementation could result in Google not reading the meta titles and descriptions we set up, which then creates issues with our CTR in the search results. 

SEOcrawl crawler reports a missing title tag from a page
this is how our own SEO crawler reports missing title tags
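As an illustration only (not a real site), this is the kind of pattern worth double-checking: a title that exists only after JavaScript runs and is therefore empty in the raw HTML:

<head>
  <title></title>
  <script>
    // the title is injected client-side, so crawlers that don't render JS see an empty tag
    document.title = "Product name | Example Shop";
  </script>
</head>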

That’s why it is crucial to be aware of how Google interprets our JavaScript content and whether it is able to properly crawl and index the information.

Mobile Usability Issues

It will probably not surprise anyone if we say that the mobile usability and performance of a website are two of the most important SEO factors nowadays. 

It has been a few years since Google switched to mobile-first indexing, prioritizing the mobile version of a webpage.

One of the main issues seen more often back in the day was showing different content to desktop and mobile users. This is a very dangerous practice and could result in lower organic rankings.

Some of the main factors that could affect website performance include:

  • a large number of plugins

Try to stay away from installing a large number of plugins. The more plugins you have, the heavier and clumsier your website becomes.

What’s more, plugins are a potential entry point for hackers (when not updated on time), so they might also present a security risk.

  • unoptimized images

Images are among the most common factors that affect page speed and the overall performance of a website. No one enjoys a slow-loading website, so we always recommend trying to keep images under 100 KB in size; one common way to serve appropriately sized images is shown in the markup sketch after this list.

Page Speed Insights tool from Google with recommendations to properly size images
Google’s Page Speed Insights recommendations to properly size images
  • hosting services

Take into account that the server where you host your website is the base on which everything else is built. Therefore, it’s better not to go for the cheapest solution and save yourself trouble in the future. It’s worth investing a bit more, knowing that in return you will get a reliable, secure and fast hosting service.
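Going back to image sizing (the file names below are placeholders), one common way to avoid shipping oversized images is srcset/sizes markup, so browsers can pick an appropriately sized file:

<img src="product-480.jpg"
     srcset="product-480.jpg 480w, product-960.jpg 960w"
     sizes="(max-width: 600px) 480px, 960px"
     alt="product photo" />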

To Sum Up

As we have already seen, there are numerous ways to go wrong when it comes to SEO. It’s also worth noting that these are just a few of the most popular and common technical SEO issues we might come across. There are many more SEO nightmares that could happen.

Hopefully, we have helped you get a better understanding of the main SEO issues and, what’s even more important, how to avoid or fix them.

 Good luck!
