SEO is a complex discipline that requires careful consideration of various factors and procedures to get right. There are many ways to go wrong and hurt the organic potential of a website.
In this post, we are going to cover the most common SEO issues and how to avoid falling victim to them. Likewise, if you are already struggling with such issues, we can help you fix them successfully.
First, let’s start with the basics.
What Are SEO Issues?
An SEO issue is any potential problem that could hurt a website's performance in the search engines. To keep our sites ranking high in Google (and other search engines), we should be aware of a few common mistakes.
Broken Internal & External Links
The more pages a website has, the higher the chances of broken links. When a site keeps growing and producing more content, there is a real danger of unnoticed 404 pages. Whilst it is good to develop and add new features & landing pages, we always have to pay attention to issues with internal & external links.
We, as users, do not fancy reaching a page that doesn’t work, do we? This interrupts our flow and often results in leaving the website straight away.
Visitors could perceive the webpage as untrustworthy. As we know, Google is really good at identifying users' perceptions of a website/page. Thus, if users are unhappy, search engines will be unhappy too.
What's more, broken pages waste precious crawl budget that could otherwise be used purposefully. We don't want the bots spending time and resources on pages that are not accessible to users.
The good news is that we can easily identify broken internal & external links thanks to different SEO tools. Naturally, if we have a smaller website with just a few pages, we will probably know them by heart and it won’t be that difficult to make sure that everything is working well.
However, as we develop our websites, it becomes impossible and needless to do this manually.
Tip: run scheduled scans once a week or month and if you identify any broken links, dig deeper and try to fix them accordingly.
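Scheduled scans like this can also be scripted. Below is a minimal sketch of a link audit in Python: it pulls every href out of a page with the standard-library HTML parser and flags any link whose status code is 4xx or 5xx. The sample markup and the stubbed status lookup are illustrative only; in a real audit, status_of would issue live HTTP requests.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collects every href found in <a> tags on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def find_broken_links(html, status_of):
    """Return links whose HTTP status (looked up via status_of) is 4xx/5xx."""
    extractor = LinkExtractor()
    extractor.feed(html)
    return [url for url in extractor.links if status_of(url) >= 400]

# Stubbed status lookup; a real audit would issue HEAD/GET requests here.
statuses = {"/about": 200, "/old-page": 404, "https://example.com": 200}
page = '<a href="/about">About</a> <a href="/old-page">Old</a> <a href="https://example.com">Ext</a>'
broken = find_broken_links(page, lambda url: statuses.get(url, 200))
print(broken)
```

Running the same script on a schedule and diffing the output against the previous run gives you a simple early-warning system for new 404s.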
Duplicate Content

Duplicate content is one of the oldest and most common issues known amongst digital marketers. The main concern is that when we serve similar pages, search engines such as Google may struggle to identify and rank the correct URLs.
As a result, we (as SEOs) may suffer from traffic loss or just not get the full benefit from our websites.
As search professionals, we need to ensure that our content is unique. To make the life of search engines easier, we should avoid a few common pitfalls.
Duplicate content often occurs when different versions of the same page are available to users and bots. For example, having both the http and https versions of a website load without the proper redirects is a common problem.
To avoid this issue, we need the correct http to https redirects in place. We can easily test that by typing http://oursitename.com in the browser. If the https protocol is enabled and correctly set, the browser should redirect us to https://oursitename.com.
In the same way, the non-www version of a website should redirect to the www version, if that’s the main version of our website and vice versa.
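A quick way to sanity-check this setup is to verify that every variant of the homepage collapses to a single destination. The Python sketch below (oursitename.com is a placeholder, and www is assumed to be the main version) computes the target each variant should redirect to:

```python
from urllib.parse import urlsplit, urlunsplit

def canonical_target(url, prefer_www=True):
    """Given any variant of a site URL, return the single destination it
    should redirect to: https, plus the www (or non-www) host we chose."""
    parts = urlsplit(url)
    host = parts.netloc
    if prefer_www and not host.startswith("www."):
        host = "www." + host
    if not prefer_www and host.startswith("www."):
        host = host[len("www."):]
    return urlunsplit(("https", host, parts.path or "/", parts.query, ""))

# All four variants should collapse to one URL:
variants = [
    "http://oursitename.com",
    "http://www.oursitename.com",
    "https://oursitename.com",
    "https://www.oursitename.com",
]
targets = {canonical_target(v) for v in variants}
print(targets)  # {'https://www.oursitename.com/'}
```

Comparing this expected target against the actual Location header returned by a live request for each variant tells you whether the redirect chain is set correctly.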
Parameters in URLs are another common trap that causes duplicate URLs. Content management systems often add sorting parameters (for size, color, model, etc.) that could result in having numerous pages with the same content.
Still, this is not something to worry about provided that we implement proper canonicals and noindex attributes when needed.
Note: canonical tags are a popular way to tell Google which of a set of similar URLs to index and count as the main one. Another option is the noindex attribute, for when parameters produce different URLs with the same or similar content.
Title Tag Fails
Title tags are amongst the most important on-site SEO elements. They inform search engines of the main topic of a page. Title tags also show in the search results at the top of each organic listing. This makes them one of the key elements and often a deciding factor for users to click on a specific result.
Taking the time to set them correctly is a crucial SEO task. However, sometimes it gets neglected which results in low click-through rates.
The main issues with title tags are:
- missing the title tags altogether
In this case, Google will set a title tag based on its understanding of what our page is about. Usually, it copes with this task well, but still, it’s a missed SEO opportunity.
It’s good to set title tags ourselves, especially for our most important pages.
- too long/short title tags
Using short title tags is a missed opportunity to attract potential users and get them to click on our results. The usual practice is to aim for the 55-65 characters shown in the search results.
Conversely, title tags that are too long (over 65 characters) may get truncated and not be shown fully. This is another missed opportunity to show our whole message to the online world.
When both the title and the meta description are truncated, the snippet does not provide the best user experience.
- duplicate title tags
It is common for e-commerce websites to have identical title tags. This is often the case with other types of sites too, unfortunately. Duplicate title tags make it harder for webpages to stand out and differentiate themselves from similar pages.
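All three title tag problems are easy to catch in bulk once you have crawled the titles. Here is a minimal Python sketch; the sample URLs and the 30/65-character bounds are illustrative only, since Google actually truncates by pixel width rather than character count.

```python
from collections import Counter

def audit_titles(titles_by_url, min_len=30, max_len=65):
    """Flag missing, too short, too long, and duplicate title tags.
    Character bounds are a rough proxy for Google's pixel-based cutoff."""
    issues = {}
    counts = Counter(t for t in titles_by_url.values() if t)
    for url, title in titles_by_url.items():
        problems = []
        if not title:
            problems.append("missing")
        else:
            if len(title) < min_len:
                problems.append("too short")
            if len(title) > max_len:
                problems.append("too long")
            if counts[title] > 1:
                problems.append("duplicate")
        if problems:
            issues[url] = problems
    return issues

# Hypothetical crawl output:
pages = {
    "/": "Acme Widgets - Hand-built Widgets Shipped Worldwide",
    "/shop": "Shop",
    "/red-widget": "Buy Widgets Online | Acme",
    "/blue-widget": "Buy Widgets Online | Acme",
    "/contact": "",
}
print(audit_titles(pages))
```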
Robots.txt Mistakes

Robots.txt is a relatively simple but useful tool that gives important information and instructions to search engine crawlers. It is located in the root directory of the website and uses a plain text format.
It can prevent certain sections of our website from being crawled, so bots don’t have to waste precious resources. However, there are some potential mistakes we should be aware of.
Granting Access to Staging and Dev Sites or Admin Panels
There are several ways to stop search engines from reaching test and under-development versions of your domain. One of them is a command in your robots.txt file, although there are more effective options (e.g. HTTP authentication).
One of the most common blocking instructions for WP sites is to exclude the wp-admin panel folder. This is how it looks:
User-agent: *
Disallow: /wp-admin/
User-agent: * means that the instruction applies to all bots (Google bot, Bing bot, etc.), and the second line tells them we want to stop them from crawling the /wp-admin/ folder and everything in it.
Blocking Important URLs from Crawling
Similar to the previous command, we don’t want to disallow any important folders on our website from being accessed by bots. For example, a common mistake could be:
User-agent: *
Disallow: /example-important-directory/
Or sometimes we might even have this:
User-agent: *
Disallow: /
which basically means the whole website is disallowed for all bots. This is usually used to keep the website "closed" during initial tests. However, sometimes it gets neglected and DEVs or SEOs forget to remove it when the website becomes available to the public, including search engines and users.
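Python's standard library ships a robots.txt parser, which makes it easy to test what a given set of rules actually blocks before deploying them. A small sketch (oursitename.com is a placeholder):

```python
from urllib.robotparser import RobotFileParser

# The two-line rule from above, parsed as a crawler would read it.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /wp-admin/"])

print(rp.can_fetch("*", "https://oursitename.com/wp-admin/options.php"))  # False
print(rp.can_fetch("*", "https://oursitename.com/blog/some-post/"))       # True

# The dangerous catch-all: "Disallow: /" blocks the whole site.
rp_all = RobotFileParser()
rp_all.parse(["User-agent: *", "Disallow: /"])
print(rp_all.can_fetch("*", "https://oursitename.com/"))  # False
```

Checking your most important URLs against the production robots.txt like this is a cheap safeguard against accidentally blocking them.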
Not Including a Link to the Sitemap File
Robots.txt is a great way to make it easier for search engines to find the sitemap file of a website. Although it is not a huge error if we miss it (especially for smaller websites), it’s still a quick and useful thing to do.
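The reference is just one extra line anywhere in the robots.txt file, for example (the URL is a placeholder for your own sitemap location):

```
Sitemap: https://oursitename.com/sitemap.xml
```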
Meta Robots Tag Disasters
Meta robots is one of the most important tags and directives as a whole when it comes to SEO. It’s an effective way for site owners to inform search engines that a certain page should not be followed or indexed.
There are various use cases and configurations, but the most popular (and often dangerous) is the noindex tag. It “lives” in the head section of the HTML and looks like this:
<meta name="robots" content="noindex,follow" />
Basically, it means that we discourage search engines from indexing our content in the search results, but we would like them to follow the links on that page. We can solve various potential issues by preventing search engines from indexing content. For example:
- pages with thin content that do not provide any real value for users
- checkout pages on eCommerce websites
- URLs containing sensitive information
- dev/staging pages not ready to be launched for the public
The most common issue with the noindex directive is forgetting to remove it from an important page (or the whole website) when it is officially launched to the online world. Let's say DEVs have been working on a site for a long time, testing various things, and then someone simply forgets to remove the tag once it has been launched.
Undoubtedly, this is one of the first (and simplest) checks to do if you wonder why a certain website or specific section is not bringing any organic traffic.
You just need to open the source code and search (Ctrl+F) for the "robots" tag. If you notice the "noindex" directive, then you're in trouble! The good news is that you now know the reason and how to fix it easily.
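The same check can be automated across many pages. A minimal Python sketch follows; it uses regexes on purpose, so it also catches variations in attribute order and spacing, and is meant as a quick audit heuristic rather than a full HTML parser:

```python
import re

def has_noindex(html):
    """Return True if the page carries a meta robots noindex directive,
    e.g. <meta name="robots" content="noindex,follow" />."""
    for tag in re.findall(r"<meta[^>]+>", html, flags=re.IGNORECASE):
        has_robots_name = re.search(r'name\s*=\s*["\']robots["\']', tag, re.IGNORECASE)
        has_noindex_content = re.search(r'content\s*=\s*["\'][^"\']*noindex', tag, re.IGNORECASE)
        if has_robots_name and has_noindex_content:
            return True
    return False

blocked = '<head><meta name="robots" content="noindex,follow" /></head>'
fine = '<head><meta name="robots" content="index,follow" /></head>'
print(has_noindex(blocked))  # True
print(has_noindex(fine))     # False
```

Feeding this function the fetched HTML of every important URL on launch day quickly surfaces any leftover noindex directives.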
Canonicals Go Wrong
The canonical tag is a powerful weapon in the arsenal of SEOs. It is often used to avoid potential SEO problems with similar content existing on different URLs.
For example, it is very common on e-commerce websites, where different parameters in the URLs could potentially cause duplicate content issues.
With the canonical, we simply tell search engines which is the “main” / “original” page, so that all the other versions do not create problems. Moreover, Google will know which page to prioritize and show in the search results.
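For reference, the canonical tag lives in the head section of the HTML, much like the meta robots tag shown earlier (the URL here is a placeholder):

```html
<link rel="canonical" href="https://www.example.com/main-page/" />
```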
There are a couple of issues that may occur here. One of them, as already mentioned, is not to have a canonical set when you have different URLs with the same content.
In case canonical is set, here are the most common dangers to be aware of:
- canonical URL points to a URL with noindex tag
- canonical URL points to a URL that is returning a 4xx or 5xx status code
- canonical URL points to the non-secure http version of a page (when we also have the secure version available)
- not a self-referencing canonical (the so-called canonicalized URL)
Note: this could be fine in case it is intentional, although in most cases we would want self-referencing canonicals
- canonical tag is empty or points to an invalid page
Hreflang Issues

Hreflang annotations are link references in the HTML code of a page that let us specify the alternative URLs assigned to a certain language or region. They are especially important for websites operating in different countries and serving content in different languages.
The main idea behind the hreflang references is to make sure we show the correct website version according to the users and their country/language.
For example, for the Spanish visitors, we want to provide the /es version of a website/page, for the German visitors, it should be /de, and so on.
Essentially, we are informing Google which page and in which language it should be shown to the users, depending on their language settings and location.
Hreflang annotations look like this:
<link rel="alternate" href="https://www.example.com/es/" hreflang="es" />
The most common hreflang issues include:
- missing a return link
Hreflang annotations must be reciprocal: when page X links to page Y as an alternate, page Y must link back to page X. In practice, every page referenced in a set of hreflang annotations should carry that same set of annotations.
- detected language does not match specified language
Sometimes the language specified in the hreflang tags will differ from the actual language of the page content.
- wrong ISO codes
A popular mistake is using “en-UK” instead of “en-GB” when targeting English-speaking visitors in the United Kingdom. The syntax is very important, too. Even though many websites use underscores to specify languages in their URLs, only dashes work for hreflangs.
- missing a self-referencing tag
Adding a self-referencing hreflang tag is a must for ensuring that international sites are set correctly and are easy to understand by search engines.
- using relative URLs instead of absolute
Another common mistake with hreflangs. We should avoid relative addresses that provide only a path and always use the full absolute URL.
Correct:
<link rel="alternate" href="https://www.example.com/es/spanish-post" hreflang="es" />
Incorrect:
<link rel="alternate" href="es/spanish-post" hreflang="es" />
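The return-link rule in particular is easy to verify programmatically once you have collected each page's hreflang annotations. A minimal Python sketch (the URLs are placeholders) that reports every alternate failing to link back:

```python
def missing_return_links(hreflangs):
    """hreflangs maps each URL to its {lang: alternate_url} annotations.
    Returns (page, alternate) pairs where the alternate does not link back."""
    problems = []
    for page, alts in hreflangs.items():
        for lang, alt_url in alts.items():
            if alt_url == page:
                continue  # self-referencing tag, nothing to check
            back_links = hreflangs.get(alt_url, {})
            if page not in back_links.values():
                problems.append((page, alt_url))
    return problems

# Hypothetical two-language site where /es/ forgets its return link:
site = {
    "https://www.example.com/en/": {
        "en": "https://www.example.com/en/",
        "es": "https://www.example.com/es/",
    },
    "https://www.example.com/es/": {
        "es": "https://www.example.com/es/",
    },
}
print(missing_return_links(site))
```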
Here is one useful tool for identifying hreflang issues: https://technicalseo.com/tools/hreflang/
JavaScript Rendering Issues

Search engines can render JavaScript, but not always reliably. Therefore, it's recommended to spend extra time inspecting our websites to see whether all the important information shows properly once the page is rendered.
For example, a bad JS implementation could result in Google not reading the meta titles and descriptions we set up, which then creates issues with our CTR in the search results.
Mobile Usability Issues
It will probably not surprise anyone if we say that the mobile usability and performance of a website are two of the most important SEO factors nowadays.
It has been a few years since Google switched to mobile-first indexing, taking the mobile version of a webpage into account with priority.
One of the main issues, seen more often back in the day, was showing different content to desktop and mobile users. This is a very dangerous practice and could result in lower organic rankings.
Some of the main factors that could affect the website performance include:
- a large number of plugins
Try to stay away from installing a large number of plugins. The more plugins you have, the heavier and clumsier your website becomes.
What's more, plugins are a potential entry point for hackers (when not updated on time), so they might also present a security risk.
- unoptimized images
Images are one of the most common factors affecting page speed and the overall performance of a website. No one enjoys slow-loading websites, so we always recommend trying to keep images under 100 KB in size.
- hosting services
Take into account that the server where you host your website is the base on which everything will be built. Therefore, it's better not to go for the cheapest solution, saving yourself trouble in the future. It's worth investing a bit more, knowing that in return you will get a reliable, secure and fast hosting service.
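For the image guideline above, a quick audit script helps catch heavy files before they ship. A Python sketch, assuming images live somewhere under a local directory; the extension list and the 100 KB threshold are the illustrative values from above:

```python
import os

MAX_IMAGE_BYTES = 100 * 1024  # the ~100 KB guideline
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp"}

def oversized_images(root):
    """Walk a directory tree and list image files above the size guideline."""
    flagged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if os.path.splitext(name)[1].lower() in IMAGE_EXTENSIONS:
                path = os.path.join(dirpath, name)
                if os.path.getsize(path) > MAX_IMAGE_BYTES:
                    flagged.append(path)
    return flagged

# Usage, with a placeholder path:
# print(oversized_images("static/images"))
```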
To Sum Up
As we have already seen, there are numerous ways to go wrong when it comes to SEO. It's also worth noting that these are just a few of the most popular and common technical SEO issues we might come across. There are many more SEO nightmares that could happen.
Hopefully, we have helped you get a better understanding of the main SEO issues and, more importantly, how to avoid or fix them.