Five reasons why Google isn't indexing all of your web pages

by Admin


06 Sept


Article by Axandra
SEO software

Although you might not be aware of it, some of your web pages could be blocking Google. If Google cannot access all of your web pages, you're losing visitors and sales. Here are five reasons why Google cannot access your pages:


1. Errors in the robots.txt file of your website keep Google away

The disallow directive of the robots.txt file is an easy way to exclude single files or whole directories from indexing. To exclude individual files, add this to your robots.txt file:

User-agent: *
Disallow: /directory/name-of-file.html

To exclude whole directories, use this:

User-agent: *
Disallow: /first-directory/
Disallow: /second-directory/

If your website has a robots.txt file, double-check it to make sure that you do not exclude directories that you want to see in Google's search results.

Note that your website visitors can still see the pages that you exclude in the robots.txt file. Check your website with the website audit tool in SEOprofiler to find out if there are any issues with the robots.txt file.
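
If you want to spot-check a URL yourself, Python's standard library ships a robots.txt parser. The following is a minimal sketch; the domain and the page path are placeholders that you would replace with your own:

# Check whether a URL is blocked for Googlebot by the site's robots.txt.
from urllib import robotparser

parser = robotparser.RobotFileParser()
parser.set_url("https://www.example.com/robots.txt")
parser.read()

url = "https://www.example.com/first-directory/page.html"
if parser.can_fetch("Googlebot", url):
    print("Googlebot may crawl", url)
else:
    print("robots.txt blocks Googlebot from", url)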

2. Your pages use the meta robots noindex tag

The meta robots noindex tag enables you to tell search engine robots that a particular page should not be indexed. To exclude a web page from the search results, add the following code to the <head> section of the page:

<meta name="robots" content="noindex, nofollow">

In this case, search engines won't index the page and they also won't follow the links on the page. If you want search engines to follow the links on the page, use this tag:

<meta name="robots" content="noindex, follow">

The page won't appear on Google's result pages then, but the links will be followed. If you want to make sure that Google indexes all of your pages, remove this tag.

The meta robots noindex tag only influences search engine robots. Regular visitors of your website can still see the pages. The website audit tool in SEOprofiler will also inform you about issues with the meta robots noindex tag.
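
To check a page for this tag without opening its source by hand, you can run a small script. The sketch below uses only Python's standard library; the URL is a placeholder:

# Download a page and report whether its meta robots tag contains "noindex".
from html.parser import HTMLParser
from urllib.request import urlopen


class MetaRobotsParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.robots_content = None

    def handle_starttag(self, tag, attrs):
        attributes = dict(attrs)
        if tag == "meta" and (attributes.get("name") or "").lower() == "robots":
            self.robots_content = attributes.get("content") or ""


html = urlopen("https://www.example.com/page.html").read().decode("utf-8", "replace")
meta_parser = MetaRobotsParser()
meta_parser.feed(html)

if meta_parser.robots_content and "noindex" in meta_parser.robots_content.lower():
    print("This page asks search engines not to index it:", meta_parser.robots_content)
else:
    print("No noindex directive found on this page.")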

3. Your pages send the wrong HTTP status code

The server header status code enables you to send website visitors and search engine robots to different places on your website. A web page usually has a "200 OK" status code. For example, you can use these server status codes:

  • 301 moved permanently: this request and all future requests should be sent to a new URL.
  • 403 forbidden: the server understood the request but refuses to fulfill it.

For search engine optimization purposes, use a 301 redirect if you want to make sure that visitors to old pages are redirected to the new pages on your website.

The website audit tool in SEOprofiler shows the different status codes that are used by your website and it also highlights pages with problematic status codes.
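
You can also check the raw status codes yourself. This is a minimal sketch that sends HEAD requests and prints what the server returns, without following redirects; the host name and paths are placeholders:

# Print the HTTP status code that the server returns for each path.
from http.client import HTTPSConnection

host = "www.example.com"
for path in ["/", "/old-page.html", "/members-only/"]:
    connection = HTTPSConnection(host, timeout=10)
    connection.request("HEAD", path)
    response = connection.getresponse()
    print(path, response.status, response.reason)
    if 300 <= response.status < 400:
        print("  redirect target:", response.getheader("Location"))
    connection.close()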

4. Your pages are password protected

If you password-protect your pages, only visitors who know the password will be able to view the content. Search engine robots won't be able to access these pages either.

Password-protected pages can also have a negative influence on the user experience, so test this setup thoroughly.
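
A quick way to see what a robot gets is to request a protected URL without logging in. The sketch below assumes a placeholder URL behind HTTP authentication; a protected page typically answers with a 401 or 403 error instead of its content:

# Request a password-protected URL without credentials, as a robot would.
from urllib.error import HTTPError
from urllib.request import urlopen

url = "https://www.example.com/members/report.html"
try:
    body = urlopen(url, timeout=10).read()
    print("Fetched", len(body), "bytes - the page is publicly accessible.")
except HTTPError as error:
    print("Blocked with status", error.code, "- search engine robots get the same answer.")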

5. Your pages require cookies or JavaScript

Cookies and JavaScript can also keep search engine robots away from your door. For example, you can hide content by making it only accessible to user agents that accept cookies.

Your web pages might also rely on complex JavaScript to display their content. Most search engine robots do not execute complex JavaScript code, so they won't be able to read those pages. Google can render such pages to some extent, but you're making it unnecessarily difficult.
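
One way to check this is to compare what you see in the browser with the raw HTML that a simple crawler receives, without cookies and without executing JavaScript. This is a rough sketch with a placeholder URL and a placeholder phrase:

# Fetch the raw HTML (no cookies, no JavaScript) and look for visible text.
from urllib.request import Request, urlopen

url = "https://www.example.com/product.html"
phrase = "Add to cart"  # text that you can see on the rendered page

request = Request(url, headers={"User-Agent": "Mozilla/5.0 (compatible; SimpleChecker/1.0)"})
raw_html = urlopen(request, timeout=10).read().decode("utf-8", "replace")

if phrase in raw_html:
    print("The phrase is in the raw HTML - robots can see it without JavaScript.")
else:
    print("The phrase only appears after JavaScript runs - robots may miss this content.")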

How to find these problems on your website

In general, you want Google to index all of your pages. For that reason, it is important to find potential problems on your site. The website audit tool in SEOprofiler locates these issues on your site and also shows you how to fix them. If you haven't done it yet, try SEOprofiler now. See plans and pricing.
