
Remove Website Outdated Pages From Google Search


Learn how to block URLs in Google Search

Ever worried about outdated pages of your website showing up in Google search?

Want to remove outdated pages of your website from Google search? Don't worry, there are a few ways you can remove your website's outdated pages from Google's search engine results pages.

1. Using Google's tool for removing outdated URLs:
Simply enter your website's outdated URL in this Google tool and click "REQUEST REMOVAL":
https://www.google.com/webmasters/tools/removals

2. Block Search Indexing with Meta Tags
You can also de-index your website's outdated pages by adding a noindex meta tag to them, as shown below.
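A minimal example of this tag (this is Google's documented noindex directive; place it inside the <head> of each page you want removed):

<meta name="robots" content="noindex">

Once Googlebot recrawls the page and sees this tag, the page will be dropped from Google's index.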


3. Using Robots.txt
This is another method for keeping outdated URLs out of Google search. You have to update your robots.txt by adding directives like the ones below. For example, if I want to keep my website's "cgi-bin", "tmp" and "admin-login" pages out of Google search, I use the following code in my robots.txt file. (Note that robots.txt blocks crawling; for pages that are already in the index, the noindex meta tag above is the more reliable removal method.)

User-agent: *
Disallow: /cgi-bin/
Disallow: /tmp/
Disallow: /admin-login/

If you want to remove your whole website from Google search, just add:
User-agent: *
Disallow: /

If you have not yet uploaded a robots.txt file to your website, don't worry; just follow the steps below.

You may find more information from Google here: https://support.google.com/webmasters/answer/6062602?hl=en&ref_topic=4598466

How to create a Robots.txt file?
Open Notepad (or any plain-text editor)
Write this code (an empty Disallow rule allows all crawlers to access the entire site):
User-agent: *
Disallow:



Now save the file with the name "robots.txt". The file must be saved in plain-text (.txt) format. Then upload this file to the root of your website.

How to set up Google Webmaster Tools (Search Console)?



Google Search Console is also known as Google Webmaster Tools. Google introduced this free online tool to monitor and analyze your website's performance in Google search.

Set up Google Webmaster Tools (Search Console):
1. Open webmaster.google.com
2. Sign in to your Google Account
3. Add your website by clicking "ADD A PROPERTY"
4. You will now see the verification method page. You may verify your account through any of the methods available on that page.

Available verification methods:


Recommended method
  • HTML file upload

Alternate methods
  • Add an HTML tag to your home page's meta tags (see the example after this list)
  • Domain name provider
  • Google Analytics
  • Google Tag Manager
If you already have an Analytics account under the same Google account, you may verify your account by selecting "Google Analytics".
5. Your Webmaster account is now ready
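For reference, the HTML tag method asks you to place a site-verification meta tag in the <head> section of your home page. It looks like the sketch below; the content value here is a placeholder, since Search Console issues a unique token for your site:

<meta name="google-site-verification" content="YOUR-UNIQUE-TOKEN" />

After adding the tag, return to Search Console and click "VERIFY".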

If you don't know how to create an XML sitemap, you may follow the few steps below:

How to Create XML Sitemap for My Website?


There are many online tools available for generating your website's XML sitemap, such as www.xml-sitemaps.com and www.web-site-map.com.


Let's start with www.xml-sitemaps.com

1. Open www.xml-sitemaps.com
2. Enter your website URL under "Starting URL"
3. Select "Change frequency" as "Daily"
4. Select "Last modification" as "Use server's response"
5. Select "Priority" as "Automatically Calculated Priority"
6. Now click the "Start" button
7. Wait until all pages of your website have been crawled
8. You will then see a page named "Your sitemap is ready!"
9. Download the XML sitemap file to your system and name it "sitemap"
10. Upload this file to the root of your website. If you are not familiar with the technicalities, you may send this file to your developer.


Your website XML sitemap URL will look like this:
http://www.yourwebsite.com/sitemap.xml
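For reference, the generated file follows the standard sitemaps.org XML format. A minimal sketch with a single placeholder URL looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://www.yourwebsite.com/</loc>
    <lastmod>2016-11-01</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
</urlset>

Each <url> entry records one page's location, last-modified date, change frequency, and priority, matching the options you selected in the generator above.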

What is Google Crawling and Indexing?



To understand crawling and indexing, you first have to understand the search engine bot, also called a search engine crawler or spider. Different search engines name their bots differently:

Google - Googlebot
Bing - Bingbot
Yahoo - Slurp
Ask - Teoma


What is Googlebot or Google Crawler?
Googlebot (also known as the Google crawler or Google spider) is the search bot software that Google sends to websites to gather information about documents on the World Wide Web (www).

What is Crawling?
Crawling is the process by which the search bot or crawler goes from website to website, finding new and updated information to report back to Google. The search engine crawler finds what to crawl using links.

Basically, there are three steps in a web crawling procedure:

1. First, the search bot starts crawling the pages of your website.

2. Second, the bots continue crawling to index the updated information on the web.

3. Third, the search bots also visit the links (URLs) within your web pages and decide whether to crawl the new pages they point to.

When a search bot cannot find a web page, or the link to it is broken, that web page will eventually be deleted from the search engine's index.

What is Indexing?
Indexing is the process by which the search crawler collects and stores the information from its crawling activities. Once a web page is crawled, the crawler adds it to the search engine's index.

Results appear in Google search after the crawler has indexed your website. The search bot processes the entire content of a web page, wherever that content is located: title tags, ALT attributes, heading tags, and hyperlinks.
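As an illustrative sketch (the page and URL below are hypothetical), these are the kinds of elements a crawler reads from a web page: the title, headings, image ALT text, and the hyperlinks it will queue for crawling next:

<html>
  <head>
    <title>Blue Widgets - Example Store</title>  <!-- title tag: indexed -->
  </head>
  <body>
    <h1>Blue Widgets</h1>  <!-- heading tag: indexed -->
    <img src="widget.jpg" alt="A blue widget">  <!-- ALT attribute: indexed -->
    <a href="http://www.example.com/red-widgets">Red Widgets</a>  <!-- hyperlink: followed by the crawler -->
  </body>
</html>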

Jimmy Wales Speech at Cato Institute

On Nov 01, 2016

Wikipedia co-founder Jimmy Wales spoke about Wikipedia at the Cato Institute in Washington, DC ("Collaborative Information Platforms"). He responded to audience questions on various topics, including government surveillance, globalization, technology innovation, and how knowledge spreads.


Resources: https://www.c-span.org/video/?417796-1/wikipedia-founder-jimmy-wales-delivers-remarks-cato-institute 

How to create a Wikipedia Article?


Hello Everyone,

We are all aware that Wikipedia is among the most powerful domains and the #1 knowledge source worldwide. Wikipedia pages rank in search engines automatically; they don't need any SEO or optimization technique to rank in any search engine.

Today I will show you how to create a successful Wikipedia article.

My secret for creating a Wikipedia page

Steps to create a Wikipedia page

  1. Create an account on Wikipedia.
  2. Verify your account via your email.
  3. Carefully read this guide on how to create an article: Wikipedia:How to create a page
  4. Remember never to violate copyright; see Wikipedia:Your first article
  5. Go to Wikipedia:Article wizard
  6. Go to Wikipedia:Article wizard/Subject
  7. Before creating an article, first search for that particular topic in the Wikipedia search bar and confirm that someone has not already created the same article.
  8. If your article is not yet on Wikipedia, click "No, my proposed article doesn't already exist on Wikipedia"
  9. Now select the category for which you want to create an article. For example, if you want to write about a company, click "I'm writing about a company"
  10. Before creating the article, ensure you are not creating it for advertising purposes
  11. Now click "My proposed article is not advertising"
  12. Make sure you have enough sources (news links on trusted websites) to support the article
  13. Click "My proposed article has good sources"
  14. There are now 2 options for publishing your article on Wikipedia
  15. The first is draft submission. If you submit your article as a draft, it will take some time (approximately a week to a month) to publish, as Wikipedia reviewers first review your article before it is published.
  16. The second is "Article Namespace". This option is for registered users, and as you are already a registered user you may use it. You can publish your article directly from this section, but at your own risk: if Wikipedia finds any violation of its guidelines, your page will be removed instantly.
Remember to add {{subst:submit}} at the top of your draft when you submit it for review.

Suggestion: Both Wikipedia and I recommend using the first option (AfC draft submission) to create your Wikipedia article.

If you are unable to create a Wikipedia page,
or your page was previously deleted,
or your draft was declined by Wikipedia,
or you simply want your Wiki page built by Wikipedia professionals, you may use our Wikipedia expert services by clicking the link below.

Why Should You Have a Wikipedia Page?


Everyone wants to have a Wikipedia page. It is obvious that a Wikipedia page increases your brand visibility in many ways. Wikipedia pages almost always rank on the first page of Google, and around 90% of Wikipedia pages rank in the top 3 positions of search engines' results pages.

Wikipedia is the most trusted online resource of information worldwide, and its content is built on a factual basis.

Why should you hire a Wikipedia Expert?

  • Wikipedia experts are experienced and aware of all of Wikipedia's guidelines
  • They write your Wiki article in Wikipedia's writing style
  • They protect your Wikipedia article from deletion
  • They monitor your Wikipedia page 24/7
  • They keep your page updated with the latest information
  • They fix Wikipedia error tags


Are you tired of submitting drafts to Wikipedia?

Although Wikipedia is free for all and anybody can create and edit articles on it, it is not so easy to get an article published on Wikipedia. There are a number of Wikipedia guidelines that need to be followed while you create a Wiki article.

Have you submitted your Wikipedia article as a draft but never managed to get your page published?

Have you published your Wikipedia article, only to have it deleted by Wikipedia?

Do you really want to see your Wikipedia page on Google's first page?

Are you really serious about a Wikipedia page?

If you are serious about having your own Wikipedia page, don't worry; I am here to help you create one.

You may show your interest here for creating a Wikipedia page

XML Parsing Error Converter

How to fix XML Errors in Blogspot

Most of the time, when you try to add a Facebook JavaScript or AdSense JavaScript code to your Blogger template, you come across an XML parsing error that prompts and says: "The reference to entity "version" must end with the ';' delimiter."


When we insert JavaScript inside Blogspot templates, everything inside the script tags is treated as text, and due to the presence of some illegal special characters like "<", ">" and "&", you often face the following error:

AdSense Page Level Ads JavaScript Code Error in Blogspot Template



Special characters in XML that generate errors:

(<) - less than
(&) - ampersand
(>) - greater than
(") - double-quote
(') - apostrophe or single-quote


Replace Special Characters

One simple method is to escape all 5 special characters, replacing each with its legal entity equivalent, as listed below.
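These are the standard XML escape entities for the five characters above:

(<) becomes &lt;
(&) becomes &amp;
(>) becomes &gt;
(") becomes &quot;
(') becomes &apos;

For example, the JavaScript condition if (a < b && c > 0) becomes if (a &lt; b &amp;&amp; c &gt; 0) after escaping.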



Another method is to convert the whole line of code that is generating the error using an HTML parsing tool for XML Blogger templates, which applies these escapes for you.
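A third option, standard in XML and commonly used in Blogger templates, is to wrap the script body in a CDATA section so the XML parser treats it as raw character data and ignores the special characters inside (the script itself is only a placeholder example):

<script type='text/javascript'>
//<![CDATA[
// The XML parser skips this block, so <, > and & are safe here
if (a < b && c > 0) { alert("no XML error"); }
//]]>
</script>

The // before the CDATA markers keeps them valid as JavaScript comments, so the script still runs correctly in the browser.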



How to submit blogger sitemap to Google Search Console?

For ranking in Google or any other search engine, it is necessary that your website is crawled and indexed by search engine crawlers. Although most websites are crawled and indexed automatically, in a few cases crawlers can't crawl and index a website. I have seen many websites in my career that were perfectly crawled by the crawler but not indexed. In such cases there can be various reasons for your website not being indexed, such as wrong robots.txt commands or your website being disallowed in Google Search Console (Webmaster Tools).

To fix these issues, there are a few tasks recommended by Google for all new websites.


Submit an XML sitemap to Search Console

Google has enabled this feature in its Search Console so it can crawl and index all the web pages of a website. If you don't want a few secure pages of your website crawled, you can disallow those pages in your robots.txt file.

How to find the Blogger XML sitemap?
If your website is live on Google Blogspot and you are unable to find your Blogger sitemap URL, don't worry: you may find it by adding "atom.xml" to your home page URL. For example, if your Blogspot URL is http://xyz.blogspot.com/ then your XML sitemap URL will look like this: http://xyz.blogspot.com/atom.xml

If you have redirected your Blogspot to a particular domain name, like my blog http://www.thedigitalseo.com/, I can find its XML sitemap at http://www.thedigitalseo.com/atom.xml

If you have more than 100 pages or posts on your blog, you may use a URL like this:
http://www.thedigitalseo.com/atom.xml?redirect=false&start-index=1&max-results=500

You may set the limit on the maximum number of pages crawled by changing the number just after "max-results=" in the above URL.

How to Crawl or Index My Blogspot Blog?
If your blog is not being indexed by the Google crawler, you may perform the following tasks in Google Search Console:

1. Log in to your Google Search Console
2. Select your targeted website
3. Select "Sitemaps" under the "Crawl" category on the left-hand side
4. Click "ADD/TEST SITEMAP" on the right-hand side
5. Add "atom.xml" just after your website URL and click Submit

Your sitemap is now successfully submitted, which tells Google to index your site.
Please refer to the image below for the above process.


image source: Google Search Console Login thedigitalseo.com

