Site Requirements: Making sure your site is crawlable

FuelX works with external ad networks to serve ads on your behalf. Some of these networks have their own requirements for the health of your site and run their own checks to confirm that it can be crawled. There are steps you can take to make sure your site is ready to serve ads, and the good news is that these steps are also important for your company’s SEO.

Learn more about SEO from Moz’s The Beginner’s Guide to SEO.

How to check if your site is crawlable

It’s a best practice to make your site crawlable and indexable by search engines before launching your campaign. Your webmaster can do this by including:

  1. a sitemap on your website (learn more about sitemaps at Sitemaps.org and create one at XML-Sitemaps.com)
  2. a robots.txt file that allows search engines to crawl your site. If you already have a robots.txt file, make sure it doesn’t block Google’s ads crawler: add a “User-agent: AdsBot-Google” group that allows it to crawl your landing pages. Learn more about robots.txt files from Google Webmaster Tools Support. Minimal examples of both files appear below.
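For reference, here is a minimal sitemap sketch in the Sitemaps.org XML format. The URL and dates are placeholders; a generator like XML-Sitemaps.com will fill in your site’s real pages:

    <?xml version="1.0" encoding="UTF-8"?>
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- one <url> entry per page on your site -->
      <url>
        <loc>http://examplesite.com/</loc>
        <lastmod>2015-01-01</lastmod>
        <changefreq>weekly</changefreq>
      </url>
    </urlset>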
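And here is a minimal robots.txt sketch that allows all crawlers, including AdsBot-Google, to crawl the whole site. The sitemap URL is a placeholder. Google’s ads crawler ignores the wildcard group and only obeys rules that name it directly, which is why it gets its own entry:

    # Allow every crawler to crawl everything
    User-agent: *
    Disallow:

    # Explicitly allow Google's ads crawler (it only obeys
    # groups that name it directly)
    User-agent: AdsBot-Google
    Allow: /

    # Help crawlers find your sitemap (placeholder URL)
    Sitemap: http://examplesite.com/sitemap.xml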

Quick tests:

  1. A quick test of whether your site is crawlable is to double-check that your robots.txt file is live and current. Look at the domain level of your website: if your site is examplesite.com, check for your robots.txt file at examplesite.com/robots.txt. If that URL returns an error, you probably have a problem with your robots.txt file.
  2. Check whether you have already submitted a sitemap for your site in Google Webmaster Tools, and submit one if you haven’t.
  3. Check for too many uncrawlable pages in your Google Webmaster account or on WebmasterWorld.com. Google recommends fewer than 5 redirects (3xx responses) and no error pages (4xx or 5xx responses) on the path to your destination URL. Scripted versions of tests 1 and 3 appear after this list.
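If you’d rather script the first test, here is a minimal Python sketch using only the standard library. The domain and landing-page path are placeholders for your own site. One caveat: the standard parser applies the wildcard group to every agent, while Google’s AdsBot only obeys groups that name it directly, so treat the second check as conservative:

    import urllib.request
    import urllib.robotparser

    SITE = "http://examplesite.com"  # placeholder: your own domain

    # Test 1: is robots.txt live? urlopen raises an HTTPError if the
    # file is missing; a 200 status means it is being served.
    with urllib.request.urlopen(SITE + "/robots.txt") as resp:
        print("robots.txt status:", resp.status)  # expect 200

    # Ask whether Google's ads crawler may fetch a landing page
    # (the path is a placeholder). NOTE: robotparser applies the "*"
    # group to every agent, so this check may be stricter than AdsBot.
    parser = urllib.robotparser.RobotFileParser(SITE + "/robots.txt")
    parser.read()
    print("AdsBot-Google allowed:",
          parser.can_fetch("AdsBot-Google", SITE + "/landing-page"))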
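For the third test, a similar sketch (using the third-party requests library) follows the redirect chain for a destination URL and flags anything over Google’s recommended limits. The URL is again a placeholder:

    import requests

    DESTINATION = "http://examplesite.com/landing-page"  # placeholder

    # Follow redirects; requests records each 3xx hop in resp.history.
    resp = requests.get(DESTINATION, allow_redirects=True, timeout=10)

    hops = len(resp.history)
    print("redirect hops:", hops)
    if hops >= 5:
        print("warning: Google recommends fewer than 5 redirects")

    # The final response should succeed rather than be an error page.
    print("final status:", resp.status_code)
    if resp.status_code >= 400:
        print("warning: destination URL returns an error page (4xx/5xx)")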


What the heck does "crawlable" mean?

The Internet is essentially a gigantic, ever-growing collection of information. When you search for something, the search engine acts like a very efficient and helpful index to a massive encyclopedia. Search engines read (or “crawl”) through the entire, growing Internet constantly so that they can spit relevant information back out when people search.

The spiders or bots that crawl your site do their best, but they can’t understand everything that we can, such as images and videos. You help those spiders crawl your site quickly and correctly by including your own abridged index of your site. This is known as a sitemap.

For more detailed developer information, Google has additional resources on controlling crawling and indexing.
