Hi all,
I think my robots.txt is stopping Google from crawling the pages we need it to crawl.
Could someone please let me know if this looks correct? Webmaster Tools says Google can't crawl www.mysite.co.uk/acatalog.
I'm not sure if that's OK or not, and I don't have /acatalog anywhere in my robots.txt.
# robots.txt
User-agent: *
Disallow: /cgi-bin/
Disallow: /anon_ftp/
Disallow: /conf/
Disallow: /private/
Disallow: /subdomains/
Disallow: /statistics/
Disallow: /web_users/
Disallow: /PD/
Disallow: /httpsdocs/
Disallow: /*.cat
Disallow: /*.fil
Disallow: /*.gif$
Disallow: /*.jpg$
Sitemap: http://www.mysite.co.uk/sitemap.xml
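For reference, nothing above mentions /acatalog, so I can't see what would block it. This is what I understand an explicit rule letting Googlebot into that folder would look like (just a sketch based on Google's robots.txt extensions, not something I've actually added):

User-agent: Googlebot
Allow: /acatalog/
# Allow is a Google extension; the original robots.txt standard only defines Disallow

From what I've read, Google applies the most specific matching rule, so an Allow like this would win over a broader Disallow, but happy to be corrected if I've got that wrong.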
Thanks
D