Okay, #4 has me worried. I really hope it isn't my fault that the .pdf comes up first, because I don't know how to fix it.
Can you expand on the robots.txt file explanation please?
robots.txt is a file that can be placed in the root of your website. Search engine spiders check it to see whether they have permission to crawl everything on the site. If you go to www.yoursite.com/robots.txt you can see your robots.txt file. I would say 90% of websites don't have one, which is fine. They are only used to exclude pages, so if there is nothing to exclude, you don't need one.
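To give you an idea of what one looks like, here's a minimal example that lets spiders crawl everything except a single folder (the /private/ folder is just a made-up name for illustration):

User-agent: *
Disallow: /private/

The User-agent line says which spider the rules apply to (* means all of them), and each Disallow line lists a path you don't want crawled.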
Over the years I've been contacted by a few people who were sure they had been banned, or thought the search engines were plotting against them, only to discover that their robots.txt file was blocking all pages. They were either victims of an accident, or they had pissed off a less-than-scrupulous webmaster (does that make them scrupuless?) who left a little going-away present on the way out.
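For the record, that "going away present" is usually just these two lines, which tell every spider to stay away from the entire site:

User-agent: *
Disallow: /

If you ever open your robots.txt and see that, and you didn't put it there, you've found your problem.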