Website promotion. Professional expert advice




Question: Why is there no feedback on spam reports? When you report to Google that a site is using search-engine spam, you immediately get a response that "your message has been received," but you are never told how or when the review of the complaint ended, or whether the site was penalized.

Google Response: We usually resolve complaints fairly quickly. In general, we are thinking about expanding the range of notifications to users.
Fishkin's response: It can take Google anywhere from one day to two years to act on a spam complaint. Usually, unless the violation of the webmaster guidelines is extremely blatant or gets media coverage, Google prefers not to impose sanctions on the site but to refine the search algorithm instead. They take your complaint, match it against a collection of complaints about similar violations, and try to fine-tune the search technology so that no one can exploit the tactics those particular spammers developed. Some complaints about cloaking, keyword stuffing, and link manipulation filed a year or more ago have still not resulted in any sanctions against the sites. For confirmation, see the post by Google's Susan Moskwa on the Google Webmaster Blog.


Q: How do you define duplicate content? Does syndication fall under this definition?

Google's answer: We just recently wrote a post on this topic on the Google Webmaster Blog.

Fishkin's answer: Unfortunately, they do consider syndication duplicate content. Fortunately, duplicate content is not always penalized. The very post the Google representative mentions (https://googlewebmastercentral.blogspot.com/2008/06/duplicate-content-due-to-srcapers.html) says that no sanctions are imposed on a site for copied content; such pages are simply excluded from the search results.

If you syndicate your content, protect your own interests when signing the content-cooperation agreement. If you want traffic from the exported content to come to your site, write into the contract a requirement for backlinks on all materials and, if possible, the use of "noindex" and "nofollow" meta tags on the pages carrying your materials, so that the search engine does not confuse the copies with the originals. Conversely, if you take someone else's content, ask how many sites already carry it, whether they placed backlinks, and whether those pages were indexed. Fishkin recommends an article on SeoMoz (https://www.seomoz.org/blog/when-duplicate-content-really-hurts) about when duplicate content on a site can harm it.
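The meta-tag requirement above can be made concrete. Here is a minimal sketch in Python of the HTML fragment you might ask a syndication partner to place on each republished page; the function name and example URL are invented for illustration:

```python
def syndication_fragment(original_url: str) -> str:
    """Render the HTML a syndication partner would add to a page that
    republishes your article: a robots meta tag asking search engines
    not to index the copy, plus a backlink to the original."""
    return (
        '<meta name="robots" content="noindex, nofollow">\n'
        f'<p>Originally published at <a href="{original_url}">'
        f"{original_url}</a></p>"
    )

print(syndication_fragment("https://example.com/my-article"))
```

The meta tag belongs in the copy's head section, the backlink anywhere in its body; together they keep the duplicate out of the index while still passing a link back to the original.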

Question: Apart from content, what are the top three ranking factors for a website in Google?

Google's answer: a link (https://www.google.com/support/webmasters/bin/answer.py?answer=40349)

Fishkin's answer: I don't see how the page behind that link explains the key metrics of the search algorithm. You can't know anything for sure, and Google will not disclose its algorithms. If you're interested in my opinion, 90% of ranking in Google depends on four factors:
  • Keyword usage and content relevance. I don't believe in keyword density. Keywords should be positioned so that the page is as relevant as possible to the query. I use the following layout: once or twice in the title, if it doesn't spoil the sound and meaning of the title; once in the h1 tag; at least three times in the body text (more if the text is long); at least once in bold, in an image's alt attribute, and in the URL; and once or twice in the description meta tag.
  • Link weight. It can be gained through internal linking and links from other high-PR sites. Quality links can pull up even a not-very-relevant page in Google and Yahoo! (MSN and Ask.com are more content-focused).
  • The weight of link text (anchors). A large number of links with the same anchor can outweigh all the other factors mentioned. Don't forget that the texts of both external and internal links count.
  • Domain authority. This is the most complex factor of all. Search-engine trust in a domain is based on many signals: how long the domain has been known to the search engine, how many people search for and click on it, how many sites link to it, whether the domain itself links to respectable sites, how fast its link popularity grows, and what its registration information says. For a domain to become authoritative, you simply need to keep working on the site for a long time.
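The keyword-placement checklist in the first bullet above can be turned into a rough self-check. A sketch in Python; the field names and thresholds simply mirror Fishkin's numbers, and the bold-text check is omitted because it needs the page's markup rather than plain text:

```python
import re

def keyword_placement(keyword, title, h1, body, alt_texts, url, description):
    """Check a page against Fishkin's keyword-placement checklist.
    Returns a dict mapping each check to True/False. Illustrative only:
    it counts exact, case-insensitive occurrences of the phrase."""
    kw = re.escape(keyword.lower())
    count = lambda text: len(re.findall(kw, text.lower()))
    return {
        "title: once or twice": 1 <= count(title) <= 2,
        "h1: once": count(h1) == 1,
        "body: three or more times": count(body) >= 3,
        "image alt: at least once": any(count(a) for a in alt_texts),
        "url: present": keyword.lower().replace(" ", "-") in url.lower(),
        "description: once or twice": 1 <= count(description) <= 2,
    }

# Hypothetical page data, invented for the example:
report = keyword_placement(
    "seo audit",
    title="Free SEO Audit Checklist",
    h1="SEO Audit in Ten Steps",
    body="An SEO audit starts with crawling. Repeat the SEO audit "
         "monthly and log what each SEO audit finds.",
    alt_texts=["seo audit flowchart"],
    url="https://example.com/seo-audit/",
    description="A step-by-step SEO audit guide.",
)
print(all(report.values()))  # True: every placement check passes
```

Such a script is only a sanity check on placement; as Fishkin says, it should never turn into mechanical keyword stuffing.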



Question: What should I do to make the site's geotargeting work correctly? Setting the region in Webmaster Tools 3-4 months ago did not help, and the name we wanted was not available in the national domain zone.

Google's response: We seem to have now configured the geotargeting via Webmaster Tools.

Fishkin's answer: Besides configuring a site's geotargeting in Webmaster Tools, a number of other factors matter. First, use a country-code domain (ccTLD); if the name you want is not available in the national zone, weigh your options. I would also recommend hosting on an IP address in the country you are targeting. The site should be in the appropriate language and receive links from sites with the right geotargeting. Put addresses from the target country on the site's pages, mark the business on Google Maps in that country, and register in local directories. Our experiments have shown that before Google takes into account what you set in Webmaster Tools, it weighs this whole range of other factors.


Question: Can hiding navigation elements on individual pages/directories with CSS negatively affect indexing?

Google's answer: When doing anything with your site, ask yourself: "Will it be good for users?" and "Would I do this if search engines didn't exist?"

Fishkin's answer: I don't like the phrase "would you do it if search engines didn't exist." If they didn't exist, we wouldn't register with Webmaster Tools, wouldn't put noindex on copied content or nofollow on purchased links, wouldn't use meta tags, wouldn't fuss over simple things like title tags and sitemaps, wouldn't create HTML duplicates of Flash sites, and would use CSS and Ajax however we wanted. These days it is simply meaningless to talk about building websites without taking search engines into account.

So as long as you hide only small elements relative to the total amount of content on the page, and don't hurt users' ease of navigating the site, you'll be fine. Our own site, seomoz.org, was once sanctioned by Google for a page with a lot of content set to display:none, even though everything was legitimate and user-friendly. The main thing is to always consider whether the search engines might misread your actions. Hide content very, very carefully, and as little as possible.


Question: Let's say the site ranks first in searches by anchors, by title, and by page text, but for the key phrase itself it sits on the second page. Any advice?

Google's answer: The main thing is that your content is useful to users.

Fishkin's answer: This answer just infuriates me. The responder clearly doesn't care about the question either.
This happens quite often: Google ranks results differently in regular searches and in searches with special operators. My intuition and experience tell me that in operator searches, trust factors and domain authority affect rankings less than they do in regular searches. In your case, you are probably close to the top for the keyword; you just need more trusted links from high-quality sites. This also happens when a site is in the "sandbox" - then one day it will finally "take off." In general, the good news is that you are doing everything right so far; the bad news is that you need either more "fat" links, or time to get out of the "sandbox."

Question: I have several Amazon affiliate sites. The content there is the same as on Amazon itself. Google sees duplicate content and doesn't index my sites properly. How do I solve this problem?

Google Response: As long as your stores provide value to users, you have nothing to worry about.

Fishkin's answer: I'm afraid Google is misleading people with this answer. There is something to worry about, even if the stores are useful. First, pages can go unindexed either because of duplicate content or because of a lack of links. Google has a PageRank threshold a site must reach before it is listed in the main index. If your stores have good external links, check the internal linking - how link juice is distributed throughout the site.
As for duplicate content: naturally, in doubtful situations the search engines' preference is on Amazon's side. The only way out is to block as much of the copied text as possible from indexing. And you can't copy too much of it verbatim anyway - Google has notions of the minimum amount of unique content a page needs in order to be indexed and ranked normally.
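The "link juice" Fishkin mentions is, at bottom, PageRank flowing through the link graph. A toy sketch of the classic iterative computation in Python, to show why a page buried deep in the internal linking collects little weight; the site graph and damping factor are illustrative, and Google's real signals are far richer:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Iterative PageRank over a directed link graph.
    links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new = {p: (1 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages  # dangling page: spread evenly
            for t in targets:
                new[t] += damping * rank[page] / len(targets)
        rank = new
    return rank

# Every page links back to the home page, so it accumulates the most
# weight; the product page, reachable only via the category, gets least.
site = {
    "home": ["category", "about"],
    "category": ["product", "home"],
    "about": ["home"],
    "product": ["home"],
}
ranks = pagerank(site)
print(max(ranks, key=ranks.get))  # home
```

This is why Fishkin says to check internal linking before blaming duplicate content: a store page that only one category page links to may simply sit below the indexing threshold.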


Question: Again about geotargeting. I have a multinational site with support for 12 languages. We use geo-targeting to automatically redirect the user to a page in their language. Will Google punish us for this?

Google's answer: We recommend that you do not redirect a user based on geolocation. It is bad usability; it is better to let users choose the language version themselves.

Fishkin's answer: What blatant hypocrisy! Google uses geotargeting for its search results, its own home page, and a host of other services - but nobody else is allowed to! My advice: before redirecting a user, check whether their browser supports cookies. If it does, redirect them where you planned, and then analyze statistics, feedback, and other signals to see how well it worked out for users and usability. If cookies are not supported, send the visitor to a multilingual entry page to choose a language themselves; search bots, which do not support cookies, will land on that page and can reach all the language sections from there.
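Fishkin's branching logic can be sketched as a small server-side decision function. A Python illustration; the URLs and names are invented, and it uses the crude heuristic that a client sending no Cookie header (a first visit, or a crawler) cannot be assumed to support cookies:

```python
def choose_landing(request_headers, geo_lang):
    """Decide where to send a visitor, following Fishkin's advice.

    request_headers -- dict of incoming HTTP headers
    geo_lang        -- language code guessed from the visitor's IP
                       (how it is derived is outside this sketch)
    """
    headers = {k.lower(): v for k, v in request_headers.items()}
    if "cookie" not in headers:
        # No cookie came back: treat the client as cookie-less. Search
        # bots end up here too, landing on a neutral entry page that
        # links to every language section so all of them get crawled.
        return "/choose-language/"
    # Cookie-capable browser: redirect to the geotargeted section,
    # then watch stats and feedback to see how well it works for users.
    return f"/{geo_lang}/"

print(choose_landing({"Cookie": "lang_test=1"}, "de"))   # /de/
print(choose_landing({"User-Agent": "Googlebot"}, "de")) # /choose-language/
```

In practice the cookie test needs a prior response that sets a probe cookie; this sketch only shows the final branch between redirecting and showing the language chooser.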


Question: Does Google prefer .html documents over .php documents, or are all URLs equal to you?

Google's answer: A URL is a URL. As long as there is content at this address that we are able to recognize, Googlebot does not care what extension the page has.

Fishkin's answer: In fact, we recently showed that pages with extensions such as .exe, .tgz, and .tar are not indexed.


Question: Does registering with Google Webmaster Tools help a site's ranking?

Google's response: Focus on the quality of the site and the content you offer to users.

Fishkin's answer: It is unpleasant to see such a useless phrase instead of an answer. I think it hides the fact that Google doesn't know the exact answer. My answer would be "probably yes." It seems to me that if you compared statistics on all the sites registered with Google Webmaster Tools against those that are not, there would be far less search spam among the former. So it is quite possible that, for Google, registration there is one reason to trust a site, even if it does not directly affect ranking.

As a side note, I think Google would look much more respectable if they honestly said, "We can't answer this question because it touches our ranking mechanisms, which we need to keep secret so as not to breed spammers." Such an answer would earn far more respect - it would treat the webmasters asking the questions as adults and professionals - than answers in the style of "make good sites, and you will be happy."







