Single Source Page Link Test Using Multiple Links With Varying Anchor Text – Part Two

Back on October 9th, I blogged about a test I performed that demonstrated that only the first link on a given page counts for ranking purposes. In the thread where the test originated, pops (of TOONRefugee cartoon blog) asked what would happen if the first link were nofollowed. Since I had no clue, I decided to test that as well. It's a similar test to the one before, but this time checking the effect of rel="nofollow" on the initial link, and adding a third link as a control:

brogginoodle

hrumphidating

gorlumphadump

Will revisit after the destination page is re-cached.

21 thoughts on “Single Source Page Link Test Using Multiple Links With Varying Anchor Text – Part Two”

  1. So how about the results of this test? It looks like none of those links are making the physicsprimer site rank for the keywords, although of course this post ranks for them all. Does that mean that because the first link to the page is nofollowed, all the other links are being ignored too (i.e. dedupe first, nofollow second)? It’s important info for people considering using nofollow on their non-keyworded navigation to give value to their keyworded in-page links to the same pages.
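
    To make the two possible orderings concrete, here is a minimal Python sketch of the question (purely a model, not anything Google has published; both ordering functions are hypothetical). It contrasts deduplicating links to the same target before applying nofollow with applying nofollow before deduplicating.

    ```python
    # Links to the target page in source order: (anchor text, nofollow?).
    # The anchors are the three test links from the post above.
    links = [
        ("brogginoodle", True),    # first link, rel="nofollow"
        ("hrumphidating", False),  # second link
        ("gorlumphadump", False),  # third link, the control
    ]

    def dedupe_then_nofollow(links):
        """Keep only the first link to the target, then discard it if nofollowed.
        Under this ordering, none of the anchor text would count."""
        anchor, nofollow = links[0]
        return [] if nofollow else [anchor]

    def nofollow_then_dedupe(links):
        """Discard nofollowed links first, then keep the first survivor.
        Under this ordering, the second anchor would count instead."""
        followed = [anchor for anchor, nofollow in links if not nofollow]
        return followed[:1]

    print(dedupe_then_nofollow(links))  # []                -> nothing counted
    print(nofollow_then_dedupe(links))  # ['hrumphidating'] -> second anchor counted
    ```

    If the target page never starts ranking for the second or third anchor, that would be consistent with the dedupe-first ordering described above.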

  2. I still find this topic fascinating.

    Search for anchor 1 returns both this and the target page (interesting use of meta description there!). All the more interesting given the nofollow on anchor 1…

    Purposely not repeating the anchors in plain text.

    Have you drawn any further conclusions on this, Michael?

  3. Richard,

    Yes, I meant to blog about this before, but was busy with some other things. The reason the page currently shows for the first phrase is that for a while now (until very recently, anyway) there were some followed links pointing to that page using that anchor text. This is due to a feature in MyBlogLog called “Hot In My Communities”. I believe all of them are gone, for now anyway, and am just waiting for the effects of the last one to drop off.

    Today if you search in Yahoo for the phrase you can see only one instance of it in MyBlogLog, and if you click through to the live page you can see it is no longer there. However, if you look at Yahoo’s cache of that page you can see where it was (you have to click on the “Hot In My Communities” after clicking on the cache, since that column is hidden by JavaScript initially):


    [Screenshot: Yahoo’s cached copy of the MyBlogLog page showing the old “Hot In My Communities” link (click to enlarge)]

    What’s interesting to me about that is the fact that even though those links all appear to be gone from Google completely (i.e. no MyBlogLog pages are showing up for that search now, although they were last time I checked), the target page still appears in the search results for now. So just as getting a link won’t instantly help your rankings, since it takes time to actually carry any weight, apparently losing links doesn’t instantly drop you either (which we all pretty much already knew; it’s just neat to see the delay in action).

    I expect it to drop off in a week or so, as long as that link doesn’t become popular to click on again. 😀

  4. Always the difficulty – control.

    Two things from a big-picture perspective:
    1. IMO this whole topic could have very deep ramifications for SEO;
    2. I wouldn’t be surprised if this behaviour changes if it turns out to be supported by further testing. I say this because I think Google would rather we not know, and now that one more small signal is known they’ll be inclined to change it. I’m pretty sure they want to change their algo’s dependence on anchor text, as this is where most abuse has focused of late.

    I’d love to hear any follow-up findings on this.

    Rgds
    Richard

  5. BTW – I think your captcha thingie is killing your subscribe-to-posts plugin. I got no email after you responded, and after my last comment the checkbox is unticked even though I seem to have been cookied correctly (my name/email are pre-populated).

    Delete this to remove clutter 🙂

  6. After reading this thread a few months ago and the other October one on a similar topic, I have chatted with several SEO agencies about this specific question. All of them gave me a big blank stare when I described the question and mumbled in reply. So this is certainly not a well-known phenomenon.

    I’ve done a little testing on this, although nothing I’m quite ready to share, and have found the same to be true as what you showed in the other thread (that only the first anchor text counts). That said, site-wide navigation shared across all pages does not always seem to be counted as the “first” link on the page. Instead, it seems those links are sometimes ignored and the first link comes from the first non-navigation link in the code. Again, the tests I’ve done so far are not conclusive, but it seems like navigation that is shared across many pages may be getting discounted/ignored in this formula when it’s recognized as such (see the sketch below). Has anyone else done any testing that involves both navigation links and in-page links?

    Also, does anyone have any idea whether there are differences between pages within the same domain linking to each other vs. across domains?
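
    As a purely hypothetical model of that navigation idea (nothing any search engine has confirmed; the URLs, anchors, and nav flag are made up for illustration), here is a short Python sketch of the difference between counting the first link in the code and counting the first non-navigation link:

    ```python
    # Links on a page in source order: (target URL, anchor text, in site-wide nav?).
    # Both links point at the same target; only the navigation flag differs.
    page_links = [
        ("/widgets", "Products", True),       # site-wide navigation link
        ("/widgets", "blue widgets", False),  # keyworded in-page link
    ]

    def first_counted_anchor(links, discount_nav):
        """Return the anchor text of the first link that would be counted."""
        for target, anchor, in_nav in links:
            if discount_nav and in_nav:
                continue  # treat recognized boilerplate navigation as ignored
            return anchor
        return None

    print(first_counted_anchor(page_links, discount_nav=False))  # 'Products'
    print(first_counted_anchor(page_links, discount_nav=True))   # 'blue widgets'
    ```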

  7. I would be more interested to see the results if you actually added the text on the page. The reason it may not be showing could be that the word only shows up in the description.

    I’d suggest you install Webmaster Tools and this Firefox extension http://yoast.com/seo-tools/link-analysis/
    Then log in to your WT account and check to see if they are all marked as nofollow (a rough stand-in for that check is sketched below).

    If you do please share the results.
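
    For anyone who would rather script the check, here is a rough stand-in (just a Python standard-library sketch, not the Yoast extension or Webmaster Tools; the example markup is made up) that lists each link in a chunk of HTML and whether it carries rel="nofollow":

    ```python
    # Minimal nofollow check using only the Python standard library.
    from html.parser import HTMLParser

    class LinkRelParser(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []  # (href, nofollow?) in source order

        def handle_starttag(self, tag, attrs):
            if tag != "a":
                return
            attrs = dict(attrs)
            rel = (attrs.get("rel") or "").lower().split()
            self.links.append((attrs.get("href"), "nofollow" in rel))

    # Example markup; in practice, feed in the live page source instead.
    html = '''
    <a href="http://example.com/target" rel="nofollow">anchor one</a>
    <a href="http://example.com/target">anchor two</a>
    <a href="http://example.com/target">anchor three</a>
    '''

    parser = LinkRelParser()
    parser.feed(html)
    for href, nofollow in parser.links:
        print(href, "nofollow" if nofollow else "followed")
    ```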

  8. I applaud and appreciate all efforts to probe the black box of Google’s algorithm. However, there are several things that should be kept in mind.

    1) To be valid, any experiment should be repeated several times. Google likely has a small randomizing effect in place to limit probing of the algorithm.

    2) These tests are often conducted on nonsense words, etc. I understand the reason for that, but it could be that G’s algorithm behaves differently for rare vs. common words.

    3) These tests are usually conducted on small sites, which is necessary for easy control purposes. G’s algorithm could vary based on size of the site. This is not just a theoretical concern, but something that I feel is likely.

    4) These tests are usually conducted on PR0 sites. The results might vary depending on the PR of a site. Part of this would depend on when G consolidates the data that it has captured from a webpage. Clearly, so they don’t have to drag around too much data, they do some consolidation at the time the page is scraped. They might consolidate all the link data then, or they might keep all the link data and consolidate it later, once the first tentative PR for a page is determined. It is very likely that they COULD use a different algorithm based on the PR of the site.

    5) I think this science is good. Even though the results might not scale, at least it’s based on fact, while so much else seems based on supposition.

  9. I just wanted to add that this is the first time that I have seen the Puzzle Captcha. I think the concept is good and I understand the need for a time limit, but I would like to suggest a slightly longer time, perhaps 45 seconds, if that is allowed.

    I didn’t complete the first one on time because I had never seen it before and it took a moment to figure out what to do. I did solve the second one. I know lots of people who could make intelligent comments who might have difficulty in completing it in 30 seconds.

  10. Itsme, you need to look at what is being measured though. This was never a test of “how much juice is passed”, which is a much, much more difficult thing to determine. This was a test to determine whether any juice was passed at all. Based on that, this test is plenty sufficient.

    Also interesting to note that it is one year later, and still the only pages that rank for phrases #2 or #3 are ones that actually have the phrase in the on-page text.

  11. On PuzzCAPTCHA, that’s my bad. I upped it to 60 seconds when Matt Cutts said the same thing, but forgot to change the instructions.

    Which is weird; I thought I wrote that to pull dynamically from the settings… I’ll look at it later. Thanks. 🙂

  12. Michael,

    So the conclusion is that the destination page doesn’t rank for the anchor text of links #2 and #3 since link #1 is nofollowed?

    Have you ever tried to nofollow link #2 to see if link #1 would still be considered?

    Thanks!
