Why The Renewed Interest In The Linkscape Scams And Deception..?

Yesterday a friend of mine, Sebastian, wrote a post titled, “How do Majestic and LinkScape get their raw data?“. Basically it is a renewed rant about SEOmoz and the deceptions surrounding the Linkscape product they launched back in October 2008, a little over 15 months ago. The controversy centers on the fact that moz basically lied about exactly how they were obtaining their data, probably motivated in part by wanting to look more technically capable than they actually are.

Now, I covered this back when the launch actually happened, in this Linkscape post, which drew quite a few comments, and there was more than a little heated conversation in the Sphinn thread as well. This prompted some people, both on Sebastian’s post and in the Sphinn thread on it, to ask: why all of the renewed interest?

It is not extreme, its just that it isn’t new. The fact that they bought the index (partially)? That was known from the beginning. The fact that they don’t provide a satisfying way of blocking their bots (or the fact that they didn’t want to reveal their bots user agent)? Check. The fact that they make hyped statements to push Linkscape? Check. {…} I don’t get the renewed excitement. – Branko, aka SEO Scientist

Well, I guess you could say that it’s my fault. Or, you could blame it on SEOmoz themselves, or their employees, depending on how you look at it. You see, the story goes like this…

Back when SEOmoz first launched Linkscape, it would have been damn near impossible for a shop their size to have performed the feats they were claiming all on their own. Rand was making the claim “Yes – We spidered all 30 billion pages”. He also claimed to have done it within “several weeks”. Now, even if we stretch “several” to mean something that it normally would not, say, 6 (since a 6 week update period is now what they are claiming for the tool), we’re still talking about a huge amount of resources to accomplish that task. A conservative estimate for the average page, counting only the HTML, is 25KB of text:

30,000,000,000 pages x (25 x 1024) bytes per page = 768,000,000,000,000 bytes of data (768 trillion bytes, which is 698.4TB)

(698.4TB / 45 days of crawling) x 30 days in a month = 465.6TB bandwidth per month
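Just to make those numbers concrete, here is the same back-of-the-envelope arithmetic as a quick Python sketch (the 25KB-per-page figure and the 45-day crawl window are the assumptions from above, nothing more):

```python
# Back-of-the-envelope check of the crawl estimate above.
pages = 30_000_000_000            # the "30 billion pages" claim
bytes_per_page = 25 * 1024        # conservative 25KB of HTML per page

total_bytes = pages * bytes_per_page
total_tb = total_bytes / 1024**4            # binary terabytes

crawl_days = 45                   # roughly the claimed update window
tb_per_month = total_tb / crawl_days * 30   # normalize to a 30-day month

print(f"{total_bytes:,} bytes = {total_tb:.0f}TB total")
print(f"{tb_per_month:.0f}TB of bandwidth per month")
```

However you round it, that is hundreds of terabytes of transfer every single month, which is the point.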

Now, I know that one of the reasons that Rand can get away with some of his claims is that most people just don’t grasp the sheer size

Read more: Why The Renewed Interest In The Linkscape Scams And Deception..?

Quick Poll… Who Here Wants To Bing Jessica Biel?

Today CNN wrote a piece about the “‘Most dangerous’ celebs to search for online”. The article discussed which celebrity searches were most likely to lead to sites infected with spyware. It was an interesting enough story, but what caught my eye were the two opening sentences:

Be cautious if you plan to Bing Jessica Biel or Google Brad Pitt. A new report says you might get a virus.

Now, while Microsoft may be hoping that people will associate the name of their revamped search engine, Bing, with

Read more: Quick Poll… Who Here Wants To Bing Jessica Biel?

Win A Date With Pedobear? WTF??

I was checking out a link a friend of mine Stumbled on tonight, when I saw this ad for what looks like a teen dating site. Like most of the adult dating sites that you see plastered all over the internet these days, the banner ad featured profile pics of the girls you could supposedly wind up hooking up with. The service advertised is not some small time website thrown up by amateurs on a shoestring budget… it is owned by Hearst Teen Network, the same guys who own Seventeen.com, CosmoGIRL.com, and a bunch of other teen oriented websites. I am not exactly sure who the hell their advertising team is targeting with this one, however. The ad features profile pics of two cute girls… and Pedobear:

I mean, seriously… wtf??

Is Plagiarism Ok… If It Was An Accident?

Last year I wrote this handy little script named EasyWP. It makes installing WordPress much easier for those without Fantastico or shell access, and is many times faster than having to upload all of the files individually. It’s very useful, especially if you install WordPress on a regular basis, or if you need to do a complete WordPress reinstall for whatever reason. Lots of people use and enjoy the script.

Today I received this email from someone by the name of Joel Drapper:

Read more: Is Plagiarism Ok… If It Was An Accident?

Google Decides To Slow Down Search Results And Cloak Their New Tracking URLs

Today over at ReadWriteWeb Sarah Perez wrote an article on how Google was gaining ground on their share of the search market. In the article she talked about the latest buzz from the Google Analytics blog having to do with changes to the way Google.com handles clicks in their serps, changes which were implemented as a result of the breakage that AJAX driven search results would have caused in analytics packages. She notes that even though the speed benefit Google gains from going AJAX would be minimal on a per-search basis, multiplied across the millions of searches performed every day it would eventually add up to more market share for them.

Although a change to AJAX technology would only make searches milliseconds faster, those milliseconds add up, allowing people to do more searches, faster. And that would let Google grow even more, eating up percentage points along the way. – Sarah Perez

However, what was missed by many

Read more: Google Decides To Slow Down Search Results And Cloak Their New Tracking URLs

Google Re-initiates Testing of AJAX SERP’s With Faulty Proposed Fix

Last month I blogged about the fact that I had noticed Google playing around with delivering the SERP’s via AJAX. I pointed out that due to the way referrers work, using AJAX to generate the pages would cause all traffic coming from Google to look like it was coming from Google’s homepage instead of from a search. This in turn means that analytics packages, including Google Analytics, would no longer be able to track which Google search keywords were sending traffic to webmasters’ websites. There was a bit of a buzz about it, and Google seemed to stop the testing shortly thereafter. Google’s only reply on the subject was “sometimes we test stuff”, a pointer to a post from three years ago that also said “sometimes we test stuff”, and a statement that they didn’t intend to break referrer tracking. That was it.
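The referrer mechanics here are easy to demonstrate. With a server-rendered results page, the full search URL, query string included, arrives in the landing page’s Referer header; with fragment-style AJAX URLs the query lives after the `#`, which browsers never transmit. A hypothetical sketch of the keyword lookup an analytics package performs (illustrative only, not any real package’s actual code):

```python
from urllib.parse import urlsplit, parse_qs

def keyword_from_referrer(referrer):
    """Pull the search keyword out of a Google referrer URL,
    the way an analytics package would."""
    parts = urlsplit(referrer)
    return parse_qs(parts.query).get("q", [None])[0]

# Classic server-rendered SERP: the query string survives in the Referer.
print(keyword_from_referrer("http://www.google.com/search?q=blue+widgets"))

# AJAX SERP: the browser sends only "http://www.google.com/" as the
# Referer -- a "#q=blue+widgets" fragment never leaves the browser,
# so the keyword is simply gone.
print(keyword_from_referrer("http://www.google.com/"))
```

The first call recovers the keyword; the second has nothing to work with, which is exactly the breakage webmasters were worried about.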

Shortly thereafter, the tests

Read more: Google Re-initiates Testing of AJAX SERP’s With Faulty Proposed Fix

Robert Scoble Chews Out Lisa Barone’s Ass For Taking His Name In Vain – WTF?

Tonight Robert ‘I Am Thy Lord And Thou Shalt Kneel, Bitches!’ Scoble, a blogger who has some claim to internet fame through his blog Scobleizer, decided that the title of “technical evangelist” so often attributed to him simply wasn’t enough, and that deity is apparently more fitting.

Lisa Barone wrote a piece talking about personal brands and false idols on the web. In it she wrote the following paragraph:

Don’t support personal brands built on smoke and mirrors. Make people work for the brands they’re trying to create. Don’t let them scoble their way in. Don’t accept that someone is important just because they act like they are or someone told you they were.

Apparently Robert is the ultra sensitive type, and didn’t take too kindly

Read more: Robert Scoble Chews Out Lisa Barone’s Ass For Taking His Name In Vain – WTF?

Is Digg Trying To Tell Me Something?

As far as CAPTCHA’s go, I think that the one that Digg.com uses for story submissions is fairly reasonable. It’s monochrome, has decent contrast, and doesn’t try to get too fancy with out-of-focus characters or exotic fonts. Of course I have a preference for my own PuzzCAPTCHA as far as usability goes, but for mainstream CAPTCHA’s I think Digg’s is intelligently done.

Maybe a little too intelligently, actually. I think that it might be trying to send me messages. I logged in to submit

Read more: Is Digg Trying To Tell Me Something?

Digg Allows Image Ads Embedded With Hidden Subliminal Messages

I was looking through Digg the other day when this image ad caught my eye. Something about it bothered me, and I wasn’t quite sure what it was, so I took a closer look. The copy on the ad itself was unremarkable, and went like this:

FLASH NEWS: Pam Scott, N.Y., made $1,000,000 on FOREX!

19 y.o. housewife, using $99 Autotrading program-robot, made $1 million in only 2 weeks! READ FULL STORY..

The copy itself was bad enough to make me simply ignore

Read more: Digg Allows Image Ads Embedded With Hidden Subliminal Messages

What Will *Really* Break If Google Switches To AJAX…?

On Friday I wrote a piece on how it looked like Google was testing AJAX results in the main serps. Some discussion followed as to whether, if this change were to become widespread and permanent, it would affect existing Firefox plugins (some would definitely stop working), break some of the rank checking tools out there (they would have to be re-written, I’m sure), and even whether it would thwart scrapers from using serps for auto page generation (not for long, no).

While those things would definitely be affected in at least the short term, there is a much greater impact from Google switching to AJAX. All of the issues mentioned involve a very small subset of the webmastering community. What actually breaks if Google makes this switchover, and is in fact broken during any testing they are doing, is much more widespread. Every single

Read more: What Will *Really* Break If Google Switches To AJAX…?