Jason Calacanis: Screw You Google, Now I’ll Sell Links Too

By now Google has to be getting more than a little embarrassed about the behavior of Mr. Jason Calacanis and his site, Mahalo.com. Aaron Wall wrote a very well-written piece explaining how Mahalo Makes Black Look White and detailing the spammy techniques they were employing. This isn’t the first time Aaron has blogged about Mahalo, either, or talked about exactly how this makes Google look bad. For those who might not know, I have also been blogging about this recently.

While Google will ban smaller websites from their search results or from AdSense on a whim, usually it takes heavier coverage

Read more

Dear Jason Calacanis: This Isn’t An “Absurd Microscope”

Jason Calacanis replied to my post from yesterday. In it he discusses how he is indeed deleting many of the spammy pages that I had pointed out. Some, like the duplicate content doorway pages, he continues to defend. Either way, progress is being made.

However, he still kinda kills it by tossing in at the end that this whole scrutiny of his site is “absurd”, and that anyone who calls him on it is being “vicious”:

Read more

Mahalo.com: Meet the New Spam, Worse than the Old Spam

Last week, after Matt Cutts gave Jason Calacanis a warning about Mahalo.com’s spammier pages (and probably a few stern looks as well), Jason changed a few items. He had them rename their spambot from “searchclick” to “stub”, thinking a less obvious name would throw off anyone looking into the spam situation. Very briefly, they added a noindex meta tag to the content-less pages (a change they then undid after just one day, of course). Probably the biggest change they made, however, was that they decided to actually turn off (for now, anyway) the bot that was creating all of those pages that were nothing more than scraped content.
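(As an aside, for anyone who wants to verify this sort of change on their own: here is a small Python sketch of mine, not anything from Mahalo or Google, that checks whether a given URL is currently serving a noindex robots meta tag. The example URL in the comment is purely hypothetical.)

```python
from html.parser import HTMLParser
from urllib.request import urlopen


class RobotsMetaParser(HTMLParser):
    """Tracks whether the page declares a robots meta tag containing "noindex"."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        name = (attrs.get("name") or "").lower()
        content = (attrs.get("content") or "").lower()
        if name == "robots" and "noindex" in content:
            self.noindex = True


def has_noindex(url):
    # Fetch the raw HTML and look for a robots meta tag containing "noindex".
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = RobotsMetaParser()
    parser.feed(html)
    return parser.noindex


# Hypothetical example -- substitute any page you want to check:
# print(has_noindex("http://www.mahalo.com/some-stub-page"))
```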

What then, you may ask yourself, is Jason going to replace all of these pages with, exactly? I know that’s what I was asking. As I pointed out

Read more

Jason Calacanis Makes Matt Cutts A Liar

Last week at SMX West, during the Ask The Search Engines panel, moderator Danny Sullivan asked Matt Cutts why he didn’t ban Mahalo.com for spamming Google. Matt stated that he had talked to Jason Calacanis, Mahalo.com CEO, about the issues, and warned him that Google might “take action” if Jason didn’t make some changes to the spammy side of Mahalo. Matt also made the following statement, in reference to Aaron Wall’s post on the subject:

All the pages Aaron pointed out now have noindex on them. – Matt Cutts

Matt was referring to all of the autogenerated pages that both Aaron and I blogged about in our posts, the ones with

Read more

Apparently Jason Calacanis Knows He’s Spamming – He Just Thinks It’s No Big Deal

Last month Jason Calacanis wrote a rather sarcastic post aimed at Aaron Wall, which I am assuming was written in response to Aaron’s post, “Black Hat SEO Case Study: How Mahalo Makes Black Look White!”. In it Aaron discusses how sites that are composed largely of nothing more than auto-generated pages wrapped in AdSense can get accepted and even gain authority in Google if they have enough financing and press. Jason’s rebuttal included a claim about rankings that Mahalo had “earned” (and I use the term loosely) for “VIDEO GAME walkthrough”. I originally misinterpreted what he was trying to say, and thought that he meant rankings for that exact phrase. I commented that that wasn’t exactly a great accomplishment before realizing that what he actually meant was rankings for [{insert video game name} walkthrough], and that Mahalo has a couple of top-10 rankings for that genre of search phrases.

Jason sent me an email to correct me on what he was talking about. We went back and forth a couple of times, and a few very interesting things were revealed in that conversation:

Read more

Why The Renewed Interest In The Linkscape Scams And Deception?

Yesterday a friend of mine, Sebastian, wrote a post titled, “How do Majestic and LinkScape get their raw data?”. Basically it is a renewed rant about SEOmoz and their deceptions surrounding the Linkscape product that they launched back in October 2008, a little over 15 months ago. The controversy centers on the fact that moz basically lied about exactly how they were obtaining their data, which in part was probably motivated by wanting to make themselves look more technically capable than they actually are.

Now, I covered this back when the launch actually happened, in this Linkscape post, which drew quite a few comments, and there was more than a little heated conversation in the Sphinn thread as well. This prompted some people, both on Sebastian’s post and in the Sphinn thread about it, to ask: why all of the renewed interest?

It is not extreme, it’s just that it isn’t new. The fact that they bought the index (partially)? That was known from the beginning. The fact that they don’t provide a satisfying way of blocking their bots (or the fact that they didn’t want to reveal their bots user agent)? Check. The fact that they make hyped statements to push Linkscape? Check. […] I don’t get the renewed excitement. – Branko, aka SEO Scientist

Well, I guess you could say that it’s my fault. Or, you could blame it on SEOmoz themselves, or their employees, depending on how you look at it. You see, the story goes like this…

Back when SEOmoz first launched Linkscape, it would have been damn near impossible for a shop their size to have performed the feats they were claiming, all on their own. Rand was making the claim “Yes – We spidered all 30 billion pages”. He also claimed to have done it within “several weeks”. Now, even if we stretch “several” to mean something that it normally would not, say, six (since a six-week update period is now what they are claiming for the tool), we’re still talking about a huge amount of resources to accomplish that task. A conservative estimate for the average page, counting only the HTML, is 25KB of text:

30,000,000,000 pages x (25 x 1,024) bytes per page = 768,000,000,000,000 bytes of data (768 trillion bytes, or roughly 698.4TB)

(698.4TB / 45 days of crawling) x 30 days in a month = 465.6TB bandwidth per month
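If you want to sanity-check that arithmetic yourself, here is a quick Python sketch of mine that reproduces the estimate, using the same assumptions as above (30 billion pages, 25KB of HTML per page, a 45-day crawl window):

```python
# Back-of-the-envelope check of the crawl estimate above.
PAGES = 30_000_000_000        # the "30 billion pages" Rand claimed to have spidered
BYTES_PER_PAGE = 25 * 1024    # conservative 25KB of HTML per page
CRAWL_DAYS = 45               # crawl window used in the estimate above

total_bytes = PAGES * BYTES_PER_PAGE
total_tb = total_bytes / 1024 ** 4           # bytes -> terabytes
monthly_tb = total_tb / CRAWL_DAYS * 30      # normalized to a 30-day month

print(f"Total crawl size:  {total_tb:,.1f}TB")    # roughly 698TB
print(f"Monthly bandwidth: {monthly_tb:,.1f}TB")  # roughly 466TB
```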

Now, I know that one of the reasons that Rand can get away with some of his claims is that most people just don’t grasp the sheer size

Read more

Facebook / Twitter / Myspace Hacking: How To Keep It From Happening To You

Over the past few weeks I have noticed a sharp increase in scammers trying to get my Facebook password, and not too long ago a few people I know actually fell prey to it. Recently there was an outbreak of similar activity on Twitter, where the attempts were being spread through direct messages, and Myspace has seen its share of woes with these issues as well. The methods being used to try and trick users into giving their passwords away are collectively known as phishing attempts, where the members of the site are sent a message, either through the site itself or in an email,

Read more

Digg Allows Image Ads Embedded With Hidden Subliminal Messages

I was looking through Digg the other day when this image ad caught my eye. Something about it grabbed my attention, and I wasn’t quite sure what it was, so I took a closer look. It was subtle, and hard to figure out at first. The copy on the ad itself was unremarkable, and went like this:

FLASH NEWS: Pam Scott, N.Y., made $1,000,000 on FOREX!

19 y.o. housewife, using $99 Autotrading program-robot, made $1 million in only 2 weeks! READ FULL STORY..

The copy itself was bad enough to make me simply ignore

Read more

How To Remove Your Website From Linkscape *Without* An SEOmoz Meta Tag

You do have rights to your content. Over the past couple of weeks, one of the biggest concerns about SEOmoz’s new Linkscape tool (which I recently blogged about in reference to the bots that Rand refuses to identify, and then again due to the suspicious addition of a phantom 7 billion pages to one of his index sources) has been the complete lack of any method for someone to remove their data from the tool. Assuming that all of the hints Rand has been so “subtly” dropping are accurate, and that the one bot they do actually have control over is in fact DotBot, then from the beginning the data was collected under false pretenses. The DotBot website clearly states

Read more