What Will *Really* Break If Google Switches To AJAX…?

On Friday I wrote a piece on how it looked like Google was testing AJAX results in the main serps. Some discussion followed about whether, if this change were to become widespread and permanent, it would affect existing Firefox plugins (definitely some would stop working) or break some of the rank checking tools out there (they would have to be re-written, I'm sure), and some people even asked whether it would thwart scrapers from using serps for auto page generation (not for long, no).
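To see why rank checkers and scrapers are the first casualties, consider how such a tool typically works: it fetches the serp HTML over plain HTTP and pattern-matches the result links out of it. A minimal sketch (the HTML snippets and the `class="result"` marker below are hypothetical illustrations, not Google's actual markup) shows what happens when the results move into JavaScript:

```python
import re

def extract_results(html):
    """Naive rank-checker step: pull result URLs out of raw serp HTML."""
    return re.findall(r'<a href="(http[^"]+)" class="result">', html)

# Hypothetical static serp: the result links are right there in the HTML.
static_serp = '<a href="http://example.com/" class="result">Example</a>'

# Hypothetical AJAX serp shell: results arrive later via JavaScript,
# so the HTML a simple HTTP fetch sees contains no result links at all.
ajax_serp = '<div id="results"></div><script src="serp.js"></script>'

print(extract_results(static_serp))  # ['http://example.com/']
print(extract_results(ajax_serp))    # []
```

The tool doesn't error out; it just quietly finds nothing, which is why every such tool would need to be re-written to execute the JavaScript or hit whatever endpoint the page fetches behind the scenes.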

While those things would definitely be affected in at least the short term, there is a much greater impact from Google switching to AJAX. All of the issues mentioned involve a very small subset of the webmastering community. What actually breaks if Google makes this switchover, and is in fact broken during any testing they are doing, is much more widespread. Every single

Read more

Google Web Search Goes Completely AJAX

Yes, I know… Google has been offering AJAX-driven results through the API and other services for ages, but now they have rolled that out to the main Google Search. It appears to be only on Google US (I tried manually switching to Google UK, and it redirected me from the AJAX version to a static HTML page), but that of course could change in the future.
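The practical wrinkle with an AJAX serp is where the query ends up in the URL. As a minimal sketch (the `#q=` form here is purely illustrative, not necessarily the exact URL Google uses), the fragment portion of a URL is never transmitted to the server, which is exactly why server-side tools lose sight of the query:

```python
from urllib.parse import urlparse

# Classic serp URL: the query travels to the server in the query string.
old = urlparse("http://www.google.com/search?q=example")

# Hypothetical AJAX-style URL: the query lives in the fragment, which
# browsers never transmit in the HTTP request at all.
new = urlparse("http://www.google.com/#q=example")

print(old.query)     # 'q=example'
print(new.query)     # ''
print(new.fragment)  # 'q=example'
```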

I noticed this as soon as I started searching for stuff today, from almost the first query I typed in. When I looked at the url, instead of seeing the normal /search?q= at the beginning:

Normal Google search url

I found myself looking at this:

Read more

SERPs Scrapers, Rejoice! Matt Cutts Endorses Indexing Of Search Results In Google!

That’s right… today Matt Cutts completely reversed his opinion on pages indexed in Google that are nothing more than copies of auto-generated snippets.

Back in March of 2007, Matt discussed search results within search results, and Google’s dislike for them:

In general, we’ve seen that users usually don’t want to see search results (or copies of websites via proxies) in their search results. Proxied copies of websites and search results that don’t add much value already fall under our quality guidelines (e.g. “Don’t create multiple pages, subdomains, or domains with substantially duplicate content.” and “Avoid “doorway” pages created just for search engines, or other “cookie cutter” approaches…”), so Google does take action to reduce the impact of those pages in our index.

But just to close the loop on the original question on that thread and clarify that Google reserves the right to reduce the impact of search results and proxied copies of web sites on users, Vanessa also had someone add a line to the quality guidelines page. The new webmaster guideline that you’ll see on that page says “Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines.” – Matt Cutts
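The guideline quoted above is simple to follow in practice. A minimal robots.txt sketch (assuming, hypothetically, that your site's search results live under a /search/ path) would be:

```
User-agent: *
Disallow: /search/
```

Any crawler that honors robots.txt, Googlebot included, would then skip those auto-generated pages entirely.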

Now, while the Google Webmaster Guidelines still specifically instruct webmasters to

Read more

My Blog Hacked, Yet Again – WordPress 2.6.5 Vulnerability / Exploit?

Busted WordPress security. Again, I’ve been hacked. Well, not me personally… I wear the most up to date tinfoil attire, I assure you, and no one is getting into my head… but my blog was. This time I was running WordPress 2.6.5 when it happened.

Those who know me know that I always prefer to do manual upgrades, wiping everything out and starting over completely fresh each time, whether I have been hacked or not. That way, if there has been an intrusion, the upgrade should clean the hack out completely, even if I don't know it's there. As it happens, when I upgraded to 2.6.5 from 2.6.2 I did not do this. I merely replaced the 2 files involved in the security portion of the WP 2.6.5 upgrade (which were wp-includes/feed.php and wp-includes/version.php). However,

Read more

How To Find The Best Free Image/Photo/Graphics Downloads For Your Blog Posts

Smile! Adding images to your blog posts can make them much more visually appealing to your readers. This in turn can increase the likelihood that someone will link to that post or subscribe to your feed, which will of course in the long run help to improve your rankings and traffic. The internet is chock full of images, many of which will fit perfectly with that blog post or article that you are writing. The problem is, however, finding images that are both high quality and that you are actually allowed to use.

The Problems

Two internet no-nos that beginner web publishers often commit, many times without even realizing that they are doing anything wrong,

Read more

Matt Cutts, If This Paid Link Were A Snake It Would Have Bitten You In The Ass

PageRank for sale. Wednesday TechCrunch posted an article about a new ad product launched by MediaWhiz. The product, called InLinks, lets people purchase anchor-rich text links embedded into content in a way that is supposed to give them a “natural” feel. Michael Arrington called the product “insidious”. His whole take on it was that these new paid links would “be hard for Google to detect”. Quite a bit of discussion followed, sparked in large part by the fact that Matt Cutts chimed in on the matter. What no one seemed to notice, however,

Read more

Google Tries Too Hard To Appear Useful, Starts Making Up New Words

The Google Search feature that Google calls “Spell Checker” can be very handy at times. You know the one I mean… you type something hastily in the box, manage to inadvertently slip in a typo or two, and Google very helpfully asks, “Did you mean: {some other word}”. Aside from putting a dent in the revenue of all those SEOs who are cleverly banking on people making common typos, most people (like myself) probably

Read more

Yet Another Link Test – Single Source Page, Multiple Links, Nofollowed Middle

Last year I performed a couple of tests on what happens when you have multiple links pointing to the same page, all from the same source page. Today a reader left a comment on one of the follow-up posts, the one answering the question of what happens if the first link is nofollowed. He asked if I had tested with the second link nofollowed instead of the first.

Well, no, I haven’t. So…
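For reference, the markup under test can be sketched as several links on one source page all pointing at the same target, with only the middle one carrying the nofollow; the URLs and anchor text below are hypothetical placeholders, not the actual test pages:

```html
<!-- hypothetical source page: three links to the same target,
     with only the middle link nofollowed -->
<a href="http://example.com/target/">first link</a>
<a href="http://example.com/target/" rel="nofollow">second link</a>
<a href="http://example.com/target/">third link</a>
```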

Read more

How To Remove Your Website From Linkscape *Without* An SEOmoz Meta Tag

You do have rights to your content. Over the past couple of weeks, one of the biggest concerns about SEOmoz’s new Linkscape tool (which I recently blogged about in reference to the bots that Rand refuses to identify, and then again due to the suspicious addition of a phantom 7 billion pages to one of his index sources) has been the complete lack of any method for someone to remove their data from the tool. Assuming that all of the hints Rand has been so “subtly” dropping are accurate, and that the one bot they do actually have control over is in fact DotBot, then from the beginning the data was collected under false pretenses. The DotBot website clearly states
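Although the excerpt cuts off before the DotBot details, the standard opt-out for any crawler that honors robots.txt is a user-agent block. Assuming the Linkscape crawler really is DotBot and that it obeys a user-agent token of “dotbot” (an assumption on my part, so verify the token against the bot's own documentation), the rule would look like:

```
User-agent: dotbot
Disallow: /
```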

Read more

How To Add 7 Billion Pages To Your Index Overnight

A couple of days ago I posted my assertion that Rand Fishkin had lied about the details of the new Linkscape tool on SEOmoz. During the discussion that followed, Rand continued to maintain that they owned the bots that collected the data powering the tool, and that his bots had crawled those 30 billion pages, despite several points on that remaining very unclear.

Right in the heat of the argument, someone decided to drop a comment on my blog that struck me as a little odd

Read more