Google Web Search Goes Completely AJAX

Yes, I know… Google has been offering AJAX-driven results through the API and other services for ages, but now they have rolled that out to the main Google Search. It appears to be live only on Google US (I tried manually switching to Google UK, and it redirected me from the AJAX version to a static HTML page), but that could of course change in the future.

I noticed this as soon as I started searching for stuff today, almost from the first query I typed in. When I looked at the URL, instead of seeing the normal /search?= at the beginning:

[Image: normal Google search URL]

I found myself looking at this:

Read more: Google Web Search Goes Completely AJAX
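The excerpt cuts off before the new URL appears, but the practical difference between a classic query-string search URL and a fragment-based AJAX one can be sketched in a few lines of Python. The “#q=” form below is an assumption for illustration only, not the URL from the post; the point is that everything after the “#” never reaches the server, which is why fragment-based SERPs break server-side log and referrer analysis.

```python
from urllib.parse import urlparse, parse_qs

# Classic server-side result URL: the query lives in the query string,
# so the server (and its logs) can see it.
classic = "http://www.google.com/search?q=ajax+serps"

# Fragment-based AJAX-style URL (hypothetical form): the query lives in
# the fragment, which browsers never send to the server -- it is read
# client-side by JavaScript instead.
ajax = "http://www.google.com/#q=ajax+serps"

def query_term(url):
    """Return the search term from either URL style, or None."""
    parts = urlparse(url)
    # Try the normal query string first, then fall back to the fragment.
    for raw in (parts.query, parts.fragment):
        term = parse_qs(raw).get("q")
        if term:
            return term[0]
    return None

print(query_term(classic))  # ajax serps
print(query_term(ajax))     # ajax serps
```

Only a client-side parser like this ever sees the fragment version of the query; a web server handling the same request receives nothing after the “#”.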

SERPs Scrapers, Rejoice! Matt Cutts Endorses Indexing Of Search Results In Google!

That’s right… today Matt Cutts completely reversed his opinion on pages indexed in Google that are nothing more than copies of auto-generated snippets.

Back in March of 2007, Matt discussed search results within search results, and Google’s dislike for them:

In general, we’ve seen that users usually don’t want to see search results (or copies of websites via proxies) in their search results. Proxied copies of websites and search results that don’t add much value already fall under our quality guidelines (e.g. “Don’t create multiple pages, subdomains, or domains with substantially duplicate content.” and “Avoid “doorway” pages created just for search engines, or other “cookie cutter” approaches…”), so Google does take action to reduce the impact of those pages in our index.

But just to close the loop on the original question on that thread and clarify that Google reserves the right to reduce the impact of search results and proxied copies of web sites on users, Vanessa also had someone add a line to the quality guidelines page. The new webmaster guideline that you’ll see on that page says “Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines.” – Matt Cutts
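The guideline quoted above is straightforward to follow. A minimal robots.txt sketch, assuming your site’s auto-generated search result pages live under a /search path (the path is illustrative, not from the post):

```
User-agent: *
# Keep auto-generated search result pages out of crawlers' indexes
Disallow: /search
```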

Now, while the Google Webmaster Guidelines still specifically instruct webmasters to

Read more: SERPs Scrapers, Rejoice! Matt Cutts Endorses Indexing Of Search Results In Google!

How To Find The Best Free Image/Photo/Graphics Downloads For Your Blog Posts

Smile! Adding images to your blog posts can make them much more visually appealing to your readers. This in turn increases the likelihood that someone will link to the post or subscribe to your feed, which of course will, in the long run, help to improve your rankings and traffic. The internet is chock-full of images, many of which will fit perfectly with the blog post or article you are writing. The problem, however, is finding images that are both high quality and that you are actually allowed to use.

The Problems

Two internet no-no’s that beginner web publishers often commit, many times without even realizing that they are doing anything wrong,

Read more: How To Find The Best Free Image/Photo/Graphics Downloads For Your Blog Posts

Matt Cutts, If This Paid Link Were A Snake It Would Have Bitten You In The Ass

PageRank for sale. On Wednesday, TechCrunch posted an article about a new ad product launched by MediaWhiz. The product, called InLinks, lets people purchase links with keyword-rich anchor text embedded into content in a way that is supposed to give them a “natural” feel. Michael Arrington called the product “insidious”. His whole take was that these new paid links would “be hard for Google to detect”. Quite a bit of discussion followed, sparked in large part by the fact that Matt Cutts chimed in on the matter. What no one seemed to notice, however,

Read more: Matt Cutts, If This Paid Link Were A Snake It Would Have Bitten You In The Ass

Google Tries Too Hard To Appear Useful, Starts Making Up New Words

The Google Search feature that Google calls “Spell Checker” can be very handy at times. You know the one I mean… you type something hastily into the box, inadvertently slip in a typo or two, and Google very helpfully asks, “Did you mean: {some other word}”. Aside from putting a dent in the revenue of all those SEOs who are cleverly banking on people making common typos, most people (like myself) probably

Read more: Google Tries Too Hard To Appear Useful, Starts Making Up New Words

Yet Another Link Test – Single Source Page, Multiple Links, Nofollowed Middle

Last year I performed a couple of tests on what happens when you have multiple links pointing to the same page, all from the same source page. Today a reader left a comment on one of the follow-up posts, which dealt with the question of what happens if the first link is nofollowed. He asked whether I had tested with the second link being nofollowed instead of the first.

Well, no, I haven’t. So…
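For reference, the setup being tested can be sketched in HTML roughly like this (the URLs and anchor text are hypothetical, not the ones used in the actual test):

```html
<!-- Hypothetical test page: three links from one source page to the
     same target URL, with only the middle link nofollowed. -->
<a href="http://example.com/target-page/">first anchor text</a>
...
<a href="http://example.com/target-page/" rel="nofollow">second anchor text</a>
...
<a href="http://example.com/target-page/">third anchor text</a>
```

The question under test is which anchor text, if any, the target page ends up ranking for when the middle link carries rel="nofollow".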

Read more: Yet Another Link Test – Single Source Page, Multiple Links, Nofollowed Middle

How To Remove Your Website From Linkscape *Without* An SEOmoz Meta Tag

You do have rights to your content. Over the past couple of weeks, one of the biggest concerns about SEOmoz’s new Linkscape tool (which I recently blogged about in reference to the bots that Rand refuses to identify, and then again due to the suspicious addition of a phantom 7 billion pages to one of his index sources) has been the complete lack of any method for removing your data from the tool. Assuming that all of the hints Rand has been so “subtly” dropping are accurate, and that the one bot they actually do control is in fact DotBot, then the data was collected under false pretenses from the beginning. The DotBot website clearly states

Read more: How To Remove Your Website From Linkscape *Without* An SEOmoz Meta Tag
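If DotBot really is their crawler, and assuming it honors robots.txt, opting out would look something like this (the “dotbot” user-agent token is an assumption based on the bot’s name and may differ from what the crawler actually announces):

```
# Opt out of DotBot crawling (assumes the bot honors robots.txt
# and identifies itself with the "dotbot" token)
User-agent: dotbot
Disallow: /
```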

How To Add 7 Billion Pages To Your Index Overnight

A couple of days ago I posted my assertion that Rand Fishkin had lied about the details of the new Linkscape tool on SEOmoz. During the discussion that followed, Rand continued to maintain that they owned the bots that collected the data powering the tool, and that his bots had collected those 30 billion pages, despite several points remaining very unclear.

Right in the heat of the argument, someone decided to drop a comment on my blog that struck me as a little odd

Read more: How To Add 7 Billion Pages To Your Index Overnight

My Friend Donna Fontenot Sure Is, Well… Different…

Louisiana Donna is definitely one of my bestest friends. She gets me, we think alike, and when I get stuck on an issue she’s always there to help me, even if it’s just moral support (although usually it’s in the form of information I need when my brain is just plain overloaded). I love her to death. Thing is, Donna is from Louisiana, and they don’t always do things in those parts in a way that I would call, um… normal.

For example, just today, Donna and I had the following conversation:

Read more: My Friend Donna Fontenot Sure Is, Well… Different…

How To Block The Bots SEOmoz *Isn’t* Telling You About

I swear to tell the... wait, what did you say..? Ok, so, it looks like Rand and gang finally decided to reveal their top-secret recipe for how they gathered all that information on everybody’s websites without anyone noticing what they were doing. There was quite a bit of hoopla when they announced their new index of 30 billion web pages (and the new tool powered by that index), due to the fact that they never gave webmasters the chance to block them from gathering this data. In fact, they never even

Read more: How To Block The Bots SEOmoz *Isn’t* Telling You About
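Until the bots are actually named, the blunt-instrument option is to block by user-agent string at the server level, where a crawler’s cooperation isn’t required. A hypothetical Apache sketch (2.2-style directives, assumes mod_setenvif is available) using the DotBot token mentioned earlier:

```
# Flag any request whose User-Agent contains "dotbot" (case-insensitive)
BrowserMatchNoCase "dotbot" bad_bot
# Deny flagged requests; everyone else is allowed through
Order Allow,Deny
Allow from all
Deny from env=bad_bot
```

Unlike robots.txt, this works even for a bot that ignores crawl directives, though it is trivially defeated by a crawler that changes its user-agent string.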