Florida has definitely had its “doh!” moments, but this one is a doozy. It seems that last month, on March 23rd, 2012, Florida HB 1175 went into effect, with the following intent:
On The Ball-ness
What’s A Faster Way To Get A Virus Than Browsing Porn? That’s Right: The New Facebook
Quit staring, it’s just a thumb.
Facebook has never been known for its safety. It is a site designed so that the least Internet-savvy people out there can sign up and network with millions of other people, both those they know and those they don’t, with only a minimal amount of technical know-how required (i.e. how to sign up and how to browse). It is a giant playground filled with games and people to talk to from all over the world, luring in droves of people who, when they come, know nothing about “scareware”, or “phishing scams”, or even how to clean a virus from their machine if they get one. Sure, they’ve been told that if they visit porn sites they could very well get a virus, but hey, this is Facebook, everyone is on Facebook… it must be safe. The result is a gigantic community of
Google Censors Torrent Sites – Except For The Pirate Bay
Yesterday Search Engine Land reported that Google is removing piracy-related terms from its Instant Search, including the word torrents, the names of torrent sites, the names of torrent clients, and other file-sharing sites such as RapidShare and Megaupload. This does raise some concerns since, as SELand’s Matt McGee mentions, torrents and file-sharing sites are not in and of themselves illegal. Of course, neither is porn, but Google seems to have seen fit to remove that genre from its Instant Search as well.
Does this mean that Google really hates torrent sites? Well, not all of them, apparently. The Pirate Bay, the world’s largest BitTorrent tracker,
My Mom Needed Me To Let The Plumber In While She Was At Work (True Story)
I work from my house and keep odd hours, so when a family member needs some sort of worker let into their house during the day I am often asked if I am available to do it. I don’t mind; we all live fairly close together, and it’s not that much of a hassle on most days. Tonight my mom called and asked me if I could let someone into her place tomorrow to look at her tub, because it’s clogged. She’s tried Drano twice, poured boiling hot water into it, and even tried plunging it, all to no avail. I told her it would be no problem for me to let someone in.
A little while later I went into my own bathroom, and while in there happened to glance at my own tub…
Is Google Referrer Spamming Too Now?
Yesterday a friend of mine sent me a section of her traffic logs that was showing some odd information. According to what was recorded there, her brand-new, as-yet-unlinked-to website was ranking on the first page of Google for the single keyword [free]. If she actually had managed to rank for that phrase it would be an amazing feat, to say the least. The competition for that single word is enormous. Unsurprisingly, when you actually perform that search her site is nowhere to be found. The site in question is barely one week old and hasn’t even been launched yet.
What is surprising, to me anyway, is that it appears that the traffic is actually coming from a bot at Google… a bot that is cloaked, sending fake
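For anyone wondering how a traffic log can “show” a first-page ranking in the first place: most log analyzers simply trust the Referer header and pull the keyword (and, historically, the clicked result position) out of parameters like q= and cd= in the Google search URL. The sketch below is my own minimal illustration of that assumption, not any particular package’s code:

```python
# Minimal illustration: how log software typically derives "keyword" and
# "ranking position" from a Google referrer. Nothing here is verified;
# the values are taken straight from whatever the visiting client sent.
from urllib.parse import urlparse, parse_qs

def keyword_and_position(referrer):
    parsed = urlparse(referrer)
    if "google." not in parsed.netloc:
        return None
    params = parse_qs(parsed.query)
    keyword = params.get("q", [None])[0]
    position = params.get("cd", [None])[0]  # historical result-position hint
    return keyword, position

# A single spoofed hit is enough to produce a "first page for [free]" entry:
print(keyword_and_position("http://www.google.com/search?q=free&cd=3"))
# ('free', '3')
```

Which is exactly why a spoofed referrer is all it takes to make a week-old, unlaunched site appear to rank for [free].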
Google Re-initiates Testing of AJAX SERPs With Faulty Proposed Fix
Last month I blogged about the fact that I had noticed Google playing around with delivering the SERPs via AJAX. I pointed out that, due to the way referrers work, using AJAX to generate the result pages would cause all traffic coming from Google to look like it was coming from Google’s homepage instead of from a search. This in turn means that analytics packages, including Google Analytics, would no longer be able to track which keywords searched in Google were sending traffic to webmasters’ websites. There was a bit of a buzz about it, and Google seemed to stop the testing shortly thereafter. Google’s only reply on the subject was to say “sometimes we test stuff”, to point to a post from three years ago that also said “sometimes we test stuff”, to add that they didn’t intend to break referrer tracking, and that was it.
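To spell out the mechanics (my own illustration, not Google’s code): an AJAX results page keeps the query in the URL fragment, something like google.com/#q=blue+widgets, and browsers never transmit the fragment in the Referer header. All the destination site ever sees is the bare homepage URL, so there is no query string left for an analytics package to parse:

```python
# Illustration of why fragment-based (AJAX) SERPs break keyword tracking:
# the browser strips everything after '#' before sending the Referer header,
# so the search terms never reach the destination site.
from urllib.parse import urlparse, parse_qs

def keyword_from_referrer(referrer):
    return parse_qs(urlparse(referrer).query).get("q", [None])[0]

# Classic SERP: the query survives in the referrer.
print(keyword_from_referrer("http://www.google.com/search?q=blue+widgets"))  # 'blue widgets'

# AJAX SERP: '#q=blue+widgets' is dropped by the browser, leaving only this,
# which is indistinguishable from a click on a plain homepage link.
print(keyword_from_referrer("http://www.google.com/"))  # None
```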
Shortly thereafter, the tests
Robert Scoble Chews Out Lisa Barone’s Ass For Taking His Name In Vain – WTF?
Tonight Robert ‘I Am Thy Lord And Thou Shalt Kneel, Bitches!’ Scoble, a blogger who has some claim to internet fame through his blog Scobleizer, decided that the title of “technical evangelist” that has so often been attributed to him simply wasn’t enough, and that deity is apparently more fitting.
Lisa Barone wrote a piece talking about personal brands and false idols on the web. In it she wrote the following paragraph:
Don’t support personal brands built on smoke and mirrors. Make people work for the brands they’re trying to create. Don’t let them scoble their way in. Don’t accept that someone is important just because they act like they are or someone told you they were.
Apparently Robert is the ultra-sensitive type, and didn’t take too kindly
SERP Scrapers, Rejoice! Matt Cutts Endorses Indexing Of Search Results In Google!
That’s right… today Matt Cutts completely reversed his opinion on pages indexed in Google that are nothing more than copies of auto-generated snippets.
Back in March of 2007, Matt discussed search results within search results, and Google’s dislike for them:
In general, we’ve seen that users usually don’t want to see search results (or copies of websites via proxies) in their search results. Proxied copies of websites and search results that don’t add much value already fall under our quality guidelines (e.g. “Don’t create multiple pages, subdomains, or domains with substantially duplicate content.” and “Avoid “doorway” pages created just for search engines, or other “cookie cutter” approaches…”), so Google does take action to reduce the impact of those pages in our index.
But just to close the loop on the original question on that thread and clarify that Google reserves the right to reduce the impact of search results and proxied copies of web sites on users, Vanessa also had someone add a line to the quality guidelines page. The new webmaster guideline that you’ll see on that page says “Use robots.txt to prevent crawling of search results pages or other auto-generated pages that don’t add much value for users coming from search engines.” – Matt Cutts
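For reference, the guideline quoted above amounts to a couple of lines in robots.txt. The sketch below assumes a site whose internal search results live under /search (a hypothetical path) and uses Python’s standard-library parser just to confirm what the rule does:

```python
# Hypothetical robots.txt rule matching the guideline quoted above, assuming
# the site's internal search results live under /search. Checked with the
# standard-library robots.txt parser.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /search
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("Googlebot", "http://example.com/search?q=widgets"))  # False
print(parser.can_fetch("Googlebot", "http://example.com/widgets.html"))      # True
```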
Now, while the Google Webmaster Guidelines still specifically instruct webmasters to
Matt Cutts, If This Paid Link Were A Snake It Would Have Bitten You In The Ass
On Wednesday TechCrunch posted an article about a new ad product launched by MediaWhiz. The product, called InLinks, lets people purchase anchor-rich text links embedded into content in a way that is supposed to give them a “natural” feel. Michael Arrington called the product “insidious”. His whole take on it was that these new paid links would “be hard for Google to detect”. Quite a bit of discussion followed, sparked in large part by the fact that Matt Cutts chimed in on the matter. What no one seemed to notice, however,
How To Remove Your Website From Linkscape *Without* An SEOmoz Meta Tag
Over the past couple of weeks, one of the biggest concerns about SEOmoz’s new Linkscape tool (which I recently blogged about in reference to the bots that Rand refuses to identify, and then again due to the suspicious addition of a phantom 7 billion pages to one of his index sources) has been the complete lack of any method for someone to remove their data from the tool. Assuming that all of the hints Rand has been so “subtly” dropping are accurate, and the one bot they do actually have control over is in fact DotBot, then from the beginning the data was collected under false pretenses. The DotBot website clearly states
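(What follows is my own hypothetical illustration, not necessarily the method this post goes on to describe: assuming the crawler feeding Linkscape really is DotBot, the bluntest lever a site owner has is to refuse requests that identify themselves with that user agent, for example in a WSGI middleware.)

```python
# Hypothetical sketch only: refuse any request whose User-Agent claims to be
# DotBot. The substring check and WSGI wrapper are illustrative; a robots.txt
# block ("User-agent: DotBot" / "Disallow: /") would be the politer equivalent,
# assuming the bot honors it.
def block_dotbot(app):
    def middleware(environ, start_response):
        user_agent = environ.get("HTTP_USER_AGENT", "")
        if "dotbot" in user_agent.lower():
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Crawling not permitted.\n"]
        return app(environ, start_response)
    return middleware
```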