Daily Archives: February 23, 2011

Case-Shiller: double dip in most of tracked markets | Inman News

Case-Shiller: double dip in most of tracked markets

Real estate prices in 11 of 20 markets fall to new lows

By Inman News, Tuesday, February 22, 2011.

Inman News™

U.S. home prices fell 3.9 percent during the last three months of 2010, back to where they were at the beginning of 2003 and near the low for the downturn set in 2009, according to the latest Standard & Poor’s/Case-Shiller National Home Price Index.

Looking back a year, the index showed prices down 4.1 percent, with 18 of 20 markets tracked in the 20-city composite index losing ground and 11 reaching new lows for the downturn.

Markets that have "double dipped" to new lows since peaking in 2006 and 2007 are Atlanta; Charlotte; Chicago; Detroit; Las Vegas; Miami; New York; Phoenix; Portland, Ore.; Seattle; and Tampa.

"We ended 2010 with a weak report," said David M. Blitzer, chairman of the index committee at Standard & Poor’s. Despite improvements in the overall economy, he said, "housing continues to drift lower and weaker."


Copyright 2011 Inman News

All rights reserved. This content may not be used or reproduced in any manner whatsoever, in part or in whole, without written permission of Inman News. Use of this content without permission is a violation of federal copyright law.

To Improve Search Quality, Google Must Penalize Sources of Link Pollution | Search Engine Journal

Feb 23 2011

To Improve Search Quality, Google Must Penalize Sources of Link Pollution

If you’ve been in SEO for more than a few weeks, it’s a safe bet you’ve seen poorly moderated blog comments or forum threads that have been polluted with spammy links. It’s also likely you’ve found evidence of “link hacking,” the unauthorized placement of links on an otherwise quality website via some sort of hacking mechanism. These are forms of link pollution, and much like real-world pollution, they can damage our search environment if left unchecked.

A must-read example of link pollution is documented in an SEOMoz blog post from late January, which shows that blog comment spam, forum spam, and hacked links placed on trusted websites led to spammy Google search results on a variety of competitive terms. This post is one of dozens that illustrate the immense task faced by Google and Bing: not only must they deliver excellent search results, but they must do so while dealing with an ever-increasing amount of link pollution.

While some people propose algorithmic adjustments to counter the effects of link pollution, I think it’s time to put a simpler and more obvious solution on the table: force website owners to take responsibility for allowing this pollution in the first place.

To be blunt, link pollution is often caused by poor and/or incompetent website management.

  • Moderation of blog comments and forum posts has never been easier – there is no excuse for spammy links on blogs and forums, yet this problem doesn’t seem to be going away (see the sketch after this list).
  • When we find links that were placed by hackers, we’re reminded that password security is often the culprit. A 2010 password study shows that nearly 50% of Internet users have easily compromised passwords, and it’s a safe bet that many FTP passwords, WordPress passwords, etc. fall into the “easily compromised” category. How else could all these hacked links be explained?
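For illustration, the moderation point above can be made concrete with a minimal sketch. The helper names, regular expressions and two-link threshold below are my own assumptions for the sake of example, not any particular blog platform’s API: link-heavy comments are held for review, and any links that do get published are given rel="nofollow" so they pass no ranking credit.

```python
import re

# Minimal, hypothetical comment filter: hold link-heavy comments for human
# review and neutralize any published links with rel="nofollow".
LINK_RE = re.compile(r'<a\s+[^>]*href=', re.IGNORECASE)
MAX_LINKS = 2  # arbitrary threshold for this sketch

def needs_review(comment_html: str) -> bool:
    """Flag comments containing more than MAX_LINKS links for moderation."""
    return len(LINK_RE.findall(comment_html)) > MAX_LINKS

def add_nofollow(comment_html: str) -> str:
    """Add rel="nofollow" to anchor tags that don't already declare a rel."""
    return re.sub(r'<a\s+(?![^>]*\brel=)', '<a rel="nofollow" ',
                  comment_html, flags=re.IGNORECASE)

comment = '<p>Nice post! <a href="http://example.com/spam">cheap pills</a></p>'
if not needs_review(comment):
    print(add_nofollow(comment))  # the link is published, but with nofollow
```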

While it’s true that even the best security and anti-spam measures can be compromised, it’s also true that these occurrences are relatively rare and – if a site is properly managed – easily corrected. If Google created a system that penalized irresponsible website management, link pollution would be drastically curtailed. Here is what I would propose:

1. Google should only index sites registered with Webmaster Tools. Assuming that all website owners valued Google search traffic, Webmaster Tools registration would be nearly universal. This would give Google a direct connection to website managers.

2. Mandate regular interaction. If Google were to send website managers a monthly “suspicious link report” – and then require the website manager to acknowledge receipt of this report – it would encourage website owners to actively manage the security and quality of their websites.

3. Alert website managers to obvious spam links. While it is impossible for Google to detect every instance of link spam, there are certain occasions when a site has obviously been spammed. In these instances, Google should email the website manager immediately and request that the site be fixed ASAP.

4. De-index sites that fail to acknowledge alerts. Once Google has contact information for each and every website owner, there’s no excuse for failing to respond to Google spam link alerts in a timely manner. If a website manager fails to remedy a warning within a certain time frame, the site should be de-indexed for a period of days or weeks. Repeat offenders should be de-indexed for longer and longer periods until they either a) fix the problems or b) give up and go away (a rough sketch of this escalation follows).
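To make point 4 concrete, here is a rough sketch of how such an escalation policy could be modeled. Every number in it – the 14-day grace period, the 7-day base penalty, the doubling for repeat offenders – is my own hypothetical choice for illustration, not anything Google has announced.

```python
from dataclasses import dataclass

# Hypothetical escalation policy for the plan above: unacknowledged spam-link
# alerts lead to de-indexing, and the penalty doubles for repeat offenders.
GRACE_DAYS = 14          # illustrative grace period after an alert
BASE_PENALTY_DAYS = 7    # illustrative first-offense penalty

@dataclass
class Site:
    domain: str
    registered: bool         # registered with Webmaster Tools?
    days_since_alert: int    # days since the last spam-link alert
    alert_acknowledged: bool
    prior_offenses: int = 0

def penalty_days(site: Site) -> int:
    """Return how many days the site should be de-indexed (0 = no penalty)."""
    if not site.registered:
        return -1  # under this plan, unregistered sites are never indexed
    if site.alert_acknowledged or site.days_since_alert <= GRACE_DAYS:
        return 0
    return BASE_PENALTY_DAYS * (2 ** site.prior_offenses)

print(penalty_days(Site("example.com", True, 20, False, prior_offenses=2)))  # 28
```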

While my plan to de-index non-conforming websites might sound draconian, imagine the benefits:

  • Webmasters would close security gaps, reduce the incentive for hackers to attack websites, and subsequently make all of our sites a little safer.
  • Blogs and forums would improve their moderation systems.
  • A reduction in link spam would probably make paid links easier to detect.
  • The rewards for ethical link-building would become even greater.
  • Social sites like Twitter might do more to reduce the incredible amount of spam generated on their platforms.
  • Most importantly, Google would provide consumers with a better search experience.

To those who would argue against my plan on the grounds that universal webmaster registration would make Google too powerful, I would submit that the current “unknown webmaster” free-for-all makes no sense. If we can’t drive a car without a driver’s license, why should we expect Google (or any other search engine) to send visitors to our website if we don’t identify ourselves to them first?

Besides, it’s not as if participation in my plan is compulsory. If a website owner doesn’t want to register with Google, they don’t have to…they just won’t be listed in Google search results.

To be clear, I’m not picking on Google. Google’s results are only as good as the websites they index. If tens of thousands of poorly managed websites are compromised with spammy links, it’s unreasonable to expect any search engine to overcome this problem algorithmically. My plan is focused on Google because of their prominence in the marketplace, but Bing could just as easily take action. Perhaps Google and Bing could even collaborate.

Whatever the solution, it’s clear that the current system has a fundamental flaw: there is no penalty levied against websites that are a source of link pollution. While our search environment is resilient enough to deal with some link pollution, regulation is the only way to prevent long-term damage.

Written By:

Jason Lancaster | Denver Internet Marketing | @sporkmarketing

Jason Lancaster is President of Spork Marketing, a Denver Internet marketing company specializing in search engine optimization, marketing, and web design.

More Posts By Jason Lancaster

  • http://twitter.com/Webprotech WebPro Technologies

    A brilliant post on how to tackle the problem of link pollution.

    I hope the search engines adopt the suggestions mentioned in this post, which would automatically punish the spammers, reward the ethical methods, and thereby ensure quality results in the SERPs – fair to the SEOs, website owners and the search engines alike.

  • As the saying goes about better mousetraps and smarter mice, Google (and other search engines) find themselves facing an uphill battle. Every time they try to clean up the search results, someone is going to find a new way to game the system. I do agree that website owners should take a more active role in managing the links posted to their site, but it’s going to take the combined efforts of website owners and Google to help clean up the search results.

  • Absolutely correct. Just like changes to the algo aren’t enough to combat spam, registration isn’t enough by itself either. Both are needed.

  • Thanks!

    We’re working in the very first years of what will someday be a massive, established industry, and my guess is that someday people will look back and marvel at how an anonymous website with no caretaker could possibly receive search traffic.

  • Hey Jason. Interesting article with some valid points. I think in general it highlights problems with the whole model of a link economy for determining websites’ rankings. While Google et al. have made some advances in revising this model (for example, by now factoring in social signals as a new “vote” metric for users’ views of a web resource’s trustworthiness and usefulness), the basic model is still in some ways based on relatively primitive bean-counting – as evidenced by the recent JC Penney fiasco.

    But I have to say that your proposed solutions are preposterous – most of all the requirement that website owners register with Webmaster Tools in order to be included in Google’s index (your remaining solution bullet points are predicated on this).

    Sure, Google could do this – and pretty much immediately cease to be a search engine that anyone looking for comprehensive information on the web would ever use. Google is in the business of discovering, indexing and ranking as many web-delivered resources as it possibly can, regardless of whether or not site owners even care if search engines exist. With this goal in mind, Google can’t afford to be draconian in who gets into their index: they’re after information, not compliant web owners. Millions upon millions of site owners have never heard of SEO, don’t care (even if they “should”) about search-engine-derived traffic, or don’t otherwise see their output as a part of the search landscape. This in no way means that their content isn’t potentially valuable – in fact it may be produced solely with engaged users in mind, regardless of a presence in search. Google wants this content in its index, regardless of whether or not the owners of this content care about its existence. This is why Google has an opt-out mechanism (robots.txt or meta robots exclusion), but not an opt-in mechanism (if we can find your resource we’re putting it in our index unless you specifically tell us not to). The single compelling reason why a user might agree to mandatory WMT registration – because Google is the biggest, best and most comprehensive of the enterprise search engines – would immediately be voided by this very requirement, as it would cease to be comprehensive, and by dint of that no longer the best, as it could be usurped by any non-authoritarian engine with a decent ranking algorithm.

    With your proposed solution you’re not actually describing what *Google* should do, but what *site owners* should do in order to correct deficiencies in Google’s algorithm: be listed in a register so they can be made to comply with Google’s demands. Yes, Google needs to do something about comment spam, including penalizing site owners that violate their TOS – but most site owners (especially sites that don’t have a comment mechanism or security vulnerabilities – probably the vast majority) are already in compliance, and shouldn’t have to prove it. The ball is in Google’s court.

  • Of course you make an excellent point – Google would cease to be the dominant search engine in the marketplace if they no longer indexed all information.

    However, I would say that:

    1. The rule would be implemented slowly – webmasters would have months to comply. It’s not as if things would disappear overnight.

    2. Good quality sites would comply immediately. I can’t imagine that anyone with a business that relied upon search engine traffic would hesitate to play along, let alone publishing companies, individual bloggers, small business owners, etc. I guess what I’m saying is that everyone who matters would participate. If they don’t want to register, there’s a good chance they don’t matter.

    3. Google wouldn’t have to go it alone. Link pollution is a serious problem, and by some accounts it’s getting worse every year. Google and Bing both need to get rid of link pollution sources if they want to improve the quality of their product. If Google and Bing both mandated registration, the market would have to comply.

    4. Regulations, like taxes, are used to influence individual behavior. While you’re correct in saying this isn’t about what Google should do, it’s really just semantics. Google is the big player in the search world, and until they create a penalty for poor site management – a “tax” if you will – insecure and unmoderated sites are going to hurt our search index. It’s incumbent upon Google and Bing to combat this problem, because they are the main beneficiaries of the current system.

    The bottom line is that it’s impossible to solve link spam problems with an algo, especially when link spam is very carefully placed. Google’s results are only as good as the data they process…spam in = spam out.

    Also, you touched on the idea that social signals could be used to help evaluate quality, but I see that as trading one form of spam for another. Until social profiles are verified and tied to specific individuals – and until all social data is open (Facebook’s, for example, is not) – there’s no way social signals are enough to evaluate quality. The long and short of it is that links are still the backbone of the search index, and I don’t see that changing.

    Thanks for the comment.

  • Some great points, Jason, and I do appreciate your efforts. I really wish the search engines would adopt this in order to have a better search engine for tomorrow… but it doesn’t seem easy…

    Take your 4th point: de-index the website from the search engine if the webmaster fails to respond to the search engine’s email. I personally believe that by this you are limiting the search engine. What if there is a great, informative website on the web that contains some spam links? The search engine reports it to the owner, but there can be several reasons why one fails to respond to the search engine; one legitimate excuse is an unforeseen event (the New Zealand earthquake, for example).

    The same goes for the point that says Google should only index sites registered with Google Webmaster Tools… don’t you think that by this the search engine will be limiting itself?

    Overall, an interesting article!

  • It would be limiting if we assume that people wouldn’t choose to participate.

    I agree that there are some implementation questions, but at the end of the day something needs to be done. I’d like to see Google and Bing work together…and perhaps start in North America and eventually move worldwide.

  • The hardest thing about requiring something like Webmaster Tools is that there is a lot of web content out there that is not moderated by a “webmaster” but rather by a normal, less-knowledgeable website owner who built their site via a free WYSIWYG application such as Weebly or GoDaddy.

    Great article though, and I agree with you that something needs to be done.

  • Matt – To be quite honest, I think it would be great if the cheap, poorly constructed and poorly maintained “website tonight” websites didn’t exist. In my opinion, there’s no excuse when a business doesn’t put forth a minimum investment to build a legitimate, quality website. I hear business owners tell me that they can’t afford a website, then I watch them spend hundreds of dollars on an ad in the Yellow Pages.

    In other words, I agree that less-knowledgeable website owners will struggle – I think that’s one of the benefits! 🙂

  • I’ll play a little bit of devil’s advocate on this one.

    I do agree with you on the fact that the web needs less junk and more qualified pages, but there are exceptions. For example, I love cooking/recipe blogs; these blogs are often not run by a webmaster, a professional chef or a business, but by an individual with a hobby and a desire to share their goodness.

    In this regard, the web becomes less “free” (which is good/bad), but content regulation must be put into place at some point.

    Thanks for the reply!

  • I hear that, and I think some of my favorite blogs wouldn’t make it under this new regime. Still, if Google made the process easy…maybe hooked up with ICANN…this wouldn’t have to be a stumbling point.

    I was thinking earlier today that Google could offer business owners free custom email addresses (via Google Apps) when they register for Webmaster Tools. That would give them another incentive and also help expand Google’s reach.

    All of the comments make one thing clear: No solution is perfect. Mine could definitely use some tweaking, but ultimately I would argue that it’s absurd for Google to send traffic to sites that it can’t verify as “real”. At some point this will have to change, and registration is a great way to prove a site is legit, cut down on spam, reduce the impact and likelihood of paid links, etc.

  • Guest

    This is a troll, right?

    Ridiculous.


New Google PageRank Algorithm Debunked | Search Engine Journal

Feb 23 2011

New Google PageRank Algorithm Debunked

In my recent article on SEJ about Google PageRank I suggested that a new formula could be in action. The article has gone big on Twitter and LinkedIn, with hundreds of shares. And from some comments, it looks like many people accepted it as fact.

Now I hate to break it to you, but I have to admit that I was most probably wrong. So I need to write this follow-up article to prevent yet another SEO myth from spreading. Let me explain what happened:

  • I saw an unreasonably low toolbar PageRank value for my blog after the last update;
  • I started looking around for possible reasons;
  • I noticed many people observed similar effects;
  • I found an article by Bill Slawski about Google’s Reasonable Surfer model, which seemed to explain my observations and other people’s rankings well;
  • I decided to share my findings for consideration and discussion.

And the discussion followed! My article collected about 100 comments, and a few SEO experts stepped in to correct me. The problem is, too many people accepted the alleged news without doubt and spread it further, so now it is my responsibility to set things straight.

So what was the real cause of poor ratings?

Special thanks to Donna Fontenot for providing the most likely explanation for the effect that I and other webmasters observed. The toolbar PageRank value is known to lag substantially even after an update. Matt Cutts mentioned it in his blog post about what a Google update is.

Please note the difference between the toolbar PageRank (TBPR) and Google’s internal PR value used as one of their 200+ factors to rank pages. I should have made this distinction more clear in my original article, too. Toolbar PageRank is more or less useless today because of how rarely it gets updated, and how much it lags. You can safely ignore it. If you are good with your link building, you will notice that from improvements in your search rankings first, and toolbar PageRank value will catch up eventually.

This devaluation of toolbar PageRank even makes quite a few people declare that “PageRank is dead”. This includes some industry experts. Be careful when listening to these proclamations. Those people know what they are talking about, but they are referring to the toolbar PageRank value, not Google’s internal formula. Internal PR is still a significant ranking factor, albeit only one of hundreds.
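For readers who want the distinction spelled out, the idea behind that internal value goes back to the originally published PageRank formula, in which each page splits its score evenly across its outbound links, damped by a factor d. Below is a minimal power-iteration sketch of that textbook formula. Google’s internal signal has been adjusted in many undisclosed ways since the original paper, so treat this strictly as the published model, not what Google runs today.

```python
# Textbook PageRank (the published formula, not Google's current internal
# signal): each page divides its score evenly among its outbound links,
# with damping factor d. Dangling pages are ignored for simplicity.
def pagerank(links, d=0.85, iterations=50):
    """links: {page: [pages it links to]}; every page must have outlinks here."""
    pages = list(links)
    n = len(pages)
    pr = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        pr = {
            p: (1 - d) / n + d * sum(pr[q] / len(links[q])
                                     for q in pages if p in links[q])
            for p in pages
        }
    return pr

demo = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(demo))  # "c" ends up with the highest score in this tiny graph
```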

And what about the Reasonable Surfer model?

I based my previous explanation of poor TBPR ratings after the recent update on Google’s Reasonable Surfer patent. But the thing is, a patent does not equal implementation! Google has hundreds of patents. They even seem to have a weight-loss-related patent. This does not mean they are secretly crafting weight loss into their ranking algorithm.

On one hand, some parts of the Reasonable Surfer model could have been tested by Google even before they filed the patent (and that was in 2004). On the other hand, some parts of it may still not be implemented, either because of engineering difficulties, or because of too much noise in the suggested signals. I don’t know the current status of this model’s implementation, and I’m afraid Google will not disclose it.
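Even so, the core idea of the patent is easy to illustrate: instead of splitting a page’s PageRank evenly across its links, each link is weighted by an estimate of how likely a reasonable user would be to click it. The toy sketch below collapses that estimate into a single invented “prominence” score; the patent describes many more features (link position, font size, anchor text, and so on), so this is an illustration of the concept, not the actual algorithm.

```python
# Toy illustration of the Reasonable Surfer idea: weight each outbound link
# by an estimated click likelihood instead of splitting value evenly.
# The "prominence" scores below are invented for illustration only.
def link_weights(links):
    """links: list of {'href': ..., 'prominence': 0..1}; returns share per link."""
    total = sum(l["prominence"] for l in links) or 1.0
    return {l["href"]: round(l["prominence"] / total, 3) for l in links}

page_links = [
    {"href": "/featured-article", "prominence": 0.80},  # prominent content link
    {"href": "/terms-of-service", "prominence": 0.10},  # footer boilerplate
    {"href": "/comment-spam-url", "prominence": 0.05},  # buried in a comment
]
print(link_weights(page_links))
# Uniform PageRank would give each link 1/3 of the passed value; here the
# prominent content link receives the bulk of it and the spam link very little.
```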

I, for one, would welcome the change if they crafted this model directly into the PR and TBPR formula. That would make the link building efforts of those webmasters who are focused on TBPR more productive. It’s an easily observable single indicator, and it is often used for bragging and comparison, so it would be nice if it correlated more with the true quality of the page’s link profile.

In any case, it is worth your time to learn about this model, because it has a good chance to influence your rankings one way or another, now or in the future. Just don’t take it for a hard fact. Here is one more good article about this model: The reasonable surfer; makes for unreasonable thinkers.

What practical conclusions can you draw from this?

The number one conclusion is: do not trust everything you read. If someone publishes an article on a reputable site, that does not make him an expert. If someone is an expert, that does not mean he is always right. And even if someone is right, that does not mean the same applies to your situation. Be especially careful about trusting any anecdotal evidence. Double-check any information you plan to apply to your business or communicate to your customers.

Next, while the Reasonable Surfer model may not be fully implemented yet, the work performed in that direction indicates that search engines are not happy with simple mechanical ratings, and want their rankings to match human behavior more closely. While for now you may still enjoy some results from link spam, rest assured that search engines will be fighting to make it obsolete. So do not make your business or your customers depend entirely on cheap links.

One more reminder: don’t pay much, if any, attention to toolbar PageRank values. If you want to brag about your site, talk about unique visitors and conversion rate. If you want to find influential online partners, ask them about the same. That green bar just doesn’t mean much today.

And finally, if you write an article about a high-impact subject like PageRank, make all the necessary terminology distinctions very clear, and consult real experts in the field before publishing, even if you have years of relevant business experience under your own belt.

Credits

Thanks to Barry Adams for raising an alarm about my previous article in a way that no-one could ignore. Thanks to David Harry for dissecting it at SEOBS and for reviewing this follow-up (Editor’s note: the link to SEOB post was added by Ann. Cheers, Dave!). Thanks to all the readers for your support and understanding!

If you retweeted or otherwise shared my previous article, please share this one, too. Don’t let yet another SEO myth spread over the Internet. In fact, share it anyway; it will not hurt. Thank you!

Written By:

Val Danylchuk | Web Tracking | @webtrackingblog

Val Danylchuk is the author of Web Tracking Guide – an easy, step-by-step tutorial on tracking and maximizing your online profits.

More Posts By Val Danylchuk

  • Sometimes that toolbar PageRank lag is a couple of years.

  • I have an enormous amount of respect for you and the way you’ve handled the criticism. Huge props to you for that.

  • Anonymous

    @Val….Barry surely helped in all of this…I’d send him a Timmys – Dbl/Dbl of course!

    🙂

    Jim

  • http://twitter.com/iamlasse Lasse H. Kristiansen

    I agree with Donna. Well done, Val.

  • http://twitter.com/Webprotech WebPro Technologies

    I agree with dazzlindonna too.

    When you guest post, especially on such a famous blog, you reach a wider audience, and one has to be prepared for the bouquets as well as the brickbats.

    I like your conclusion:
    If someone publishes an article on a reputable site, that does not make him an expert. If someone is an expert, that does not mean he is always right. And even if someone is right, that does not mean the same applies to your situation.

    Well said and well managed. That’s the spirit. Huge props to you for that, and kudos to SEJ for giving you the opportunity for this post.

  • Eren Mckay

    Excellent – I respect you for being honest and open.

  • Hi Val,

    Regarding the Reasonable Surfer Model, we have had a number of statements from people at Google that not every link on a page carries the same weight, and that PageRank itself has transformed in many ways over the years since it was first introduced. Reasonable Surfer Model? Who knows?

    What intrigued me about the patent when I first read it and wrote about it was that it provided an intricate approach that a machine could use to make decisions about how much weight each link might pass along, and gave us some insight into the assumptions behind those decisions. Many of them made a lot of sense. For instance, a link with text the same color as the background it appears upon probably doesn’t pass along much PageRank at all.

    But, writing about search patents is a little like walking in a field filled with landmines. It’s really helpful to keep in mind that what you’re writing about may come to pass, or may already be in place, or may be implemented but transformed in many ways, some of those with some serious implications, or it might never see the light of day. Some patents that impact user interfaces are easier to see when implemented, while others that involve mostly algorithmic changes are much harder to recognize. (One of the words that appears in my Webmaster Tools list as a major keyword for my site is “may,” and another is “might” – you can see how often I used them in the previous sentence alone.)

    A number of Google’s patents have provided me with actionable steps that I could follow that made a big difference in how well pages ranked in certain areas, especially when it comes to local search. Many end up providing questions and ideas to be tested, poked, prodded, and experimented with. Most provide a view of search, search engines, and searchers from the perspective of people working at search engines.

    What I want to say to you is don’t be afraid to write about what you’ve found, and share your views, even if sometimes it seems like people are grabbing torches and coming after you as if you were Frankenstein’s monster. But also, don’t take something that you’ve read in a place like a Google patent, and take it as proof that the search engine is doing something. Take it as a possibility, and use it as a springboard to explore what they’ve actually come up with. As a primary source, directly from the search engine, it’s often better information than anecdotal information spread from one SEO to another, and transformed in the process often into nothing resembling the original tidbit of information.

    You’ve handled the criticism that people raised against your original post very well. I hope that you do keep writing, and questioning, and raising points for people to respond to, even if it’s with criticism. The discourse is what helps us all grow.

  • Oh, many thanks for the information! I have just started in SEO, and this is the first time I have heard about this! Thanks a lot for the new information.

  • Geebster

    Good for you for coming out with this backtrack, but the trouble is that in the SEO industry there are too many people attempting to be chiefs – over-analyzing every minute detail, attempting to garner interest in their own expertise, get links to sites and blogs, and create linkbait and (sometimes intentional) controversy, all to jockey themselves into positions of authority for the good of their business and websites.

    I am in no way saying there aren’t enough Indians – but rather that 90% of what is put out there as original material by SEOs when writing about SEO is regurgitated, whether it be right or wrong.

    As you’ve rightly pointed out – make judgements for yourself based upon reasonable observation and supporting evidence, and learn to separate what does work from the plethora of nonsense.

  • A couple of products have cropped up claiming to actively predict a website’s future PR – before the update.

  • Respect!

  • Kudos. Not everyone has the “bells” to admit when something went out that was not totally 100% accurate. Fact is, it is never really 100% accurate, as proven many times over. However, you invoke very important things here that are worth a lot – (practical conclusions)… Thumbs up

  • http://docsheldon.com Doc Sheldon

    Val, I’ll add my respects to the pile. You stepped in it, but you’ve cleaned off your boots, and moved on, wiser for the experience. Well handled recovery!

  • Val, lots of respect for you! DiTesco is right: not everyone has the “bells” to admit it, especially when things get out of control.

    The great part is that you seriously respect the image of guest blogging and also show others how to handle the situation when things go wrong.

  • Hi Bill,

    Thank you for taking the time to explain your view on this in such detail. I’m also pleased to hear your words of support.

    I think it is actually the biggest strength of social media that you can hear feedback immediately, and discuss and learn together. So I always welcome criticism, especially when it is well grounded and constructive.

  • http://samirbalwani.com/ Samir Balwani

    Seriously great follow up Val. Awesome article.

  • Hi Donna,

    Thank you again for suggesting the most likely explanation, and also for being friendly, constructive, willing to explain things and provide references.

  • Thank you, Moosa,

    Yes, there is probably something to learn from this situation for those interested in guest blogging.

  • Thanks, Doc,

    I like how you put it. You’ve got style!

  • Thank you,

    I think it makes sense to post an update when you learn more about the subject. Even more so if you had it wrong. And you are right, SEO knowledge is often hard to test, so it’s hard for anyone to be accurate 100% of the time. That’s why I believe we should be very open about our new findings, evidence and mistakes.

  • Thank you!

  • That’s interesting. They could actually provide some useful data, I just hope they don’t over-hype it and claim to be some sort of prophets. Honest analysis based on available knowledge with full disclosure of calculations could be useful to someone who has enough time/budget to research that.

  • I think it’s okay for some writing to be regurgitated as long as it’s reasonably accurate. There are always more people to reach who might not have learned about the particular facts yet.

    Still, your point is very valid. There are dozens of myths about online business and SEO which are floating around only because they have been repeated many times before.

    You should always test and verify any new ideas you apply to your business, as much as possible.

  • Thank you!

  • Thanks for your support!

  • Thank you!

  • Hi Jim,

    You also played your part in this. Thanks for your opinion!

  • This sometimes looks a little irresponsible on Google’s part. Of course they can’t guarantee fresh and accurate rankings for everyone all the time. But in this case, TBPR is more often useless than not. If they make it available and so highly visible, they should really do a better job of keeping it up to date.
