How to bypass Google's sandbox
There was apparently some talk at SES Chicago about getting a site indexed in
the regular results a lot quicker. It involved taking an existing,
established, related domain (either one you own or one you bought)
and using it to help get a new site out of the "sandbox." The full
forum post can be found here.
I want to mention first that the technique is
in the gray-to-black range. However,
if your site has been boxed for a while, this may be an alternative
for you. It also requires some coding, and it assumes your site is
built in PHP, although I would guess similar ASP code exists.
First, let's look
at how this works: you have an established, related domain and a new
domain which is 'boxed.' By establishing a subdomain on the
established site and mirroring the content of the new domain there,
you will get the new subdomain indexed more quickly, because it
inherits some of the trust of the main domain.
Once it has
established itself, you would use some form of redirect (likely a
301) to send crawlers to the new domain. The new domain then
inherits whatever the subdomain gained in link popularity and trust
from the established, trusted domain.
It sounds straightforward, but there are a few things you need to do.
The first is to find an established domain. If you need to buy an expired but
still relevant site (and it's in your budget), the author recommends
you do so. You would also not change the registrar information,
according to the author (this is the part that sits in that dark gray
area). You don't want to
change the registrar information because there is a chance that
Google will notice the change in ownership, and any trust the domain
you just bought had previously earned would likely be lost.
So let's say you
just bought a related domain that's been around for a couple of years
and has a PageRank of 5. By leaving the site intact and not changing
the registrar information, you are essentially ensuring the site
maintains its existing stature in the engines.
Then you will
create a subdomain on the site. Here you will place a mirrored copy
of all your content, navigation, etc. from your new site. Since the
new site hasn't been added to the index yet, there will not be a
duplicate content penalty.
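As a rough sketch of that mirroring step (my own illustration, not the author's code; newdomain.example is a placeholder), a simple PHP passthrough on the subdomain might look like this:

    <?php
    // Hypothetical mirror stub for the subdomain. Every request is
    // answered with the matching page from the new site, so the
    // subdomain serves content identical to the new domain's.
    // Requires allow_url_fopen; newdomain.example is a placeholder.
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
    echo file_get_contents('http://newdomain.example' . $path);
    ?>

In practice, simply copying the files over to the subdomain works just as well; the only requirement is that the subdomain serves the same content as the new site.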
You will also use
some PHP code to change the page header information served, so that the
webserver reports the page as having been created earlier than it
really was (the suggested PHP code is found in the forum post linked above).
By having the webserver say the pages are old, you are informing the
crawler that the pages are old as well.
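The author's actual code is in the forum post; purely to illustrate the idea, sending a backdated Last-Modified header from PHP could look like this (the date shown is an invented placeholder):

    <?php
    // Illustration only: report a fabricated, older modification date.
    // A crawler that honors Last-Modified would take this page to be
    // from early 2003 rather than from today. Must be sent before any
    // page output.
    header('Last-Modified: Tue, 14 Jan 2003 10:00:00 GMT');
    ?>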
This is because
the crawler requests this information from the webserver at the time
it crawls the page. And because you've established a completely new
section within an established domain,
the new section will get indexed sooner than the new domain would be.
It will inherit
link popularity and trust from the parent domain, allowing it to
establish itself more quickly than the new site could on its own.
Once the subdomain has been fully indexed by Google, you will want to redirect
it to the new domain.
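A sketch of that redirect in PHP (again my illustration, with newdomain.example as a placeholder) could be dropped onto each page of the subdomain:

    <?php
    // Hypothetical 301 from a subdomain page to its counterpart on
    // the new domain. A 301 tells crawlers the move is permanent,
    // which is what passes along the accumulated link popularity.
    $path = parse_url($_SERVER['REQUEST_URI'], PHP_URL_PATH);
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://www.newdomain.example' . $path);
    exit;
    ?>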
By doing this you
have allowed the content to be found by Google, which then assumes
the pages are properly aged because it has been told by the
webserver that the pages are in fact old (even though in reality
you've recently created them).
And by redirecting the subdomain, you are then passing the link popularity
and trust given to the subdomain by the main domain on to the new site.
The reason this
works is that the established site is already trusted by Google.
Therefore the vote from the trusted site helps demonstrate to Google
that the new site can be trusted as well.
There are some
things to consider with this tactic, however.
First, now that it's
been widely publicized, I wouldn't expect it to take long for
Google to notice the hole and patch it.
Also, the whole
trustbox patent is based partly on authority but also on age. So
while a page may appear old (because you've altered the page headers
served), Google may opt instead to consider the page's age from the
time it first found the page.
In other words,
even if the page is a year old, if Googlebot just found the page
yesterday then it's only one day old. While the patent does talk about
"scoring the document based, at least in part, on the inception date
corresponding to the document," it also goes on to say that Google
could determine that inception date to be not the date on the page,
but the date when it found the page.
And remember that,
as with any type of blatant manipulation, you risk being penalized by
Google. Don't forget that Google engineers also visit these forums, so
they are keenly aware when new tactics are shared which are
designed to circumvent the current algorithms.
It is therefore
their job to fix those holes, and likely also to find ways to penalize
the sites taking advantage of them. While no one can prove or
disprove this theory, I've heard of enough sites that have been
removed from the index for doing something they weren't supposed to.
So while this may sound like a great way to get yourself out of the 'box'
early, consider the alternatives. What if you do get out of the
sandbox early but Google catches on in 3 months, 6 months or more? Do
you think they may decide to "backdate" any changes to your site if
they determine that you participated in such a tactic? Then, not
only are you back where you started, you could be worse off than if
you had just taken your lumps and done things properly.
Rob Sullivan - SEO Specialist and Internet Marketing
Consultant. Any reproduction of this article needs to include an html link
to Search Engine Optimization.