Google.be 28-sept-06

Everybody’s been saying lots of things about the Google.be case, especially that the Belgian newspapers should have used robots.txt to tell Google what not to index. And that the fact they did not use robots.txt clearly shows that all they were interested in was getting money from Google…

Well, friends, I’m no lawyer or legal expert of any kind, but I’m French… and that lets me read and “almost” understand the terms of the ruling… I guess…

I think the ruling makes it pretty clear what the Belgian newspapers want, and I think this has been misunderstood:

  • The papers welcome Google indexing and displaying their news as part of Google News! (or at least they don't care)
  • The papers' particular online business model is that news is free, but access to the archives requires payment. Example here.
  • Once an article falls out of the news category and into the archives category, it should not be freely accessible any more.
  • Google, via its world (in)famous Google Cache, often makes the content available forever, or at least for a very long time after it has gone off the official site's free area.

I guess that’s it: what the Belgian papers really want is a way to get their content out of Google News once it is no longer news.

Now, I’m no robots.txt or Googlebot expert either, but from what I understand there was no convenient way for the papers to tell Google that it is okay to index some content for, let’s say, 2 months, but not to keep it in cache after that delay.
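To illustrate (the /archives/ path below is made up), here is roughly all robots.txt lets you say: you can keep Googlebot out of a whole section, but there is no standard directive for “index this now, then drop it from your cache in 2 months”:

    # robots.txt -- hypothetical paths, just for illustration
    # Keep Googlebot out of the paid archive section entirely:
    User-agent: Googlebot
    Disallow: /archives/
    # ...but nothing here can purge copies cached while an article was
    # still free, and there is no "expire after 2 months" directive.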

Google made some general comments on the case on their blog, but:

  • They are not allowed to comment specifically on the ruling, so it's not that useful;
  • They failed to show up at the trial, which is quite unbelievable... but it would almost make it believable that they failed to understand the real issue that was raised... :roll:

Note: again, I’m no legal expert. Just trying to make a little sense of all this noise…


Comments from long ago:

Comment from: Kochise

You could add, at the end of each line in robots.txt, the date after which the file should not be referenced anymore :) I think it’s easy to change a text file format (CSV); what are parsers for otherwise?

Kochise

2006-09-28 09-08

Comment from: François Planque

hum… Kochise, are you sure? where did you find that??

2006-09-28 09-33

Comment from: Danny Sullivan

It’s very easy. You simply put a meta noarchive tag on each page you don’t want to have archived. That means Google will index the page, so you can find it in a search, but you won’t be able to view a cached copy ever. After two months, if you put the page in a registration required area, it will even drop out of Google entirely.

2006-09-28 11-58

Comment from: Peter

Yes, Danny is right.

2006-09-28 17-24

Comment from: François Planque

Danny’s got to be right! :)

I asked him by email whether there would be any side effects in telling Google not to cache. I mean, Google needs the Cache to determine exact relevancy at search time. Not being in the Cache could restrict you to the supplemental results only.

Danny answered that people were worried about that some time ago, but that he hasn’t seen any such worry for a while.

It is possible that meta noarchive just hides the cached-copy link in Google’s results while Google still caches internally. In that case everything is okay.

Now I wonder: why don’t we all use meta noarchive? What good can it do to have content publicly available from Google’s cache instead of the original site? ;)
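For reference, here is what (I understand) Danny means; the noarchive tag goes in each page’s <head>, and the googlebot variant targets Google’s crawler specifically:

    <!-- page stays in the index, but no "Cached" copy is shown -->
    <meta name="robots" content="noarchive">
    <!-- or, to target only Google's crawler: -->
    <meta name="googlebot" content="noarchive">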

2006-09-28 17-34

Comment from: Angie Medford

There have been plenty of cases where a page will be removed by the webmaster and get 404ed, but if you search Google with text that appeared on the original page, the cached copy still shows up in Google. So Google is serving an index of “fresh” pages + caches of removed pages that webmasters and content owners have removed from the Internet completely. That isn’t right. It’s harmful. And pretty evil.

2006-09-28 17-36

Comment from: François Planque

Hum… I also wonder how Google responds to a “410 Gone” response (instead of “404 Not Found”). The HTTP spec says:

“The 410 response is primarily intended to assist the task of web maintenance by notifying the recipient that the resource is intentionally unavailable and that the server owners desire that remote links to that resource be removed. Such an event is common for limited-time, promotional services and for resources belonging to individuals no longer working at the server’s site. It is not necessary to mark all permanently unavailable resources as “gone” or to keep the mark for any length of time – that is left to the discretion of the server owner.”

I think it should definitely unindex and uncache the page in that event.

The remaining hacky solution would be to replace the gone page with a blank page containing a meta noarchive tag ;)
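A minimal sketch of both options on an Apache setup (the article path is made up, and I have not tested how Googlebot actually reacts to either):

    # .htaccess -- hypothetical path, untested against Googlebot
    # Option 1: answer "410 Gone" instead of "404 Not Found" for a removed article
    Redirect gone /news/2006/some-old-article.html

    # Option 2 (the hacky one): keep serving a blank page whose <head> contains
    #   <meta name="robots" content="noindex,noarchive">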

2006-09-28 17-42

Comment from: François Planque

Thanks for pointing that out, Ben. I think it pretty much makes it clear that the papers could have fixed their issue without going to court! ;)

Another thing I understood from the ruling is that the papers were pretty much pissed off by the fact that Google never listened to their concerns in the first place. I can believe that… since Google didn’t even show up in court! :>

Maybe Google could afford a little tech support… even to Belgians ;)

As said in the comment on Reddit, you practically need to be an SEO to know there is a solution. (That also applies to Danny Sullivan above I guess ;)

2006-09-28 19-05

Comment from: nordsieck

Sounds like the solution for Google is to totally ban all links to the affected newspapers.

I find issues of copyright and the web nonsensical - you had to make about 15 copies of this comment (along with the rest of the text on the page) in order to view it, between the inter-router hops, your browser’s cache, the in-memory version of the article, etc.

The fact that (at least in the US) everything is automatically copyrighted, and that very few websites specifically grant people the right to copy, flies in the face of their actions - stuff is on the web (generally) to be viewed (and that means copied) by everyone, as much as they want.

The entire situation simply doesn’t make sense.

2006-09-28 23-50

Comment from: Kochise

@Angie Medford : “So Google is serving an index of “fresh” pages + caches of removed pages that webmasters and the content owners have removed from the Internet completely. That isn’t right. It’s harmful. And pretty evil.”

Google isn’t that bad: if you fall on a 404, it just can’t harm people anymore. Otherwise the WebArchive may harm more people around the world than Google!

Kochise

2006-09-29 09-23

Comment from: John

In defense of the Google (or other) cache: caches are VERY useful and provide a positive good to society by exposing hypocritical sites that post something controversial, then withdraw that posting (which is OK), but then try to claim that they never posted the material in question in the first place (which is REALLY evil).

Also, most people here are missing the main point of Google’s objection to the ruling: their home page is ‘sacred’. It is a key part of what makes them Google - the home page is simple and uncluttered.

There’s no reason the court couldn’t have compromised and permitted them to simply add a prominent link from their home page to the settlement. There’s no reason the text of the settlement itself has to appear on the home page.

2006-10-02 08-08

Comment from: Nikhil

In response to Angie Medford: there was once a case of a server crashing at a major university. This server had a database of rare, historical documents. Because of Google’s system of caching this information, the university was able to recover a large percentage of the information from Google’s cache. So I would not really call the system of caching webpages/websites an “evil” practice. Any resource or utility can be used for good or bad purposes. It all depends on an individual’s or institution’s intentions.

2006-10-02 21-12

Comment from: Stefaan Vanderheyden

Why should Google be held liable for another company’s inability to correctly manage their own web content?

It’s sad that Google did not appear in court. The judge’s ruling seems irrelevant in light of the fact that Google has always provided a technical means whereby the Belgian newspapers could easily prevent users from linking to a cached version of their copyrighted articles. There are many sites which correctly use the meta noarchive tag to do just that.

It looks like the judge was simply defending a group of incompetent publishers’ right to continue being totally incompetent…

This fact becomes even clearer when you note that certain “archived” articles are available for a “1 credit” charge via LeSoir’s search box, but the same article remains accessible for free via another link on exactly the same website:

link
or
link

Luckily, I do not own shares in Rossel et Cie SA (editor of Le Soir Magazine), ‘cause it seems to me that they do not know what the hell they are doing…

2006-12-13 18-10

Comment from: Click

Seems to me like the papers, and possibly even Belgium, are just trying to say something to the effect of “we demand to be taken seriously,” and would have gone to court (and returned) over this even if they knew how to fix the problem technically. A judge from Belgium would be almost guaranteed to rule in favor of the papers every time, as it is in the best interest of Belgium, even if it seems a little unfair from an international perspective.

2007-02-22 05-19

Comment from: Thor Ingason

As a person battling to get an old website (which someone else submitted) out of Google’s cache, I feel this scenario is REVERSED. It is the site webmasters who should have to put in a meta tag to HAVE their sites archived, not the other way around. Google should NOT archive people’s sites without being asked to.

2009-07-07 17-04