Friday, November 30, 2007

MSM Takes a Pepto-Bismol

If you don't want us reading your Zionist-controlled garbage, FINE!!!!

We'll get the news someplace else. A better place -- like the BLOGS!!!!!!!!!!!

"News sites seek content control; Groups want to limit what search firms can collect" by Associated Press November 30, 2007

NEW YORK - Leading news organizations and other publishers have proposed changing the rules that tell search engines what they can and can't collect when scouring the Web, saying the revisions would give site owners greater control over their content.

It's called CENSORSHIP and RESTRICTING INFORMATION!

Sieg Heil, "free" press!


Google Inc., Yahoo Inc., and other top search companies now voluntarily respect a website's wishes as stated in a document known as "robots.txt," which a search engine's indexing software, called a crawler, knows to look for on a site.

Under the existing 13-year-old technology, a site can block indexing of individual Web pages, specific directories, or the entire site. Some search engines have added their own commands to the rules, but they're not universally observed.
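
For anyone who's never looked at one, here's a minimal sketch of the kind of file they're talking about (the paths and crawler names are made up for illustration). Note the Crawl-delay line at the bottom -- that's one of those engine-specific additions that not every crawler honors:

```
# robots.txt -- served from the site root, e.g. http://example.com/robots.txt
# (all paths and bot names below are hypothetical)

User-agent: *                 # rules for every crawler
Disallow: /private/           # block indexing of a whole directory
Disallow: /drafts/page.html   # block a single page

User-agent: SomeBot           # rules for one particular crawler
Disallow: /                   # block the entire site

# A non-standard extension some engines added -- not universally observed:
Crawl-delay: 10               # ask for a 10-second pause between requests
```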

The Automated Content Access Protocol proposal, unveiled yesterday by a consortium of publishers at the global headquarters of the Associated Press, seeks to have those extra commands - and more - apply across the board.

With the protocol commands, sites could try to limit how long search engines retain copies in their indexes, for instance, or tell the crawler not to follow any of the links that appear within a Web page.
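
The article doesn't reproduce the actual ACAP syntax, so take this as a rough sketch of what such extended directives might look like -- the directive names and the time-limit notation below are my own illustrative guesses based on the article's description, not quotes from the proposal:

```
# Hypothetical ACAP-style extensions to robots.txt.
# Directive names and syntax here are illustrative guesses only.

ACAP-crawler: *                              # which crawlers the rules apply to
ACAP-disallow-follow: /news/                 # don't follow links inside these pages
ACAP-allow-index: /news/ time-limit=7-days   # keep indexed copies a week at most
```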

If the protocol commands are accepted by search engines, publishers say they would be willing to make more of their copyright-protected materials available online. But Web surfers also could find sites disappear from search engines more quickly, or find smaller versions of images called thumbnails missing if sites ban such presentations.

"Robots.txt was created for a different age," said Gavin O'Reilly, president of the World Association of Newspapers, one of the organizations behind the proposal. "It works well for search engines but doesn't work for content creators."

As with the current robots.txt, ACAP's use would be voluntary, so search engines ultimately would have to agree to recognize the new commands. So far, none of the leading ones have. Search engines also could ignore the new commands and leave it to courts to resolve any disputes.

Robots.txt was developed in 1994 following concerns that some crawlers were taxing websites by visiting them too many times too quickly. Although the system has never been sanctioned by any standards body, major search engines have voluntarily complied.
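
Complying is cheap, too -- here's a minimal sketch of how a polite crawler checks robots.txt before fetching anything, using Python's standard-library robotparser (the rules and URLs are made up):

```python
from urllib import robotparser

# Made-up robots.txt rules, as a list of lines:
rules = [
    "User-agent: *",
    "Disallow: /archive/",     # a blocked directory
    "Disallow: /print.html",   # a blocked page
]

rp = robotparser.RobotFileParser()
rp.parse(rules)

# A compliant crawler checks every URL before fetching it:
print(rp.can_fetch("*", "http://example.com/archive/story.html"))  # False
print(rp.can_fetch("*", "http://example.com/news/story.html"))     # True
```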

As search engines expanded to offer services for displaying news and scanning printed books, news organizations and book publishers began to complain that their content was being lifted from their sites and displayed on those of the search engines.

News publishers had complained that Google was posting their news summaries, headlines, and photos without permission. Google claimed that "fair use" provisions of copyright law applied, though it eventually settled a lawsuit with Agence France-Presse and agreed to pay the AP without a lawsuit being filed. Financial terms haven't been disclosed.

Take your stuff down, take it away, I don't give a shit!

Because that's what AmeriKa's MSM is -- GARBAGE!

That's why there are no advertisements here!

This is a FAIR USE FLOW of FREE INFORMATION!

I PAID FOR MY NEWSPAPER, SIR!!!!!!!!!!!!!!

And I have EVERY RIGHT to use the material, as long as I source it!