THE NEW YORK TIMES tried to block a web crawler affiliated with the famous Internet Archive, a project whose easy-to-use comparisons of article versions have sometimes led to embarrassment for the newspaper.1
The Times has, in the past, faced public criticism over some of its stealth edits. In a notorious 2016 incident, the paper revised an article about then-Democratic presidential candidate Sen. Bernie Sanders, I-Vt., so drastically after publication — changing the tone from praise to skepticism — that it drew a round of opprobrium from other outlets as well as from the Times’s own public editor. The blogger who first noticed the revisions and set off the firestorm demonstrated the changes by using the Wayback Machine.2
The New York Times declined to comment on why it is barring the ia_archiver bot from crawling its website.3
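Barring a specific crawler like ia_archiver is conventionally done through the Robots Exclusion Protocol: a plain-text robots.txt file at the site's root names a user agent and the paths it may not fetch. The article does not quote the Times's actual rules, but a minimal sketch of directives that would shut out the Internet Archive's crawler while leaving others unaffected looks like this:

```
# robots.txt — illustrative only; not the Times's actual file
User-agent: ia_archiver
Disallow: /
```

Compliance is voluntary: well-behaved crawlers honor these directives, but nothing technically prevents a crawler from ignoring them.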
The Wayback Machine
The Wayback Machine is the most popular part of the Internet Archive website. First introduced in 2001, the free online tool lets you go “back in time” and see what websites around the world looked like on earlier dates. The Wayback Machine features 562 billion web pages at the time of this writing, with many more added each year.4
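Archived captures are addressed with a predictable public URL scheme, `https://web.archive.org/web/<YYYYMMDDhhmmss>/<original-url>`; requesting such a URL redirects to the closest snapshot the archive holds. A small illustrative helper (the function name is my own, not an Internet Archive API):

```python
def wayback_snapshot_url(page_url: str, timestamp: str) -> str:
    """Build a Wayback Machine URL for `page_url` near `timestamp`
    (YYYYMMDDhhmmss). The service redirects to the closest capture."""
    return f"https://web.archive.org/web/{timestamp}/{page_url}"

# A capture of the Times front page from mid-March 2016, the period of
# the Sanders-article controversy described above:
print(wayback_snapshot_url("https://www.nytimes.com/", "20160315000000"))
```

This is how readers compare article versions: fetch two snapshot URLs with different timestamps and diff the results.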
(thanks to KM)
Ibid.
Ibid.
What is the Wayback Machine and Why is it Useful? (groovypost.com)
The New York Times wants their lies to be memory-holed.
He who controls the past etc etc...
Just another perk of being the official Narrative Enforcement Agents (aka Thought Police) of the GAE.