Revealing that which is concealed. Learning about anything that resembles real freedom. A journey of self-discovery shared with the world.
Have no fellowship with the unfruitful works of darkness, but rather reprove them - Ephesians 5:11
Join me and let's follow that high road...
Thursday, June 6, 2019
YouTube to delete thousands of accounts after it bans anyone it deems a threat to the NWO
Oh, so it's like that, is it? This is what happens when you sell out to a CIA front company like Googleye. Fascism. This is Fahrenheit 451 and Nazi Book Burning 2.0.
Authored by Kit Knightly via Off-Guardian.org,
YouTube has just announced they have changed their “community standards” to combat “extremist content” on their platform. This is just the latest step in the war against free speech online. This move comes as no surprise – the press have been laying the groundwork for this for weeks, even months.
Three weeks ago Buzzfeed reported that YouTube’s monetised chat was “pushing creators to more extreme content”, and just yesterday it was reported that YouTube’s recommend algorithm was “sexualising children”. You cannot move for stories about how bad YouTube is.
Given that, it comes as no surprise that the mainstream media are celebrating this latest “purge”. The Guardian reported:
Both these headlines are wildly inaccurate, deliberately playing the racism/white supremacy angle in the hopes that people will clap along without reading anything else. Vox was a little more truthful in its headline, reporting:
YouTube finally banned content from neo-Nazis, Holocaust deniers, and Sandy Hook skeptics
The Independent likewise:
YouTube to delete thousands of accounts after it bans supremacists, conspiracy theorists and other ‘harmful’ users
However, even these headlines – though a touch closer to the whole truth – leave out some really important information (I’m sure entirely by accident).
As much as the media are playing the neo-Nazi/hate speech angle, there’s far more to it than that.
To really dig down into what this means, we need to ignore the media and go straight to the source. This is YouTube’s official statement on the matter, posted on their blog. The bans, contrary to the media headlines, are not about racism. They are far more incoherent than that – they are about “supremacist content”.
YouTube’s delightfully vague description of which is as follows:
videos alleging that a group is superior in order to justify discrimination, segregation or exclusion based on qualities like age, gender, race, caste, religion, sexual orientation or veteran status.
Honestly, almost any video you wanted – that expresses a political position – could be twisted into fitting that description. But it doesn’t end there:
Finally, we will remove content denying that well-documented violent events, like the Holocaust or the shooting at Sandy Hook Elementary, took place.
What does “well documented” mean? It’s a deliberately ambiguous phrase.
The cited examples, the Holocaust and Sandy Hook, are chosen for shock value – but they are only examples: “like the Holocaust”.
What other examples might there be? The Douma gas attack from last year? The poisoning of Sergei Skripal?
You can’t deny people the right to ask simple questions. “Did that really happen?”, “Is the government telling the truth?”
These are the basic questions of journalism. You can’t simply pass history off as “well documented” and put it beyond question. Don’t let them cite the Holocaust as an example to bully you into silence. Free speech applies to all topics, and all opinions, no matter how “well documented” they are.
In an increasingly fake world, where government actions are routinely narrative-based rather than reality-based, outlawing the ability to simply say “that didn’t happen, you made that up!” is incredibly powerful.
It doesn’t stop at that either; “violent incidents” are just the start. There are other kinds of “harmful content”:
harmful misinformation, such as videos promoting a phony miracle cure for a serious illness, or claiming the earth is flat
Again, note the use of extreme examples – flat earth and “miracle cures”. It’s manipulation. What they’re talking about is “well documented” science. They mean the big three: climate change, GM crops and vaccinations. Questioning any of those will become “harmful”.
People will say “obviously people shouldn’t be allowed to question vaccination”, but they’re wrong. People should – people must – be allowed to question everything. That’s what free speech means. Imagine this was seventy years ago, when the corporate consensus was that smoking was good for you. Studies saying otherwise would have been described as “harmful misinformation” that was “shaking public confidence in our industry”. Whether censoring lies or censoring truth, censorship serves the same agenda – protecting authority.
What is “harmful content”? Harmful content is anything that attacks the “well documented” official consensus.
For that matter, what is hate speech? The phrase is used half-a-dozen times in the statement, but it can mean all kinds of things.
Critics giving bad reviews to Star Wars: The Last Jedi and the Ghostbusters remake were described as “misogynists” just because the main characters were women. Will poorly reviewing films with a female, or ethnic minority, main character be hate speech too?
This might seem a trivial example, but it hands enormous power to film studios to shut down negative opinions on their films, and Hollywood is a huge propaganda outlet for mainstream ideology. Besides, the triviality is the point.
This blanket term can be applied anywhere and everywhere, and with the increasingly hysterical tone of identity politics, almost anything could be deemed “hate speech”.
As we have said many times, “hate speech” is a term which can mean whatever they want it to mean. YouTube are expanding on that though, creating a whole new category called “almost a bit like hate speech”.
Yes, you don’t even have to actually break the rules anymore:
In addition to removing videos that violate our policies, we also want to reduce the spread of content that comes right up to the line.
See? YouTube will ban channels, or at least suppress creators, who “bump up against the line”.
Meaning, even if you’re incredibly clever, and work seriously hard to keep anything that a dishonest mind could potentially twist into “hate speech” out of your content…they’ll just ban you anyway and claim you “nearly did hate speech”.
Another way they’re combatting all this “dangerous misinformation” is by “boosting authoritative sources”:
For example, if a user is watching a video that comes close to violating our policies, our systems may include more videos from authoritative sources (like top news channels) in the “watch next” panel.
For example, if you watch an alt-news interview with Vanessa Beeley, your next “recommended video” will be a piece of western propaganda (sorry, “mainstream news from a massive corporate interest”… I mean, an “authoritative source”) telling you to ignore everything you just heard, and/or calling Beeley an “apologist for war crimes”. It’s a beautiful system, really. Very efficient and not-at-all Orwellian.
Don’t worry though, you can still use the platform, as long as Google trusts you [emphasis ours]:
Finally, it’s critical that our monetization systems reward trusted creators who add value to YouTube. We have longstanding advertiser-friendly guidelines that prohibit ads from running on videos that include hateful content and we enforce these rigorously… In the case of hate speech, we are strengthening enforcement of our existing YouTube Partner Program policies. Channels that repeatedly brush up against our hate speech policies will be suspended from the YouTube Partner program, meaning they can’t run ads on their channel or use other monetization features like Super Chat.
See? If you’re a “trusted creator” you still get your ad money. Just don’t break the rules – or even come near breaking the rules – or the money stops.
This is about creating an environment free of hate, and NOT enforcing a state-backed consensus using vague threats to people’s financial well-being. Shame on you for thinking otherwise.
Now, how will YouTube decide which stories “come up to the line”, “spread misinformation” or count as “hate speech”? How is it determined which users are “trusted creators”? Well, simply put, the government will tell them.
YouTube freely admits to this. Outside of its wishy-washy definitions, its incredibly vague buzzwords, and its platitude-filled “reassurances”, the most important part of YouTube’s statement is this:
As we do this, we’re partnering closely with lawmakers and civil society around the globe to limit the spread of violent extremist content online.
“Partnering closely with lawmakers” means “working with the government”, essentially an admission that YouTube (owned by Google, in turn owned by Alphabet Inc.) will remove any videos the state orders them to remove.
Something we all knew already, but it’s refreshing they’re admitting it.
So, some questions arise:
Will this be the death of YouTube as any kind of source for alternative information?
What will be classified as “conspiracy theories”?
What about, for example, people questioning the official story of the Douma “attack”? Or MH17?
How long before there is a mass migration to rival platforms?
Will those platforms be allowed to exist?
In the meantime, we suggest migrating to other video platforms, such as d.tube or bitchute.
* * *
Here is an initial list - courtesy of @infElePro - of those affected by YouTube's purge so far...
— Jack Posobiec 🇺🇸 (@JackPosobiec) June 6, 2019
Finally, SHTFplan's Mac Slavo notes that we all knew the censorship would be ramped up sooner or later. There just aren't enough people left willing to fall in line with globalist and authoritarian ideals anymore without it.
Sadly and quite horrifically, the difference between the Nazi book burning of the past and the technology giants’ censorship of today is support. People all over the globe condemned the censorship of the Nazis, while today, people are pushing for others to be silenced. We live in a disturbing time in history, for certain.