This is the challenge with moderation. We’ve moved from “actual Nazis” to Richard Hanania to “anti-vaccine nonsense” and “ignorant mistakes,” all of which someone wants banned from this platform for their own sincere reasons. (Not to mention the Jesse Singal Wars.)
In my experience, 100% of keyboard “free speech warriors” take any view besides TOTAL HANDS OFF FREE FOR ALL LET THEM DO WHAT THEY WANT as “Oh, so you think issue X is the same thing as Nazism?? You want them banned off the platforms?!?!” And it’s like....no, my dude, reading comprehension. In every post I include language like “I’m not suggesting anyone be kicked off for their crazy views.” That is absolutely not the same thing as “hey, maybe monitor your algorithms and provide fact-checking guardrails to make sure posts about injecting bleach and eating horse dewormer paste don’t go viral and whip around the internet faster than the speed of a human neuron.”
In other words, keep them on, but don't promote them? When you mentioned the toxic nonsense in your feed, your complaint wasn't that it was allowed but that it was placed in your feed?
Pretty sure I understand, but double-checking is good.
More or less, yes. When it is promoted on their official podcasts, as featured authors on "Office Hours," and amplified through Recommendations and in the feed, to me that tips from being a neutral utility to putting their thumb on the scale and exercising editorial judgment. If people want to actively search and find whatever subculture they want and choose to subscribe there, OK, fine. But Substack has arguably used some of these controversial lightning rod people cynically to promote their own brand, like "Look at us, we're the cool place with all these edgy CANCELED writers" (fact check: none of these people have actually been cancelled, they're all doing fine financially, some of them better than ever)
It appears you have quite weak reading and reasoning abilities. I'll help you out with a few pointers:
(1) This is NOT a public forum, it is quite literally a privately-held, for-profit publishing platform that operates on an opt-in, "walled garden" subscription model
(2) Calling for censorship does not mean someone is less than human. If I don't want the original version of "Saw" playing on Nickelodeon during prime time hours, that does not in any way imply I think the director, writer, actors, or studio are less than human; it means I don't think kids should see explicit gore. Even in a very pro-free speech country like the US we censor things all the time; for example, you're not allowed to threaten to kill the president even if you don't like him, because we've decided there is a public interest in not threatening political violence.
(3) Point #2 above isn't even relevant because at no point did I call for people to be censored! It is not censorship to say there should be appended fact checks, or maybe the "Recommended for You" tab shouldn't blast crazy nonsense like covid vaccines will change your species across anyone browsing the Science category, or maybe when picking from the tens of thousands of writers on your platform to spotlight on your podcast you DON'T pick a well-known racist. Just because people aren't forced to listen to your crazy bullshit doesn't mean it's censorship.
(4) It is funny that the second someone disagrees with these so-called free speech absolutists they fly into a rage and threaten to punish their perceived enemies, you're really telling on yourself there. Also, it is helpful to know who to avoid on these platforms, so thanks for the tip!
I've been in the process of considering alternatives for a bit, in part because of a recent discovery that Substack's SEO has issues but also because of issues like this one, but I've been having a heck of a time with the alternatives. Going back to Medium doesn't work because of the paywall-gate on content, Wordpress + addons is bloated, Buttondown is run by one vigilant (and likely overworked) guy, and Ghost, for some reason, isn't importing my historical posts and has elements that require you to be an IT engineer to overcome. This isn't me justifying staying or defending Substack but providing more insight into why folks stick around, at least in the short term.
This is why I really appreciated Ken's paragraph about being undecided about moving. I think quite a few folks, especially those of us with much smaller followings, utilize Substack due to its ease of use and relatively mature, low-to-zero-cost feature set, and choose to deal with their side-eye-worthy content policies in part because of this. But every time a statement like this is made, even though Substack has, as Ken said, likely done the calculus, it pushes us further in the direction of leaving the platform, regardless of how that might affect things.
UPDATE: The Ghost community helped sort out my post import issue, so I'm moving to Ghost. While this increases my costs from, well, zero to not-zero, and there will likely be technical hurdles, for a variety of reasons besides the content moderation problem, it makes sense for me. And I bet I'm not the only one running the calculus right now.
I have no illusions that Substack will suffer a significant hit from this latest response - after all, like Ken says, they've figured these are acceptable losses from a business standpoint. But on a side note, one of the most damning side statements from the response is the part about "even if they were a minority of one" they'd have said the same thing. Not a great feeling for a company that you'd think needs to build rapport with their community.
That caught my ear as well, but I couldn't quite identify whether or not it was cribbed from elsewhere. I found one passage from Amos that almost fits, but I had to throw in the towel and admit to being insufficiently learned.
Kudos for correctly using the phrase "begging the question".
Like a downed pilot living in the jungle fighting a war lost decades ago.
I've probably seen that phrase 500 times in the last few years. This is the first time I've seen it used correctly since I was in college (a long, long time ago).
http://begthequestion.info/
Yes! I'd just about given up ever hearing it used correctly again.
I would expect nothing less from Ken White!
Substack Commenters: You just call everyone Nazis, there aren’t real Nazis
Substack: https://substack.com/@popehat/note/c-45876447?utm_source=notes-share-action&r=8qq7y
Nazi=anyone you consider your enemy
Who do you think you're fooling?
Welp, at least he didn't put too fine a point on it...
anyone who would like to read Geddy Lee's excellent autobiography can decide for themselves if the "holocaust never existed." it was one of the most harrowing reads that i never expected from a music biography. it's super important. god, these idiots.
Just finished that chapter last night. What a horrific experience his parents and their families went through. I knew they were survivors, but didn’t know the full story.
So happy substack removes porn, because I was really concerned about porn stars trying to start race wars. Oh, wait....
I left Twitter because of the crap there. I can leave substack just as easily. And if they don't ban Nazis soon, I will.
Substack disallows porn because they don’t want to get cut off from the credit card system.
Then the apologia probably should have said “we only ban speech that we think will lose us money.”
Exactly. And they think that being Nazi-curious won't cost them money. That's the whole core of the issue.
Never believe a for-profit corporation when it starts telling you how its goal is the betterment of humanity. It's a lie.
That's what it said (to me) just without being so explicit (pun intended? your call) about it.
That was my take away.
Good enough reason for me. I don't think anybody here has an itching for porn, and if they do, it is easily obtainable.
I'm just here for the RICO porn
Bring it!
Misses the point.
And when I deleted my last twitter account (I used it to promote my writing on Product Management) in April, the last few times I logged in, I was inundated with hardcore quasi-amateur porn. All their filtering seemed to be broken, and a lot of the OF/Porn accounts had purchased the blue check and that guaranteed their content being smeared like peanut butter over my glasses.
Ick.
(no, I don't judge the creators, they are/were doing what they needed to survive and make a living. It was just that all the filters were hopelessly broken, and the incentives of the premium/plus levels of subscription drove their behaviors.)
"I dislike McKenzie’s apologia for Substack’s policy and for Richard Hanania because it has a sort of detached, sociopathic philosophy popular with techbrahs that all differences of opinion are equal"
^^^^^!!!!!
Will follow you anywhere -- fan in WEST TEXAS (okay, and usually Austin :)
💯 “McKenzie’s apologia deeply annoys me because it treats me like I’m a moron.” Exactly my reaction.
If the greatest annoyance of being a defender of free expression is spending a great deal of your time and effort defending scoundrels, then the second greatest annoyance is all the scoundrels spending a great deal of time and effort pretending that their self interest is really a grand defense of free expression.
Whether it's Elon Musk (a fierce advocate of free speech except when he's censoring people he doesn't like), the university presidents (taking a break from their usual draconian "hostile environment" enforcement to become champions of free speech on this one issue), Donald Trump (Musk's position, just with a bigger stick), or Substack, there is an endless parade of wolves cloaking themselves in the guise of free speech.
I suppose we should take it as a compliment of sorts. When bad people are pretending to uphold your values, you know that you have traction with the public.
It really is annoying, though.
Well put.
There’s a difference between protecting their speech and pretending they’re good people.
"...you know that you have traction with the public."
i doubt that that necessarily follows; it seems like it just cheapens/muddies the value in question, which is only ever a win for demagogues.
Demagogues choose their political appeals because they work.
The university presidents didn't say that they were making sure those uppity Jews knew their place (although someone in a similar position 100 years ago might have), they didn't claim that these were private decisions at private universities and therefore none of Congress's business (that may be true, but it wasn't going to help them politically), and they didn't say that they were combatting colonialist oppression by siding with the Palestinian supporters (that might have been popular back on campus, but it wasn't going to play in Peoria).
They said they were upholding the principle of free speech. That tells us that among their available arguments, they thought that would be the best received.
That tells us that even these people who don't really believe in it think that free speech is popular with the public.
To be fair to the administrators (and I have no particular inclination to be so, but this argument does make itself) the argument they made was literally the one they'd been told to make (albeit not in so many words) by the exact same people who were "holding them to account". Republicans have spent much of the past decade hammering universities for not being friendly enough to casually genocidal talking points and using "free speech" as the fig leaf.
That just strengthens my point. Republicans are hardly notable as champions of free speech other than their own, and yet they too choose to hammer Democrats on this point.
I am a big fan of Milton Friedman's dictum that the key to success in politics isn't trying to elect good people (a fool's errand), but rather creating incentives such that even bad people find it advantageous to do good things.
But therein lies the rub - all this talk about "free speech" as a metaphor for "my specific speech" and/or hate speech isn't making speech any more free; it's just muddying the waters about what free speech actually means.
So, if your point is simply that people are talking about free speech more, then yes my own point strengthens that one. But if you want all that talk to /mean something/ as per Friedman then I don't agree, because what Republicans and Substack are doing is weakening the usefulness of the concept by skunking the term.
TL;DR: *Some* concept of free speech has become popular with the public, for a given definition of popular, but I'd argue not a useful definition - and different segments are using different definitions.
"Hate speech" is free speech.
Substack and Republicans may be employing free speech selectively, but they are still making the argument that we should not silence people based on their point of view, which is the essential principle of free speech. They are simply eliding all the instances in which they are not upholding that principle.
Thankfully there are Democrats available to call them to account in those instances, just as there are Republicans available to hold Democrats to account for, say, attempts to ban hate speech.
That's where Friedman's principle applies. Not because either side is a particularly strong friend of free speech, but because an accusation that someone is against free speech remains a powerful political weapon, which limits the amount of censorship politicians, university administrators, police officers, or anyone else in a position of authority can engage in. It is protection even in those areas where the first amendment does not apply.
my point was that hijacking a buzzword for one's audience, who didn't comprehend or care about the real concept behind it in the first place, isn't an actual appeal to that concept, and it subsequently insulates that audience's idiocies when they can just shout back "no WE are the ones who care about [concept-buzzword]"
i certainly didn't deny it "works". i just don't share in being indirectly encouraged by its deployment by scumbags.
https://bsky.app/profile/nzheretic.bsky.social/post/3k64tm7op642o
Left leaning replies to Mainstream Media posts have been disproportionately shadow-banned to the point of being invisible, meanwhile the right wing anonymous unsubscribed replies still appear.
Great job, as always, of parsing a messy subject. I don't like Substack's decision, but as you rightly point out, it's extremely easy for readers to avoid undesirable content on this platform, unlike some others. I spend quite a bit of time on Substack and can honestly say I've never encountered any of the nazi crap. I can see why you might not want to help Substack make money now, and I understand why their justification is irksome, but unless and until there's a functionally better alternative — for you and your readers — I hope you'll stay.
There’s a writer’s pleasure in drafting a footnote calculated to vex Reply Guys and not only for it to work but the Reply Guys can’t resist outing themselves as Reply Guys by kvetching about it.
Every time I see the term "reply guy" my mind pulls up a picture of a Mario Bros. Shy Guy.
https://www.mariowiki.com/Shy_Guy
The smartest commentary I've seen on this affair.
I appreciate the footnote, as I was legitimately confused that you were defining “Nazi” as “racist.”
I’m projecting here, but I wonder if pure annoyance is the reason Substack is making the choices it has. I wouldn’t look forward to fielding incessant complaints and never-ending arguments about who is, and who isn’t actually a “Nazi.” Especially if I’d built my platform in a way that one needs to seek out the offensive (or good) stuff.
Ultimately, I think the question becomes how much any of us needs the companies we do business with to share our values.
As I suggested, I think it’s completely legit to argue that it’s hard to come up with a principled definition of what you’ll allow and what you won’t.
I'm sorry Ken, but your repeated and remorseless commands to "snort my taint" are causing a moral panic and as such you will be deplatformed.
Fair.
Pretty sure this falls under the banner of 'sexual gratification' and 'pornographic content'. Ken is getting nuked by the Substack laser satellite.
They only ban _visual_ porn, not _written_ porn.
I don’t necessarily assume that racists talking on Substack has caused “direct harm” to people or that removing such people from Substack would prevent such harm.
I’m sure it’s possible. But I’d need to see it.
You won't see it because legit examples do not exist. Katz has been repeatedly taken down for lack of proof there are any Nazis on Substack.
And then there's this lovely footnote to clarify how the word is being defined:
“I use “Nazis” throughout in this post as shorthand to refer to an array of right-wing bigots and assholes with the secure knowledge that doing so will offend and annoy the people I intend to offend and annoy. Merry Christmas!”
From this essay
https://open.substack.com/pub/kevinmkruse/p/moving-forward?utm_source=share&utm_medium=android&r=16mlr
Perhaps you should start by reading this
https://open.substack.com/pub/thestorytellerscorner/p/on-this-supposed-problem-on-substack?utm_source=share&utm_medium=android&r=16mlr
Not "basically." That's exactly what I wrote.
Since you have all this evidence of real-world harms caused by Substack speech, go ahead and share it.
I remain skeptical that it exists, and based on the 500 other comments you've hammered across this thread, I think you just want to shut up people that you find disagreeable.
Thank you for this very helpful response. It helped me better understand and classify my concerns, and I just wanted to flag that I'd follow your writing to just about any new platform.
I don't understand your objection. Yes, it's true that they remove certain speech but why is that an issue?
It's certainly possible to believe, indeed I actually do believe, that in general trying to deplatform Nazis from substack or other bad ideas isn't particularly effective and has more harms than benefits. Indeed, I think that not doing so does create valuable social benefits (fewer people can scream their ideas are being oppressed, which makes those ideas less sexy).
I also believe that other forms of speech such as imminent calls for violence or doxxing present different costs/benefits and there the costs exceed the benefits. Indeed, that's made more plausible by the fact that it roughly tracks the kind of situations (tho not the specific tests) where we don't extend 1st amendment protections (invasion of privacy torts, harassment and imminent calls to violence).
Maybe you don't agree with that judgement, but it's a perfectly reasonable one that the people in charge of substack probably genuinely believe, so what's the problem? Sure, they have an incentive to believe it, but that's the nature of a for-profit business.
It's the lying. They say they don't moderate content, but they do. They've just made a choice not to moderate white supremacist content. And they think we're too dumb to see through it. You don't have an objection to being lied to?
Language is tricky. I'd understand that claim to mean they do exactly what they say they do.
He has to make his point briefly, and if he goes on a long digression about the difference between blocking things they fear might lead to legal liability or just harms via doxxing/imminent calls to violence, most people are just going to tune out.
It feels like a very uncharitable reading of the comments.
They say, "we arre committed to upholding and protecting freedom of expression, even when it hurts."
They do restrict a range of types of expression - including some that are indisputably protected free expression under the First Amendment, but which don't align with their brand values. Like smut. And - to be clear - their potential legal liability here is not spectacularly high. That's a branding choice, not a legal restrictions one.
This does not strike me as difficult to understand.
That's like the branding of literally every company. Yes, they gave a brief statement that highlights their general defense, not a PhD thesis that explains every judgement.
And if they want to say, "hey, we're chill with any content that we don't think hurts our branding the way smut does," that's fine. But that's not what they're doing. They're pretending that they're standing for expression, writ large, rather than 'expression we think fits the branding that makes us money.'
It's disingenuous. At best.
Except not every company platforms and promotes transparent nazis like substack platforms and actively promotes Richard Hanania.
You are effectively saying that if someone believes that a very broad range of speech should be allowed, we are going to pick on every nit or imperfection in a way we wouldn't if they believed in only allowing a narrower range of speech on their platform.
And yes, you can believe different things about what the optimum range of allowed content should be but let's not pretend applying higher standards to those who choose to allow a broader range of content is anything but a way of saying that you prefer platforms be more restrictive.
It's not uncharitable to think Hamish is lying.
I'd argue it's either naive or gullible to think he's not. Or, per Mencken, perhaps there are other reasons you think he's not.
That said, I have no desire to discuss the issue further, as I doubt it would be profitable.
That's not usually how people work. It's more that they tend to convince themselves that what benefits them is true. I mean, if people can convince themselves Christ rose from the dead or Mohamed flew to heaven (pick whichever you don't believe), then he's probably convinced himself of this (whether or not he believed it before...which he may have).
But that's such a universal human failing that we don't usually see it as deserving criticism absent some overt evidence of hypocrisy. And since I actually do believe this and don't have a financial stake, it seems plausible to me he also believed it beforehand -- not certain, but plausible.
None of that denies that lies can be beneficial to people.
Nor does it deny that, as demonstrated in human social psychology and its self-evolution, the most powerful lies to the external world are the ones we tell best to ourselves first.
And hypocrisy slots right into the juxtaposition of knowing, at least vaguely, if not explicitly, that maybe what we're claiming to ourselves first, then to others, tain't so.
Or, per the Verge, it slots into looking for confirmation bias. https://www.theverge.com/2023/12/21/24011232/substack-nazi-moderation-demonetization-hamish-mckenzie
Per your examples, especially given that either 2000 or 1400 years later, we can be pretty sure those things didn't happen, if we wanted to, we could call a fundamentalist Christian or Muslim a liar too, for that matter.
And now that I see you love you some of Dreher's Primitive Root Wiener, no need to argue further here, either.
Oh, I blogged about the money train behind Dreher's Primitive Root Wiener fixation getting to be too much being cut off in March. (And, per the original response, I've met Dreher in person, and if not a liar, knew he was unethical 20 years ago when breaking Morning News rules by also doing other editorial writing for pay. And, I was one of the first to report him.) https://socraticgadfly.blogspot.com/2023/03/i-come-to-bury-rod-dreher-not-to-praise.html
We assess people all the time who we've never met.
How do you decide a politician is lying or not?
Speaking of, to riff on the old cliche?
"How do you tell a techbro is lying? When they're their lips."
It is difficult to argue with people who don't read what they are arguing about, or who refuse or are incapable of understanding it.
I'm sorry I appear to not see things the way you do but since I'm apparently not alone, maybe you'd care to try and explain it and persuade us.
There really isn't much profit in explaining things to and trying to persuade people who are reasoning in bad faith.
Substack provides financial bonuses and exposure to "controversial voices". A casual inspection of those specially promoted "controversial voices" shows that they are all but exclusively technolibertarian and far right voices.
If substack wishes to provide a comfortable platform and promotion to racists and authoritarians, that is their privilege. But I don't want sanctimonious bullcrap as an excuse. They should just admit that they are more likely to get subscriptions from the wealthy white followers of hard right extremists, and that their techno-libertarian investors with racist authoritarian leanings reward them for doing so.
Because there are known facts that mean the comments made by McKenzie cannot possibly be true.
What facts are those?
That they moderate content. Aggressively and in a centralized manner. They just think we don't know.
White supremacists is my preferred term, because Nazi is too narrow. Neither one of those are white supremacist organizations.
Ken indicates he has a major objection: they avoid coming to grips with the fact that they are monetizing the Nazis, not just allowing them to exist. The rationale is vacuous.
One wonders how much revenue is from actual racists versus those (like myself) who are disturbed by the government doing an end run around the first amendment these last several years by suggesting specific people and content that the government believes violate individual social media companies' terms of service (unconstitutional censorship by proxy). These programs are well documented to have stifled actual scientists from communicating science, e.g. aerosol physicists who proved that covid was airborne, so they already ran well out over their "legitimate" "anti-Nazi" skis and into Orwellian goodthink vs doubleplusungoodthink.
If the amount earned from actual bigots is small, then one imagines that treating it as de minimis is in order. Relatedly, one also wonders if Ken might have a different opinion of Substack if they were to e.g. donate any monetization of bigots to charity.
That Venn diagram is a circle. The government hasn't done what you said, and the people who believe otherwise invariably believe all sorts of other evidence-free stuff that justifies their priors.
Ken's opinion of Substack here is based on them doing things they say they don't do. Presumably it would remain that way no matter what sort of virtue signalling they slap on top. You support them because the virtue signalling they currently do appeals to you...
Cool story.
https://twitter.com/kprather88/status/1399089677148168192
https://www.wired.com/story/the-teeny-tiny-scientific-screwup-that-helped-covid-kill/
"At one point, Lidia Morawska, a revered atmospheric physicist who had arranged the meeting, tried to explain how far infectious particles of different sizes could potentially travel. One of the WHO experts abruptly cut her off, telling her she was wrong, Marr recalls. His rudeness shocked her. “You just don’t argue with Lidia about physics,” she says."
https://twitter.com/kprather88/status/1625926669549854720?lang=en
and that's just in a few minutes of searching on Google, a platform known to have run cover for the censorship by deindexing reporting on it.
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10337476/
"1.2. Nature of COVID-19 82 (20.6) “COVID-19 is becoming scary. It is now airborne. Is there still anywhere safe? "
This was published in 2023, well after even the CDC, after much kicking and screaming, admitted covid is airborne. Yet the "disinformation research community" is either so dumb or so goal-seeking that it continues to provide "objective" "scientific" papers supporting the censorship goals of the government.
No, that doesn't make the rationale vacuous; it just means they might have an incentive to believe the claim, not that it's false.
Would you prefer they let Nazi content creators take 100% of their subscription fees? Seems like a no-win situation here. If you believe that it's better not to deplatform Nazis, you are either bad because you give them a discount or bad because you make a profit.
Or else Substack donates its cut from bigots to charity.
I agree that's probably a good PR move, but I'm not really understanding why it's morally required. Sure, I'm more likely to see him as really principled if he does, but usually we require more than just "well, he has an incentive to believe this" before accusations of moral wrongdoing.
It affects the moral calculus by neither giving discounts to nor profiting from bigots, per the two choices you gave at the end of your prior comment.
I agree it would be morally preferable. I'm just not convinced failing to do so warrants moral blame. I fear that is implicitly demanding more from those who believe in the benefits of free expression than those who don't.
You'll have noticed that deciding whether to continue freely associating with Substack (taking his ball and going home) is precisely the topic of Popehat's post. Some businessmen have ethics and/or morals they bring to the business world from their private lives. Others learn to emulate having ethics by observing peers. Yet others are amoral sociopaths who learn only when they lose enough customers to notice.
Substack, like any business, is open to scolds (and people of good conscience) objecting to their business practices publicly. See also the Exxon Valdez incident. There is a reason there is a "goodwill" entry in most businesses' accounting ledgers.
? It’s not really that difficult:
“But that doesn’t mean I have to accept Substack’s attempt to convince me that its branding is about the good of humanity. It’s about money. Hamish McKenzie’s apologia for Substack’s approach is full of dubious (if common) arguments.”
1) SubStack management conflates 1st amendment with personal (or organizational) values. 2) SubStack acts as if they are “free speech” crusaders who are--like any other platform--making moderation decisions based on what they believe will make them the most successful. 3) SubStacks offers weak arguments to paint themselves as the crusaders.
Yes, maybe they aren't optimally clear in their language, and I know you dislike unclear distinctions between claims of rights and cultural benefits. OK, fine, I agree it would be slightly better if they published a philosophical thesis on the matter and defined all terms, but only a few of us would bother to read it, and if their crime is not being optimally clear...well, that doesn't seem like the biggest deal.
Their crime is that they are pretending that principle forces them to platform Nazis, when the truth is that they have made an editorial decision to platform Nazis—of the same sort as their editorial decision to deplatform pornography. They need to defend that editorial decision, or they can’t reasonably object to being called a “nazi platform”.
Cutting through the bullshit, Substack wants a cut of the small but lucrative market for antiwoke bullshit, and their actions here are brand management, not moderation policy. They don’t want to admit that because they would lose the rest of us. They have to pick, but through bullshit protestations about free speech, belied by their editorial choices to limit speech, they are trying to weasel out of choosing.
"[T]rying to deplatform Nazis from substack or other bad ideas isn't particularly effective and has more harms than benefits"
Evidence? Don't bother - there is none. Germany didn't fall to Hitler because of censorship.
Germany fell to Hitler in spite of far more aggressive censorship than we countenance in the US.
https://www.thefire.org/news/blogs/eternally-radical-idea/would-censorship-have-stopped-rise-nazis-part-16-answers
2 things:
1 - That's not evidence that censorship was less effective at slowing the rise of Nazism than unfettered free speech.
2 - We're not talking about government censorship.
While I find you disingenuous, I decided to be kind and TL;DR it for you:
“McKenzie’s apologia deeply annoys me because it treats me like I’m a moron.”
Do you think I'm lying about my views or reaction? For what possible motive? Based on the likes on some of my comments, it seems that I'm not alone in my views. Maybe I'm mistaken in my beliefs, but it seems implausible that a bunch of subscribers to this substack are lying about their views to fanboi about some CEO that no one knows about. People believe in fucking UFOs, alien abductions and bigfoot, and yet where you think someone must be disingenuous is in a detailed argument about free expression and platforming?
Either way, I'm sorry that you have trouble believing that people could interpret things very differently and have very different reactions. That seems like it must make a great deal of conversation difficult.
If it helps, I think you are assuming a context for his claims that requires a particularly principled excuse to allow Nazi content but doesn't require one to moderate it away. I don't see any reason we should expect a company to be any more principled in its justification for allowing the content than for moderating it away.
This feels like concern trolling, because it's hard to believe anyone can believe your words to be true. It's trivial--it's the de facto view in society that moderating Nazi content is principled. It's only news when some take the opposite view--that it's principled to allow it. So that's why people call it out, and yeah, they're looking for a reasoned, principled view. But it turns out there's evidence they don't truly hold any principles to a coherent degree, or at least are unable to articulate them.
I don't have hard evidence, but my eyes tell me that allowing all this in the open negatively influences some on the margins and empowers those deeply into it. Consciously platforming it, versus just allowing it, is a further sign of being unprincipled.
Yes, I agree that if you assume it's much more plausible that blocking Nazi content makes society better than not blocking it, then it's reasonable to criticize people for not blocking it. I don't know why we care whether they have a principled view or not, because we are criticizing them *because* we don't find that view plausible.
OTOH, if you think, as I do, that much of the attraction of being a Nazi is that it's forbidden and sexy (I've seen that attract people) and that it offers a separate little isolated society, then it seems equally plausible that it's the platforms that moderate the Nazi content that are imposing the harm (increasing the number of Nazis).
And there is a reasonable argument to be had over that issue, but that's the argument to be had. We shouldn't settle it by stealth by saying that the people who think deplatforming is more harmful have to meet a higher standard. Either we have good reason to think they are wrong, in which case who cares how principled they are, or we don't, and we should treat them the same.
Part of me likes the idea that the Nazi accounts and their supporters are "out in the open." That means they're only a subpoena away from a prosecutor or litigant who wants to know more about them. It's even better if Substack does some sort of verification beyond a credit card, like when Facebook wanted a scan of my driver's license to give me access to my account. I'm still getting email from them to reactivate my dead account five years later.
This is a great, balanced article on the issue. I get annoyed at people constantly conflating deplatforming with *amplification*. As you regularly point out, the First Amendment means the government can't throw you in jail because it doesn't like your speech; it very much does NOT mean that every whacko out there is entitled to go viral on a private platform. I am glad you don't routinely encounter objectionable content on here. However, in my experience, I am repeatedly recommended so-called "Science" and "Health" content full of insane, dangerous anti-vaccine and anti-LGBT nonsense, riddled with ignorant mistakes at best and egregious lies at worst. I'm not suggesting anybody be kicked off for their crazy views. I *am* suggesting that framing is a straw man and a false choice that rejects any basic guardrails against toxic garbage proliferating into our view.
This is the challenge with moderation. We’ve moved from “actual Nazis” to Richard Hanania to “anti-vaccine nonsense” and “ignorant mistakes,” all of which someone wants banned from this platform for their own sincere reasons. (Not to mention the Jesse Singal Wars.)
In my experience, 100% of keyboard “free speech warriors” take any view besides TOTAL HANDS OFF FREE FOR ALL LET THEM DO WHAT THEY WANT as “Oh, so you think issue X is the same thing as Nazism?? You want them banned off the platforms?!?!” And it’s like... no, my dude, reading comprehension. In every post I include language like “I’m not suggesting anyone be kicked off for their crazy views.” That is absolutely not the same thing as “hey, maybe monitor your algorithms and provide fact-checking guardrails to make sure posts about injecting bleach and eating horse dewormer paste don’t go viral and whip around the internet faster than the speed of a human neuron.”
In other words, keep them on, but don't promote them? When you mentioned the toxic nonsense in your feed, your complaint wasn't that it was allowed but that it was placed in your feed?
Pretty sure I understand, but double-checking is good.
More or less, yes. When it is promoted on their official podcasts, as featured authors on "Office Hours," and amplified through Recommendations and in the feed, to me that tips from being a neutral utility into putting their thumb on the scale and exercising editorial judgment. If people want to actively search out whatever subculture they want and choose to subscribe there, OK, fine. But Substack has arguably used some of these controversial lightning-rod people cynically to promote its own brand, like "Look at us, we're the cool place with all these edgy CANCELED writers" (fact check: none of these people have actually been canceled; they're all doing fine financially, some of them better than ever).
It appears you have quite weak reading and reasoning abilities. I'll help you out with a few pointers:
(1) This is NOT a public forum; it is quite literally a privately held, for-profit publishing platform that operates on an opt-in, "walled garden" subscription model.
(2) Calling for censorship does not mean someone is less than human. If I don't want the original version of "Saw" playing on Nickelodeon during prime time hours, that does not in any way imply I think the director, writer, actors, or studio are less than human; it means I don't think kids should see explicit gore. Even in a very pro-free speech country like the US we censor things all the time; for example, you're not allowed to threaten to kill the president even if you don't like him, because we've decided there is a public interest in not threatening political violence.
(3) Point #2 above isn't even relevant, because at no point did I call for people to be censored! It is not censorship to say there should be appended fact checks, or that maybe the "Recommended for You" tab shouldn't blast crazy nonsense like "covid vaccines will change your species" at anyone browsing the Science category, or that maybe when picking from the tens of thousands of writers on your platform to spotlight on your podcast you DON'T pick a well-known racist. Just because people aren't forced to listen to your crazy bullshit doesn't mean it's censorship.
(4) It is funny that the second someone disagrees with these so-called free speech absolutists, they fly into a rage and threaten to punish their perceived enemies; you're really telling on yourself there. Also, it is helpful to know who to avoid on these platforms, so thanks for the tip!
Ken, your voice is a very deeply needed one, and I hope this post moves the needle a little!
I’d like to see you (and in particular Serious Trouble, which I pay for) move to Ghost or another platform.
I've been in the process of considering alternatives for a bit, in part because of a recent discovery that Substack's SEO has issues, but also because of issues like this one. I've been having a heck of a time with the alternatives, though. Going back to Medium doesn't work because of the paywall-gate on content, Wordpress + addons is bloated, Buttondown is run by one vigilant (and likely overworked) guy, and Ghost, for some reason, isn't importing my historical posts and has elements that require you to be an IT engineer to overcome. This isn't me justifying staying or defending Substack, but it offers more insight into why folks stick around, at least in the short term.
This is why I really appreciated Ken's paragraph about being undecided about moving. I think quite a few folks, especially those of us with much smaller followings, use Substack for its ease of use and relatively mature, low-to-zero-cost feature set, and choose to deal with its side-eye-worthy content policies in part because of this. But every time a statement like this is made, even though Substack has, as Ken said, likely done the calculus, it pushes us further toward moving off the platform, regardless of how that might affect things.
UPDATE: The Ghost community helped sort out my post import issue, so I'm moving to Ghost. While this increases my costs from, well, zero to not-zero, and there will likely be technical hurdles, for a variety of reasons besides the content moderation problem, it makes sense for me. And I bet I'm not the only one running the calculus right now.
I have no illusions that Substack will suffer a significant hit from this latest response - after all, like Ken says, they've figured it's acceptable losses from a business standpoint. But on a side note, one of the most damning side statements from the response is the part about how "even if they were a minority of one" they'd have said the same thing. Not a great feeling for a company that you'd think needs to build rapport with its community.
+1
"Has Gab fallen? Is Truth Social no more?"
This has a nice seasonal "Are there no workhouses?" feel to it.
That caught my ear as well, but I couldn't quite identify whether or not it was cribbed from elsewhere. I found one passage from Amos that almost fits, but I had to throw in the towel and admit to being insufficiently learned.