Judges rule Big Tech’s free ride on Section 230 is over

The current comments seem to say this rings the death knell of social media and that it just leads to government censorship. I’m not so sure.

I think the ultimate problem is that social media is not unbiased — it curates what people are shown. In that role they are no longer an impartial party merely hosting content. It seems this ruling is saying that the curation being algorithmic does not absolve the companies from liability.

In a very general sense, this ruling could be seen as a form of net neutrality. Currently, social media platforms favor certain content while downweighting other content. Sure, it might be at a different level than peering agreements between ISPs and websites, but it amounts to a similar phenomenon when most people interact with social media through the feed.

Honestly, I think I’d love to see what changes this ruling brings about. HN is quite literally the only social media site (loosely interpreted) I even have an account on anymore, mainly because of how truly awful all the sites have become. Maybe this will make social media more palatable again? Maybe not, but I’m inclined to see what shakes out.

I’m probably misunderstanding the implications but, IIUC, as it is, HN is moderated by dang (and others?) but still falls under 230, meaning HN is not responsible for what other users post here.

With this ruling, HN is suddenly responsible for all posts here, specifically because of the moderation. So they have two options.

(1) Stop the moderation so they can be safe under 230. Result: HN turns into 4chan.

(2) Enforce the moderation to a much higher degree by, say, requiring non-anonymous accounts and a TOS that makes each poster responsible for their own content, and/or manually approving every comment.

I’m not even sure how you’d run a website with user content if you wanted to moderate that content and still avoid being liable for illegal content.

> With this ruling, HN is suddenly responsible for all posts here, specifically because of the moderation.

I think this is a mistaken understanding of the ruling. In this case, TikTok decided, with no other context, to make a personalized recommendation to a user who visited their recommendation page. On HN, your front page is not different from my front page. (Indeed, there is no personalized recommendation page on HN, as far as I’m aware.)

> The Court held that a platform’s algorithm that reflects “editorial judgments” about “compiling the third-party speech it wants in the way it wants” is the platform’s own “expressive product” and is therefore protected by the First Amendment.

I don’t see how this is about personalization. HN has an algorithm that shows what it wants in the way it wants.

So, yes, the TikTok FYP is different from a forum with moderation.

But the basis of this ruling is basically “well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that’s your speech and not somebody else’s and so 230 doesn’t apply and you can be liable for it.” That rationale extends to basically any form of moderation or selection, personalized or not, and would blow a big hole in 230’s protections.

Given generalized anti-Big-Tech sentiment on both ends of the political spectrum, I could see something that claimed to carve out just algorithmic personalization/suggestion from protection meeting with success, either out of the courts or Congress, but it really doesn’t match the current law.

“well the Moody case says that curation/moderation/suggestion/whatever is First Amendment protected speech, therefore that’s your speech and not somebody else’s and so 230 doesn’t apply and you can be liable for it.”

I see a lot of people saying this is a bad decision because it will have consequences they don’t like, but the logic of the decision seems pretty damn airtight as you describe it. If the recommendation systems and moderation policies are the company’s speech, then the company can be liable when the company “says”, by way of their algorithmic “speech”, to children that they should engage in some reckless activity likely to cause their death.

It’s worth noting that personalisation isn’t moderation. An app like TikTok needs both.

Personalisation simply matches users with the content the algorithm thinks they want to see. Moderation (which is typically also an algorithm) tries to remove harmful content from the platform altogether.

The ruling isn’t saying that Section 230 doesn’t apply because TikTok moderated. It’s saying Section 230 doesn’t apply because TikTok personalised, allegedly knew about the harmful content and allegedly didn’t take enough action to moderate this harmful content.
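The distinction is easy to make concrete. Here is a minimal, purely illustrative sketch in Python; the video catalogue, tag names, and scoring rule are invented for the example and are not TikTok’s actual pipeline.

    from dataclasses import dataclass

    @dataclass
    class Video:
        id: str
        tags: frozenset

    # Hypothetical catalogue and policy, purely for illustration.
    ALL_VIDEOS = [
        Video("a", frozenset({"skateboarding"})),
        Video("b", frozenset({"blackout-challenge"})),
        Video("c", frozenset({"cooking", "skateboarding"})),
    ]
    BANNED_TAGS = frozenset({"blackout-challenge"})

    def moderate(videos):
        # Moderation: try to remove harmful content from the platform altogether.
        return [v for v in videos if not (v.tags & BANNED_TAGS)]

    def personalize(videos, user_interests):
        # Personalisation: rank whatever survives moderation by predicted interest.
        return sorted(videos, key=lambda v: len(v.tags & user_interests), reverse=True)

    # The allegation, phrased in these terms: moderation failed to catch known-harmful
    # items, and personalisation then actively pushed them to particular users.
    feed = personalize(moderate(ALL_VIDEOS), user_interests={"skateboarding"})
    print([v.id for v in feed])  # ['a', 'c']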

>Personalisation simply matches users with the content the algorithm thinks they want to see.

These algorithms aren’t matching you with what you want to see, they’re trying to maximize your engagement; or rather, it’s what the operator wants you to see, so you’ll use the site more and generate more data or revenue. It’s a fine but extremely important distinction.

What the operator wants you to see also gets into the area of manipulation, hence 230 shouldn’t apply: by building algorithms based on manipulation or paid-for boosting, companies move from impartial, unknowing deliverers of harmful content into committed distributors of it.

Specifically, NetChoice argued that personalized feeds based on user data were protected as the platforms’ own First Amendment speech. This went to the Supreme Court, and the Supreme Court agreed. Now precedent is set by the highest court that those feeds are “expressive product”. It doesn’t make sense, but that’s how the law works: by trying to define as best as possible the things in gray areas.

And they probably didn’t think through how this particular argument could affect other areas of their business.

It absolutely makes sense. What the NetChoice decision held was that the curation aspect of algorithmic feeds makes the weighting approach equivalent to the speech of the platforms, and therefore, when courts evaluate challenges to government-imposed regulation, they have to perform standard First Amendment analysis to determine if the contested regulation passes muster.

Importantly, this does not mean that before the Third Circuit decision platforms could just curate any which way they want and government couldn’t regulate at all — the mandatory removal regime around CSAM content is a great example of government regulating speech and forcing platforms to comply.

The Third Circuit decision, in a nutshell, is telling the platforms that they can’t have their cake and eat it too. If they want to claim that their algorithmic feeds are speech that is protected from most government regulation, they can’t simultaneously claim that these same algorithmic feeds are mere passive vessels for the speech of third parties. If that were the case, then their algorithms would enjoy no 1A protection from government regulation. (The content itself would still have 1A protection based on the rights of the creators, but the curation/ranking/privileging aspect would not).

I misunderstood the Supreme Court ruling as hinging on per-user personalization of algorithms and thought it made a distinction between editorial decisions shown to everyone vs. individual users. I thought that part didn’t make sense. I see now it’s really the Third Circuit ruling that interpreted the user-customization part as editorial decisions, not excluding the non-per-user algorithms.

Yeah, I agree.

This ruling is a natural consequence of the NetChoice ruling. Social media companies can’t have it both ways.

> If that were the case, then their algorithms would enjoy no 1A protection from government regulation.

Well, the companies can still probably claim some 1st Amendment protections for their recommendation algorithms (for example, a law banning algorithmic political bias would be unconstitutional). All this ruling does is strip away the safe harbour protections, which weren’t derived from the 1A in the first place.

That’s the difference between the case and a monolithic electronic bulletin board like HN. HN follows an old-school BB model very close to the models that existed when Section 230 was written.

Winding up in the same place as the defendant would require making a unique, dynamic, individualized BB for each user tailored to them based on pervasive online surveillance and the platform’s own editorial “secret sauce.”

The HN team explicitly and manually manages the front page of HN, so I think it’s completely unarguable that they would be held liable under this ruling if the front page, at least, contained links to articles that caused harm. They manually promote certain posts that they find particularly good, even if they didn’t get a lot of votes, so this is even more direct than what TikTok did in this case.

It is absolutely still arguable in court, since this ruling interpreted the Supreme Court ruling to pertain to “a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata.”

In other words, the Supreme Court decision mentions editorial decisions, but no court case has yet established whether that means editorial decisions in the HN front page sense (as in, mods make some choices but it’s not personalized). Common sense may say mods making decisions is editorial decisions, but it’s a gray area until a court case makes it clear. Precedent is the most important thing when interpreting law, and the only precedent we have is that it pertains to personalized feeds.

The decision specifically mentions algorithmic recommendation as being speech, ergo the recommendation itself is the responsibility of the platform.

Where is the algorithmic recommendation that differs per user on HN?

> Because TikTok’s “algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata,” it becomes TikTok’s own speech.

Key words are “editorial” and “secret sauce”. Platforms should not be liable for dangerous content which slips through the cracks, but certainly should be when their user-personalized algorithms mess up. Can’t have your cake and eat it too.

Dangerous content slipping through the cracks and the algorithms messing up is the same thing. There is no way for content to “slip through the cracks” other than via the algorithm.

You can view the content via direct links or search; recommendation algorithms aren’t the only way to view it.

If you host child porn that gets shared via direct links, then that is bad even if nobody can see it, but it is much, much worse if you start recommending it to people as well.

Everything is related. Search results are usually generated based on recommendations, and direct links usually influence recommendations, or include recommendations as related content.

It’s rarely if ever going to be the case that there is some distinct unit of code called “the algorithm” that can be separated and considered legally distinct from the rest of the codebase.

HN is _not_ a monolithic bulletin board — the messages on a BBS were never (AFAIK) sorted by ‘popularity’ and users didn’t generally have the power to demote or flag posts.

Although HN’s algorithm depends (mostly) on user input for how it presents the posts, it still favours some over others and still runs afoul here. You would need a literal ‘most recent’ chronological view and HN doesn’t have that for comments. It probably should anyway!

@dang We need the option to view comments chronologically, please

> HN is _not_ a monolithic bulletin board — the messages on a BBS were never (AFAIK) sorted by ‘popularity’ and users didn’t generally have the power to demote or flag posts.

I don’t think the feature was that unknown. Per Wikipedia, the CDA passed in 1996 and Slashdot was created in 1997, and I doubt the latter’s moderation/voting system was that unique.

Moderating content is explicitly protected by the text of Section 230(c)(2)(a):

“(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or”

Algorithmic ranking, curation, and promotion are not.

The text of the Third Circuit decision explicitly distinguishes algorithms that respond to user input (such as surfacing content that was previously searched for, favorited, or followed) from the contested FYP. Allowing users to filter content by time, upvotes, number of replies, etc. would be fine.

The FYP algorithm that’s contested in the case surfaced the video to the minor without her searching for that topic, following any specific content creator, or positively interacting (liking/favoriting/upvoting) with previous instances of said content. It was fed to her based on a combination of what TikTok knew about her demographic information, what was trending on the platform, and TikTok’s editorial secret sauce. TikTok’s algorithm made an active decision to surface this content to her despite knowing that other children had died from similar challenge videos; they promoted it and should be liable for that promotion.
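To make that distinction concrete, here is a rough Python sketch of the difference between a feed that responds only to explicit user input and an FYP-style feed; the data, field names, and weights are invented for illustration and are not the court’s test or TikTok’s actual system.

    # Purely illustrative: the data, fields, and weights are made up.
    VIDEOS = [
        {"id": "v1", "creator": "chef_ana", "tags": {"cooking"}},
        {"id": "v2", "creator": "dare_zone", "tags": {"challenge"}},
    ]

    def user_driven_feed(videos, followed, liked_tags):
        # Surfaces only what the user explicitly asked for: follows, likes, searches.
        return [v for v in videos if v["creator"] in followed or v["tags"] & liked_tags]

    def fyp_style_feed(videos, user_age, trending_tags):
        # Also weighs inferred demographics and platform-wide trends the user never asked about.
        def score(v):
            s = 0.0
            if v["tags"] & trending_tags:
                s += 2.0  # what is trending on the platform
            if user_age <= 12 and "challenge" in v["tags"]:
                s += 1.0  # a demographic-based guess about the viewer
            return s
        return sorted(videos, key=score, reverse=True)

    print(user_driven_feed(VIDEOS, followed={"chef_ana"}, liked_tags=set()))  # only v1
    print(fyp_style_feed(VIDEOS, user_age=10, trending_tags={"challenge"}))   # v2 ranked first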

Doesn’t seem to have anything to do with personalization to me, either. It’s about “editorial judgement,” and an algorithm isn’t necessarily a get out of jail free card unless the algorithm is completely transparent and user-adjustable.

I even think it would count if the only moderation you did on your Lionel model train site was to make sure that most of the conversation was about Lionel model trains, and that they be treated in a positive (or at least neutral) manner. That degree of moderation, for that purpose, would make you liable if you left illegal or tortious content up; i.e., if you moderate, you’re a moderator, and your first duty is to the law.

If you’re just a dumb pipe, however, you’re a dumb pipe and get section 230.

I wonder how this works with recommendation algorithms, though, seeing as they’re also trade secrets, even when they’re not dark and predatory (advertising-related). If you have a recommendation algo that makes better song recommendations, for example, you don’t want to have to share it. Would it be something you’d have to privately reveal to a government agency (like having to reveal the composition of your fracking fluid to the EPA, as an example), and they would judge whether or not it was “editorial”?

(edit: that being said, it would probably be very hard to break the law with a song recommendation algorithm. But I’m sure you could run afoul of some financial law still on the books about payola, etc.)

> That degree of moderation, for that purpose, would make you liable if you left illegal or tortious content up; i.e., if you moderate, you’re a moderator, and your first duty is to the law.

I’m not sure that’s quite it. As I read the article and think about its application to Tiktok, the problem was more that “the algorithm” was engaged in active and allegedly expressive promotion of the unsafe material. If a site like HN just doesn’t remove bad content, then the residual promotion is not exactly Hacker News’s expression, but rather its users’.

The situation might change if a liability-causing article were itself given ‘second chance’ promotion or another editorial thumb on the scale, but I certainly hope that such editorial management is done with enough care to practically avoid that case.

But something like Reddit would be held liable for showing posts, then, because you get shown different results depending on the subreddits you subscribe to, your browsing patterns, what you’ve upvoted in the past, and more. Pretty much any recommendation engine is a no-go if this ruling becomes precedent.

TBH, Reddit really shouldn’t have 230 protection anyways.

You can’t be licensing user content to AI as it’s not yours. You also can’t be undeleting posts people make (otherwise they’re really Reddit’s posts and not theirs).

When you start treating user data as your own, it should become your own, and that erodes 230.

> You can’t be licensing user content to AI as it’s not yours.

It is theirs. Users agreed to grant Reddit a license to use the content when they accepted the terms of service.

From my reading, if the site only shows you content based on your selections, then it wouldn’t be liable. For example, if someone else with the exact same selections gets the same results, then that’s not their platform deciding what to show.

If it does any customization based on what it knows about you, or what it tries to sell you because you are you, then it would be liable.

Yep, recommendation engines would have to be very carefully tuned, or you risk becoming liable. Recommending only curated content would be a way to protect yourself, but that costs money that companies don’t have to pay today. It would be doable.

> For example, if someone else with the exact same selections gets the same results, then that’s not their platform deciding what to show.

This could very well be true for TikTok. Of course, “selection” would include liked videos, how long you spend watching each video, and how many videos you have posted.

And on the flip side a button that brings you to a random video would supply different content to users regardless of “selections”.

It could be difficult to draw the line. I assume TikTok’s suggestions are deterministic enough that an identical user would see the same things; it’s just incredibly unlikely for two users to be identical at the level of granularity that TikTok is able to measure, given the type of content and types of interactions the platform has.

>Pretty much any recommendation engine is a no-go if this ruling becomes precedent.

That kind of sounds… great?
The only instance where I genuinely like to have a recommendation engine around is music streaming. Like yeah, sometimes it does recommend great stuff.
But anywhere else? No thank you.

> On HN, your front page is not different from my front page.

It’s still curated, and not entirely automatically. Does it make a difference whether it’s curated individually or not?

Per the court of appeals, TikTok is not in trouble for showing a blackout challenge video. TikTok is in trouble for not censoring them after knowing they were causing harm.

> “What does all this mean for Anderson’s claims? Well, § 230(c)(1)’s preemption of traditional publisher liability precludes Anderson from holding TikTok liable for the Blackout Challenge videos’ mere presence on TikTok’s platform. A conclusion Anderson’s counsel all but concedes. But § 230(c)(1) does not preempt distributor liability, so Anderson’s claims seeking to hold TikTok liable for continuing to host the Blackout Challenge videos knowing they were causing the death of children can proceed.”

As in, Dang would be liable if, say, somebody started a blackout challenge post on HN and he didn’t start censoring all of them once news reports of programmers dying broke out.

https://fingfx.thomsonreuters.com/gfx/legaldocs/mopaqabzypa/…

Does TikTok have to know that “as a category, blackout videos are bad” or that “this specific video is bad”?

Does TikTok have to preempt this category of videos in the future, or simply respond promptly when notified such a video is posted to their system?

Are you asking about the law, or are you asking our opinion?

Do you think it’s reasonable for social media to send videos to people without considering how harmful they are?

Do you even think it’s reasonable for a search engine to respond to a specific request for this information?

Personally, I wouldn’t want search engines censoring results for things explicitly searched for, but I’d still expect that social media should be responsible for harmful content they push onto users who never asked for it in the first place. Push vs Pull is an important distinction that should be considered.

I think it’s a very different conversation when you’re talking about social media sites pushing content they know is harmful onto people who they know are literal children.

Trying to define “all” is an impossibility; but, by virtue of TikTok having taken no action whatsoever, that question is irrelevant in the context of this particular judgment. See also, for example: https://news.ycombinator.com/item?id=41393921

In general, judges will be ultimately responsible for evaluating whether “any”, “sufficient”, “appropriate”, etc. actions were taken in each future case they judge. As with all things legalese, it’s impossible to define with certainty a specific degree of action that is the uniform boundary of acceptable; but, as evident here, “none” is no longer in that set.

(I am not your lawyer, this is not legal advice.)

Any good-faith attempt at censoring would have been a reasonable defense even if they technically didn’t censor 100% of them, such as blocking videos with the word “blackout” in their title or manually approving videos containing it, but they did nothing instead.

> TikTok is in trouble for not censoring them after knowing they were causing harm.

This has interesting higher-order effects on free speech. Let’s apply the same ruling to vaccine misinformation, or the ability to organize protests on social media (which opponents will probably call riots if there are any injuries)

Uh yeah, the court of appeals has reached an interesting decision.

But I mean what do you expect from a group of judges that themselves have written they’re moving away from precedent?

I don’t doubt the same court relishes the thought of deciding what “harm” is on a case-by-case basis. The continued politicization of the courts will not end well for a society that nominally believes in the rule of law. Some quarters have been agitating for removing §230 safe harbor protections (or repealing it entirely), and the courts have delivered.

The personalized aspect wasn’t emphasized at all in the ruling. It was the curation. I don’t think TikTok would have avoided liability by simply sharing the video with everyone.

“I think this is a mistaken understanding of the ruling.”

I think that is quite generous. I think it is a deliberate reinterpretation of what the order says. The order states that 230(c)(1) provides immunity for removing harmful content after being made aware of it, i.e., moderation.

Section 230 hasn’t changed or been revoked or anything, so, from what I understand, manual moderation is perfectly fine, as long as that is what it is: moderation. What the ruling says is that “recommended” content and personalised “for you” pages are themselves speech by the platform, rather than moderation, and are therefore not under the purview of Section 230.

For HN, Dang’s efforts at keeping civility don’t interfere with Section 230. The part relevant to this ruling is whatever system takes recency and upvotes, and ranks the front page posts and comments within each post.
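For what it’s worth, that ranking layer is usually approximated publicly as a simple votes-versus-age decay. A sketch of the commonly cited formula is below; the production system reportedly adds penalties and manual moderator adjustments on top, so treat this as an approximation and not HN’s actual code.

    def rank_score(points: int, age_hours: float, gravity: float = 1.8) -> float:
        # Commonly cited approximation of HN-style front-page ranking:
        # newer posts decay less, so recency can beat raw vote count.
        return (points - 1) / (age_hours + 2) ** gravity

    print(rank_score(points=20, age_hours=1))    # ~2.63: fresh post, modest votes
    print(rank_score(points=100, age_hours=12))  # ~0.86: older post, many votes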

Under Judge Matey’s interpretation of Section 230, I don’t even think option 1 would remain on the table. He includes every act except mere “hosting” as part of publisher liability.

No, that’s not the end result.

It would be perfectly legal for a platform to choose to allow a user to decide on their own to filter out spam.

Maybe a user could sign up for such an algorithm, but if they choose to whitelist certain accounts, that would also be allowed.

Problem solved.

Yeah, no moderation leads to spam, scams, rampant hate, and CSAM. I spent all of an hour on Voat when it was in its heyday, and it was mostly literal Nazis calling for the extermination of undesirables. The normies just stayed on moderated Reddit.

It was the people who were chased out of other websites that drove much of their traffic so it’s no surprise that their content got the front page. It’s a shame that they scared so many other people away and downvoted other perspectives because it made diversity difficult.

Not sure about the downvotes on this comment, but what the parent says has precedent in Cubby, Inc. v. CompuServe Inc. (1), and this is one of the reasons Section 230 came about in the first place.

HN is also heavily moderated with moderators actively trying to promote thoughtful comments over other, less thoughtful or incendiary contributions by downranking them (which is entirely separate from flagging or voting; and unlike what people like to believe, this place relies more on moderator actions as opposed to voting patterns to maintain its vibe.) I couldn’t possibly see this working with the removal of Section 230.

(1) https://en.wikipedia.org/wiki/Cubby,_Inc._v._CompuServe_Inc.

Theoretically, your liability is the same because the First Amendment is what absolves you of liability for someone else’s speech. Section 230 provides an avenue for early dismissal in such a case if you get sued; without Section 230, you’ll risk having to fight the lawsuit on the merits, which will require spending more time (more fees).

I’d probably like the upvote itself to be considered “speech”. The practical effect of upvoting is to endorse, together with the site’s moderators and algorithm-curators, the comment to be shown to a wider audience.

Along those lines, then, an upvote, i.e. an endorsement, would be protected, up to any point where it violated one of the free speech exceptions, e.g. incitement.

2) Require confirmation you are a real person (check ID) and attach accounts per person. The commercial Internet has to follow the laws they’re currently ignoring and the non-commercial Internet can do what they choose (because of being untraceable).

4chan is moderated, and the moderation is different on each board, with the only real global moderation rule being “no illegal stuff”. In addition to that, the site does curate the content it shows you using an algorithm, even though it is a very basic one (the thread with the most recent reply goes to the top of the page, and threads older than X are removed automatically).
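That bump-order scheme is simple enough to sketch; the field names and the age cutoff below are illustrative and not 4chan’s actual code.

    import time

    MAX_AGE = 3 * 24 * 3600  # hypothetical "older than X" cutoff, in seconds

    def board_index(threads, now=None):
        # Bump order: the thread with the most recent reply goes on top;
        # threads past the age cutoff drop off the board entirely.
        now = time.time() if now is None else now
        alive = [t for t in threads if now - t["created_at"] < MAX_AGE]
        return sorted(alive, key=lambda t: t["last_reply_at"], reverse=True)

    threads = [
        {"id": 1, "created_at": 0,    "last_reply_at": 500},
        {"id": 2, "created_at": 100,  "last_reply_at": 900},  # most recent reply: shown first
        {"id": 3, "created_at": -1e7, "last_reply_at": 800},  # too old: pruned
    ]
    print([t["id"] for t in board_index(threads, now=1000)])  # [2, 1]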

For example the qanon conspiracy nuts got moderated out of /pol/ for arguing in bad faith/just being too crazy to actually have any kind of conversation with and they fled to another board (8chan and later 8kun) that has even less moderation.

> 4chan is moderated

Yep, 4chan isn’t bad because “people I disagree with can talk there”, it’s bad because the interface is awful and they can’t attract enough advertisers to meet their hosting demands.

Nuff said. Underneath the everlasting political cesspool from /pol/ and its… _specific_ atmosphere, it’s still one of the best places to visit for tech-based discussion.

Nah. HN is not the same as these others.

TikTok. Facebook. Twitter. YouTube.

All of these have their algorithms specifically curated to try to keep you angry. YouTube outright ignores your blocks every couple of months, and no matter how many people dropping n-bombs you report and block, it unendingly pushes more and more.

These companies know that their algorithms are harmful and they push them anyway. They absolutely should have liability for what their algorithm pushes.

There’s moderation to manage disruption to a service. There’s editorial control to manage the actual content on a service.

HN engages in the former but not the latter. The big three engage in the latter.

I don’t understand your explanation. Do you mean just voting itself? That’s not controlled or managed by HN. That’s just more “user generated content.” That posts get hidden or flagged due to thresholding is non-discriminatory and not _individually_ controlled by the staff here.

Or.. are you suggesting there’s more to how this works? Is dang watching votes and then making decisions based on those votes?

“Editorial control” is more of a term of art and has a narrower definition than you’re allowing for.

The HN moderation team makes a lot of editorial choices, which is what gives HN its specific character. For example, highly politically charged posts are manually moderated and kept off the main page regardless of votes, with limited exceptions entirely up to the judgement of the editors. For example, content about the wars in Ukraine and Israel is not allowed on the mainpage except on rare occasions. dang has talked a lot about the reasoning behind this.

The same applies to comments on HN. Comments are not moderated based purely on legal or certain general “good manners” grounds, they are moderated to keep a certain kind of discourse level. For example, shallow jokes or meme comments are not generally allowed on HN. Comments that start discussing controversial topics, even if civil, are also discouraged when they are not on-topic.

Overall, HN is very much curated in the direction of a newspaper “letters to the editor” section, rather than being algorithmic and hands-off like the Facebook wall or TikTok feed. So there is no doubt whatsoever, I believe, that HN would be considered responsible for user content (and is, in fact, already pretty good at policing that in my experience, at least on the front page).

> The HN moderation team makes a lot of editorial choices, which is what gives HN its specific character. For example, highly politically charged posts are manually moderated and kept off the main page regardless of votes, with limited exceptions entirely up to the judgement of the editors. For example, content about the wars in Ukraine and Israel is not allowed on the mainpage except on rare occasions. dang has talked a lot about the reasoning behind this.

This is meaningfully different in kind from only excluding posts that reflect certain perspectives on such a conflict. Maintaining topicality is not imposing a bias.

> This is meaningfully different in kind from only excluding posts that reflect certain perspectives on such a conflict. Maintaining topicality is not imposing a bias.

Maintaining topicality is literally a bias. Excluding posts that reflect certain perspectives is censorship.

There are things like ‘second chance’, where the editorial team can re-up posts they feel didn’t get a fair shake the first time around; sometimes, if a post gets too ‘hot’, they will cool it down — all of this is understandable but unfortunately does mean they are actively moderating content and thus are responsible for all of it.

Dang has been open about voting being only one part of the way HN works, and that manual moderator intervention does occur. They will downweight the votes of “problem” accounts, manually adjust the order of the front page, and do whatever they feel necessary to maintain a high signal-to-noise ratio.

Every time you see a comment marked as (dead) that means a moderator deleted it. There is no auto-deletion resulting from downvotes.

Even mentioning certain topics, such as Israel’s invasion of Palestine, even when the mention is on-topic and not disruptive, as in this comment you are reading, is practically a death sentence for a comment. Not because of votes, but because of the moderators. Downvotes may prioritize which comments go in front of moderators (we don’t know) but moderators make the final decision; comments that are downvoted but not removed merely stick around in a light grey colour.

By enabling showdead in your user preferences and using the site for a while, especially reading controversial threads, you can get a feel for what kinds of comments are deleted by the moderators. It is clear that most moderation is about editorial control and not simply the removal of disruption.

This comment may be dead by the time you read it, due to the previous mention of Palestine – hi to users with showdead enabled. Its parent will probably merely be downvoted, because it’s wrong but doesn’t contain anything that would irk the mods.

Comments that are marked (dead) without the (flagged) indicator are like that because the user that posted the comment has been banned. For green (new) accounts this can be due to automatic filters that threw up false positives for new accounts. For old accounts this shows that the account (not the individual comment) has been banned by moderators. Users who have been banned can email [email protected], pledging to follow the rules in the future, and they’ll be granted another chance. Even if a user remains banned, you can unhide a good (dead) comment by clicking on its timestamp and clicking “vouch.”

Comments are marked (flagged) (dead) when ordinary users have clicked on the timestamp and selected “flag.” So user downvotes cannot kill a comment, but flagging by ordinary non-moderator users can kill it.

Freedom of speech, not freedom of reach for their personal curation preferences and narrative shaping driven by confirmation bias and survivorship bias. Tech is in the business of putting some on the scales to increase their signal and decrease others’, based upon some hokey story of academic and free-market genius.

The pro-science crowd (which includes me, fwiw) seems incapable of providing a proof that any given scientist is that important. The same old social-politics norms inflate some, deflate others, and we take our survival as confirmation that we’re special. One’s education is vacuous prestige, given physics applies equally; oh, you did the math! Yeah, I just tell the computer to do it. Oh, you memorized the circumlocutions and dialectic of some long-dead physicist. Outstanding.

There’s a lot of ego-driven, banal, classist nonsense in tech and science. At the end of the day, we’re just meat suits with the same general human condition.

(1) 4chin is too dumb to use HN, and there’s no image posting, so I doubt they’d even be interested in raiding us.
(2) I’ve never seen anything illegal here; I’m sure it happens, but it gets dealt with quickly enough that it’s not really ever going to be a problem if things continue as they have been.

They may lose 230 protection, sure, but it’s probably not really a problem here. For Facebook et al, it’s going to be an issue, no doubt. I suppose they could drop their algos and bring back chronological feeds, but my guess is that wouldn’t be profitable, given that ad tech and content feeds are one and the same at this point.

I’d also assume that “curation” is the sticking point here, if a platform can claim that they do not curate content, they probably keep 230 protection.

Certain boards most definitely raid various HN threads.

Specifically, every political or science thread that makes it is raided by 4chan. 4chan also regularly pushes anti-science and anti-education agenda threads to the top here, along with posts from various alt-right figures on occasion.

search: site:4chan.org news.ycombinator.com

Seems pretty sparse to me, and from a casual perusal, I haven’t seen any actual calls to raiding anything here, it’s more of a reference where articles/posts have happened, and people talking about them.

Remember, not everyone who you disagree with comes from 4chan, some of them probably work with you, you might even be friends with them, and they’re perfectly serviceable people with lives, hopes, dreams, same as yours, they simply think differently than you.

lol dude. Nobody said that 4chan links are posted to HN, just that 4chan definitely raids HN.

4chan is very well known for brigading. It is also well known that posting links for brigades on 4chan, as well as in a number of other locations such as Discord, is an extremely common thing the alt-right does to try to raise the “validity” of their statements.

I also did not claim that only these opinions come from 4chan. Nice strawman bro.

Also, my friends do not believe these things. I do not make a habit of being friends with people that believe in genociding others purely because of sexual orientation or identity.

Go ahead and type that search query into google and see what happens.

Also the alt-right is a giant threat, if you categorize everyone right of you as alt-right, which seems to be the standard definition.

That’s not how I’ve chosen to live, and I find that it’s peaceful to choose something more reasonable. The body politic is cancer on the individual, and on the list of things that are important in life, it’s not truly important. With enough introspection you’ll find that the tendency to latch onto politics, or anything politics-adjacent, comes from an overall lack of agency over the other aspects of life you truly care about. It’s a vicious cycle. You have a finite amount of mental energy, and the more you spend on worthless things, the less you have to spend on things that matter, which leads to you latching further on to the worthless things, and having even less to spend on things that matter.

It’s a race to the bottom that has only losers. If you’re looking for genocide, that’s the genocide of the modern mind, and you’re one foot in the grave already. You can choose to step out now and probably be ok, but it’s going to be uncomfortable to do so.

That’s all not to say there aren’t horrid, problem-causing individuals out in the world, there certainly are, it’s just that the less you fixate on them, the more you realize that they’re such an extreme minority that you feel silly fixating on them in the first place. That goes for anyone that anyone deems ‘horrid and problem-causing’ mind you, not just whatever idea you have of that class of person.

> Go ahead and type that search query into google and see what happens.

What are you expecting it to show? That site removes all content after a matter of days.

These people win elections and make news cycles. They are not an “ignorable, small minority”.

For the record, ensuring that those who wish to genocide LGBT+ people are not the majority voice on the internet is absolutely not “a worthless matter”, not by any stretch. I would definitely rather not have to do this, but then, the people who dedicate their lives to trolling and hate are extremely active.

>4chin is too dumb to use HN

I don’t frequent 4cuck, I use soyjak.party, which I guess from your perspective is even worse, but there are plenty of smart people on the ’cuck thoughbeit, like the gemmy /lit/ schizo. I think you would feel right at home in /sci/.

> I think the ultimate problem is that social media is not unbiased — it curates what people are shown.

This is literally the purpose of Section 230. It’s Section 230 of the Communications Decency Act. The purpose was to change the law so platforms could moderate content without incurring liability, because the law was previously that doing any moderation made you liable for whatever users posted, and you don’t want a world where removing/downranking spam or pornography or trolling causes you to get sued for unrelated things you didn’t remove.

The CDA was about making it clearly criminal to send obscene content to minors via the internet. Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content. It does have a subsection to clarify that attempting to remove objectionable content doesn’t remove your common carrier protections, but I don’t believe that was a response to pre-CDA status quo.

> The CDA was about making it clearly criminal to send obscene content to minors via the internet.

Basically true.

> Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content.

No, it wasn’t, and you can tell that because there is literally not a single word to that effect in Section 230. It was to enable information service providers to exercise editorial control over user-submitted content without acquiring publisher-style liability, because the alternative, given the liability decisions occurring at the time and the way providers were reacting to them, was that any site using user-sourced content at scale would, to mitigate legal risk, be completely unmoderated, which was the opposite of the vision the authors of Section 230 and the broader CDA had for the internet. There are no “common carrier” obligations or protections in Section 230. The terms of the protection are the opposite of common carrier, and while there are limitations on the protections, there are no common-carrier-like obligations attached to them.

> The CDA was about making it clearly criminal to send obscene content to minors via the internet.

That part of the law was unconstitutional and pretty quickly got struck down, but it still goes to the same point that the intent of Congress was for sites to remove stuff and not be “common carriers” that leave everything up.

> Section 230 was intended to clarify the common carrier role of ISPs and similar providers of third party content. It does have a subsection to clarify that attempting to remove objectionable content doesn’t remove your common carrier protections, but I don’t believe that was a response to pre-CDA status quo.

If you can forgive Masnick’s chronic irateness, he does a decent job of explaining the situation:

https://www.techdirt.com/2024/08/29/third-circuits-section-2…

It’s also classic commercial activity. Because 230 exists, we are able to have many intentionally different social networks and web tools. If there were no moderation — for example, if you couldn’t delete porn from LinkedIn — all social networks would be the same. Likely there would only be one large one. If all moderation were pushed to the client side, it might seem like we could retain what we have, but it seems very possible we could lose the diverse ecosystem of the Online world and end up with something like Walmart.

This would be the worst outcome of a rollback of 230.

> The purpose was to change the law so platforms could moderate content

What part of deliberately showing political content to people algorithmically expected to agree with it, constitutes “moderation”?

What part of deliberately showing political content to people algorithmically expected to disagree with it, constitutes “moderation”?

What part of deliberately suppressing or promoting political content based on the opinions of those in charge of the platform, constitutes “moderation”?

What part of suppressing “misinformation” on the basis of what’s said in “reliable sources” (rather than any independent investigation – but really the point would still stand), constitutes “moderation”?

What part of favouring content from already popular content creators because it brings in more ad revenue, constitutes “moderation”?

What part of algorithmically associating content with ads for specific products or services, constitutes “moderation”?

> What part of deliberately showing political content to people algorithmically expected to agree with it, constitutes “moderation”?

Well, maybe it’s just me, but only showing political content that doesn’t include “kill all the (insert minority here)”, and expecting users to not object to that standard, is a pretty typical aspect of moderation for discussion sites.

> What part of deliberately suppressing or promoting political content based on the opinions of those in charge of the platform, constitutes “moderation”?

Again, deliberately suppressing support for literal and obvious fascism, based on the opinions of those in charge of the platform, is a kind of moderation so typical that it’s noteworthy when it doesn’t happen (e.g. Stormfront).

> What part of suppressing “misinformation” on the basis of what’s said in “reliable sources” (rather than any independent investigation – but really the point would still stand), constitutes “moderation”?

Literally all of Wikipedia, where the whole point of the reliable sources policy is that the people running it don’t have to be experts to have a decently objective standard for what can be published.

The rise of social media was largely predicated on the curation it provided. People, and particularly advertisers, wanted a curated environment. That was the key differentiator to the wild west of the world wide web.

The idea that curation is a problem with social media is always a head-scratcher for me. The option to just directly publish to the world wide web without social media is always available, but time and again, that option is largely not chosen… this ruling could well narrow things down to that being the only option.

Now, in practice, I don’t think that will happen. This will raise the costs of operating social media, and those costs will be reflected in prices advertisers pay to advertise on social media. That may shrink the social media ecosystem, but what it will definitely do is raise the draw bridge over the moat around the major social media players. You’re going to see less competition.

> People, and particularly advertisers, wanted a curated environment

Then give the choice to the user.

If a user wants to opt in, or change their moderation preferences then they should be allowed.

By all means offer a choice of moderation decisions. And let the user change them, opt out conditionally and ignore them if they so choose.

> You’re free

Actually it seems like with these recent rulings, we will be free to use major social media platforms where the choice of moderation is given to the user, lest those social media platforms are otherwise held liable for their “speech”.

I am fully fine with accepting the idea that if a social media platform doesn’t act as a dumb pipe, then their choice of moderation is their “speech” as long as they can be held fully legally liable for every single moderation/algorithm choice that they make.

Fortunately for me, we are commenting on a post where a legal ruling was made to this effect, and the judge agrees with me that this is how things ought to be.

> You say that like that choice doesn’t exist.

You said this: “People, and particularly advertisers, wanted a curated environment.”

If moderation choices are put in the hands of the user, then what you are describing is not a problem, as the user can have that.

Therefore, you saying that this choice exists, means that there isn’t a problem for anyone who chooses to not have the spam, and your original complaint is refuted.

> I’m saying the choice exists. The choices we make are the problem.

Well then feel free to choose differently for yourself.

Your original statement was this: “People, and particularly advertisers, wanted a curated environment.”

You referencing what people “want” is directly refuted by the idea that they should be able to choose whatever their preferences are.

And your opinion on other people’s choices doesn’t really matter here.

> You referencing what people “want” is directly refuted by the idea that they should be able to choose whatever their preferences are.
>
> And your opinion on other people’s choices doesn’t really matter here.

I think maybe we’re talking past each other. What I’m saying what people “want” is a reflection of the overwhelming choices they make. They’re choosing the curated environments.

The “problem” that is being referenced is the curation. The claim is that the curation is a problem; my observation is that it is the solution all the parties involved seem to want, because they could, at any time, choose otherwise.

> They’re choosing the curated environments

Ok, and if more power is given to the user and the user is additionally able to control their current curation, then that’s fine and you can continue to have your own curated environment, and other people will also have more or less control over their own curation.

Problem solved! You get to keep your curation, and other people can also change the curation on existing platforms for their own feeds.

> The claim is that the curation is a problem

Nope. Few people have a problem with other people having a choice of curation.

Instead, the solution that people are advocating for is for more curating powers to be given to individual users so that they can choose, on current platforms, how much is curated for themselves.

Easy solution.

>The option to just directly publish to the world wide web without social media is always available,

Not exactly. You still have to procure web hosting somewhere, and that hosting provider might choose to refuse your money and kick you off.

You might also have to procure the services of Cloudflare if you face significant traffic, and Cloudflare might choose to refuse your money and kick you off.

>that option is largely not chosen…

That’s because most people have neither the time nor the will to learn and speak computer.

Social media and immediate predecessors like WordPress were and are successful because they brought down the lowest common denominator to “Smack keys and tap Submit”. HTML? CSS? Nobody has time for our pig latin.

> You still have to procure web hosting somewhere, and that hosting provider might choose to refuse your money and kick you off.

Who says you need to procure a web hosting provider?

But yes, if you connect your computer up to other computers, the other computers may decide they don’t want any part of what you have to offer.

Without that, I wouldn’t want to be on the Internet. I don’t want to be forced to ingest bytes from anyone who would send them my way. That’s just not a good value proposition for me.

> That’s because most people have neither the time nor the will to learn and speak computer.

I’m sorry, but no. You can literally type into a word processor or any number of other tools and select “save as web content”, and then use any number of products to take a web page and serve it up to the world wide web. It’s been that way for the better part of 25 years. No HTML or CSS knowledge needed. If you can’t handle that, you can just record a video, save it to a file, and serve it up over a web server. Yes, you need to be able to use a computer to participate on the world wide web, but no more than you do to use social media.

Now, what you won’t get is a distribution platform that gets your content up in front of people who never asked for it. That is what social media provides. It lowers the effort for the people receiving the content, as in exactly the curation process that the judge was ruling about.

>You can literally type in to a word processor or any number of other tools

Most people these days don’t have a word processor or, indeed, “any number of other tools”. It’s all “in the cloud”, usually Google Docs or Office 365 Browser Edition(tm).

>select “save as web content”

Most people these days don’t (arguably never) understand files and folders.

>and then use any number of products to take a web page and serve it up to the world wide web.

Most people these days cannot be bothered. Especially when the counter proposal is “Make an X account, smash some keys, and press Submit to get internet points”.

>If you can’t handle that you can just record a video, save it to a file, and serve it up over a web server.

I’m going to stop you right here: You are vastly overestimating both the will and the computer-aptitude of most people. There is a reason Youtube and Twitch have killed off literally every other video sharing service; there is a reason smartphones killed off personal computers (desktops and to a lesser degree laptops).

Social media became the juggernaut it is today because businesses figured out how to capitalize on the latent demand for easy sharing of information: Literal One Click Solutions(tm) that anyone can understand.

>what you won’t get is a distribution platform that gets your content up in front of people who never asked for it.

The internet and more specifically search engines in general have always been that distribution platform. The only thing that changed in the last 30 years is how easy it is to get your stuff on that platform.

> Most people these days don’t have a word processor or, indeed, “any number of other tools”. It’s all “in the cloud”, usually Google Docs or Office 365 Browser Edition(tm).

Read that again. 😉

> Most people these days don’t (arguably never) understand files and folders.

We can debate on the skills of “most people” back and forth, but I think it’s fair to say that “save as web content” is easier to figure out than figuring out how to navigate a social media site (and that doesn’t necessarily require files or folders). If that really is too hard for someone, there are products out there designed to make it even easier. Way back before social media took over, everyone and their dog managed to figure out how to put stuff on the web. People who couldn’t make it through high school were successfully producing web pages, blogs, podcasts, video content, you name it.

> I’m going to stop you right here: You are vastly overestimating both the will and the computer-aptitude of most people.

I disagree. I think they don’t have the will to do it, because they’d rather use social media. I do believe if they had the will to do it, they would. I agree there are some people who lack the computer-aptitude to get content on the web. Where I struggle is believing those same people manage to put content on social media… which I’ll point out is on the web.

> There is a reason Youtube and Twitch have killed off literally every other video sharing service

Yes, because video sharing at scale is fairly difficult and requires real skill. If you don’t have that skill, you’re going to have to pay someone to do it, or find someone who has their own agenda that makes them want to do it without charging you… like Youtube or Twitch.

On the other hand, putting a video up on the web that no one knows about, no one looks for, and no one consumes unless you personally convince them to do so is comparatively simple.

> there is a reason smartphones killed off personal computers (desktops and to a lesser degree laptops)

Yes, that reason is that smartphones were subsidized by carriers. 😉

But it’s good that you mentioned smartphones, because smart phones will let you send content to anyone in your contacts without you having anything that most would describe as “computer-aptitude”. No social media needed… and yet the prevailing preference is for people to go through a process of logging in, shaping content to suit the demands of social media services, attempting to tune the content to get “the algorithm” to show it to as many people as possible, and put their content there. That takes more will/aptitude/whatever, but they do it for the distribution/audience.

> Social media became the juggernaut it is today because businesses figured out how to capitalize on the latent demand for easy sharing of information: Literal One Click Solutions(tm) that anyone can understand.

I’d agree with you if you said “distribute” instead of “sharing”. It’s really hard to get millions of people to consume your content. That is, until social media came along and basically eliminated the cost of distribution. So any idiot can push their content out to millions and fill the world with whatever they want… and now there’s a sense of entitlement about it, where if a platform doesn’t push that content on other people, at no cost to them, they feel they’re being censored.

Yup, that does really require social media.

> The internet and more specifically search engines in general have always been that distribution platform. The only thing that changed in the last 30 years is how easy it is to get your stuff on that platform.

No, the Internet & the web required you to go looking for the content you wanted. Search engines (at least at one time) were designed to accelerate that process of finding exactly the content you were looking for, and get you off their platform ASAP. Social media is kind of the opposite of search engines. They want you to stay on their platform; they want you to keep scrolling through whatever “engaging” content they can find, regardless of what you’re looking for; if you forget about whatever you were originally looking for, that’s a bonus. It’s that ability to have your content show up when no one is looking for it where social media provides an advantage over the web for content makers.

> is that social media is not unbiased

That’s how I read it, too. Section 230 doesn’t say you can’t get in trouble for failure to moderate, it says that you can’t get in trouble for moderating one thing but not something else (in other words, the government can’t say, “if you moderated this, you could have moderated that”). They seem to be going back on that now.

Real freedom from censorship – you cannot be held liable for content you hosted – has never been tried. The US government got away with a lot of COVID-era soft censorship by just strong-arming social media sites into suppressing content because there were no first-amendment style protections against that sort of soft censorship. I’d love to see that, but there’s no reason to think that our government is going in that direction.

> social media is not unbiased …

Media, generally, social or otherwise, is not unbiased. All media has bias. The human act of editing, selecting stories, framing those stories, authoring or retelling them… it’s all biased.

I wish we would stop seeking unbiased media as some sort of ideal, and instead seek open biases — tell me enough about yourself and where your biases lie, so I can make informed decisions.

This reasoning is not far off from the court’s thinking: editing is speech. A For You Page is edited, and is TikTok’s own speech.

That said, I do agree with your meta point. Social media (hn not excluded) is a generally unpleasant place to be.

It all comes down to the assertion made by the author:

> There is no way to run a targeted ad social media company with 40% margins if you have to make sure children aren’t harmed by your product.

Don’t let children use it?
In TN that will be illegal Jan 1 – unless social media creates a method for parents to provide ID and opt their children out of being blocked, I think?

Wouldn’t that put the responsibility back on the parents?

The state told you XYZ was bad for your kids and it’s illegal for them to use, but then you bypassed that restriction and put the sugar back into their hands with an access-blocker-blocker.

Random wondering

Age limitations for things are pretty widespread. Of course, they can be bypassed to various degrees but, depending upon how draconian you want to be, you can presumably be seen as doing the best you reasonably can in a virtual world.

I’m not sure about video, but we are no longer in an era when manual moderation is necessary. Certainly for text, moderation for child safety could be as easy as taking the written instructions currently given to human moderators and having an LLM interpreter (only needs to output a few bits of information) do the same job.
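For illustration, here is a minimal sketch of what that could look like, assuming a generic `call_llm` helper standing in for whichever model API you actually use (the function name, labels, and policy text are hypothetical, not any platform’s real pipeline):

  # Hypothetical sketch: LLM-assisted text moderation for child safety.
  # `call_llm` is a placeholder for whatever completion API you actually use.
  MODERATOR_INSTRUCTIONS = """
  You are applying the site's child-safety policy. Given a post, answer with
  exactly one word: ALLOW, REVIEW, or REMOVE.
  - REMOVE: content that encourages self-harm, dangerous "challenges", or is
    otherwise unsafe for minors.
  - REVIEW: borderline content a human moderator should look at.
  - ALLOW: everything else.
  """

  def moderate(post_text: str, call_llm) -> str:
      """Return ALLOW, REVIEW, or REMOVE for a single post."""
      prompt = f"{MODERATOR_INSTRUCTIONS}\n\nPost:\n{post_text}\n\nDecision:"
      decision = call_llm(prompt).strip().upper()
      # Fail closed: anything unexpected goes to human review.
      return decision if decision in {"ALLOW", "REVIEW", "REMOVE"} else "REVIEW"

The point being that the model only has to emit a few bits of signal; enforcement, appeals, and edge cases still live in the surrounding pipeline.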

There are two questions – one is “should social media companies be globally immune from liability for any algorithmic decisions”, to which this case says “no”. Then there is “in any given case, is the social media company guilty of the harm of which it is accused?” Outcomes for that would evolve over time (and I would hope for clarifying legislation as well).

At the scale social media companies operate at, absolutely perfect moderation with zero false negatives is unavailable at any price. Even if they had a highly trained human expert manually review every single post (which is obviously way too expensive to be viable) some bad stuff would still get through due to mistakes or laziness. Without at least some form of Section 230, the internet as we know it cannot exist.

“Social media” is a broad brush though. I operate a Mastodon instance with a few thousand users. Our content timeline algorithm is “newest on top”. Our moderation is heavily tailored to the users on my instance, and if a user says something grossly out of line with our general vibe, we’ll remove them. That user is free to create an account on any other server who’ll have them. We’re not limiting their access to Mastodon. We’re saying that we don’t want their stuff on our own server.

What are the legal ramifications for the many thousands of similar operators which are much closer in feel to a message board than to Facebook or Twitter? Does a server run by Republicans have to accept Communist Party USA members and their posts? Does a vegan instance have to allow beef farmers? Does a PlayStation fan server have to host pro-PC content?

If it is a reckoning for social media then so be it. Social media net-net was probably a mistake.

But I doubt this gets upheld on appeal. Given how fickle this Supreme Court is, they’ll probably overrule themselves to fit their agenda, since they don’t seem to think precedent is worth a damn.

But what are the implications?

No more moderation? This seems bad.

No more recommendation/personalization? This could go either way, I’m also willing to see where this one goes.

No more public comment sections? Ars Technica claimed back in the day, when Section 230 was last under fire, that this would be the result if it was ever taken away. This seems bad.

I’m not sure what will happen, I see 2 possible outcomes that are bad and one that is maybe good. At first glance this seems like bad odds.

Actually there’s a fourth possibility, and that’s holding Google responsible for whatever links they find for you. This is the nuclear option. If this happens, the internet will have to shut all of its American offices to get around this law.

Would bluesky not solve this issue?

The underlying hosted service is nearly completely unmoderated and unpersonalised. It’s just streams of bits and data routing. You can scan for or limit the propagation of CSAM or DMCA-infringing content to some degree as an infrastructure provider, but that’s really about it, and even then only to a fairly limited degree; it doesn’t stop other providers (or self-hosted participants) from propagating that content anyway.

Then you provide custom feed algorithms, labelling services, moderation services, etc on top of that but none of them change or control the underlying data streams. They just annotate on top or provide options to the client.

Then the user’s client is the one that directly consumes all these different services on top of the base service to produce the end result.

It’s a truly unbiased, Section 230-compatible protocol (under even the strictest interpretation) that the user can then optionally combine with any number of secondary services and add-ons to craft their personalised social media experience.
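A rough sketch of that layered composition, in hypothetical Python rather than the actual AT Protocol API (all names here are made up for illustration):

  # Hypothetical sketch of client-side composition: the hosting layer just
  # serves raw posts; feeds, labels, and moderation prefs are applied client-side.
  from dataclasses import dataclass, field

  @dataclass
  class Post:
      uri: str
      text: str
      labels: set = field(default_factory=set)  # annotations from labelling services

  def fetch_raw_posts(host):
      """Hosting layer: an unranked, unpersonalised stream of posts."""
      return list(host)                          # `host` is any iterable of Posts here

  def apply_moderation_prefs(posts, hidden_labels):
      """Label-based filtering, decided entirely by the user's client."""
      return [p for p in posts if not (p.labels & set(hidden_labels))]

  def apply_feed_algorithm(posts, rank):
      """A feed generator chosen by the user; it reorders but never mutates."""
      return sorted(posts, key=rank, reverse=True)

  def build_timeline(host, rank, hidden_labels):
      posts = fetch_raw_posts(host)
      posts = apply_moderation_prefs(posts, hidden_labels)
      return apply_feed_algorithm(posts, rank)

The design point being described is that none of the later layers can change what the hosting layer stores; they only reorder, annotate, or hide it for the client that opted into them.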

I think HN sees this as just more activist judges trying to overrule the will of the people (via Congress). This judge is attempting to interject his opinion on the way things should be over what a law passed by the highest legislative body in the nation says, as if that doesn’t count. He is also doing it on very shaky ground, but I wouldn’t expect anything less of the 3rd Circuit (much like the 5th).

For the case in question the major problem seems to be, specifically, what content do we allow children to access.

There’s an enormous difference in the debate between what should be prohibited and what should be prohibited for children.

I always wondered why Section 230 does not have a carve-out exemption to deal with the censorship issue.

I think we’d all agree that most websites are better off with curation and moderation of some kind. If you don’t like it, you are free to leave the forum, website, etc. The problem is that Big Tech fails to work in the same way, because those properties have effectively become the “public highways” through which everyone must pass.

This is not dissimilar from say, public utilities.

So, why not define how a tech company becomes a Big Tech “utility”, and therefore cannot hide behind the Section 230 exception for things that it willingly does, like censorship?

Wonder no longer! It’s Section 230 of the communications “decency” act, not the communication freedoms and regulations act. It doesn’t talk about censorship because that wasn’t in the scope of the bill. (And actually it does talk about censorship of obscene material in order to explicitly encourage it.)

I look at forums and social media as analogous to writing a “Letter to the Editor” to a newspaper:

In the newspaper case, you write your post, send it to the newspaper, and some editor at the newspaper decides whether or not to publish it.

In Social Media, the same thing happens, but it’s just super fast and algorithmic: You write your post, send it to the Social Media site (or forum), an algorithm (or moderator) at the Social Media site decides whether or not to publish it.

I feel like it’s reasonable to interpret this kind of editorial selection as “promotion” and “recommendation” of that comment, particularly if the social media company’s algorithm deliberately places that content into someone’s feed.

I agree.

I think if social media companies relayed communication between their users with no moderation at all, then they should be entitled to carrier protections.

As soon as they start making any moderation decisions, they are implicitly endorsing all other content, and should therefore be held responsible for it.

There are two things social media can do. First, they should accurately identify their users before allowing them to post, so they can countersue that person if a post harms them; second, they can moderate every post.

Everybody says this will kill social media as we know it, but I say the world will be a better place as a result.

So the solution is “more speech?” I don’t know how that will unhook minors from the feedback loop of recommendation algorithms and their plastic brains. It’s like saying ‘we don’t need to put laws in place to combat heroin use, those people could go enjoy a good book instead!’.

Yes, the solution is more speech. Teach your kids critical thinking or they will be fodder for somebody else who has it. That happens regardless of who’s in charge, government or private companies. If you can’t think for yourself and synthesize lots of disparate information, somebody else will do the thinking for you.

You’re mistaken as to what this ruling is about. Ultimately, when it comes right down to it, the Third Circuit is saying this (directed at social media companies):

“The speech is either wholly your speech or wholly someone else’s. You can’t have it both ways.”

Either they get to act as a common carrier (telephone companies are not liable for what you say on a phone call because it is wholly your own speech and they are merely carrying it) or they act as a publisher (liable for everything said on their platforms because they are exercising editorial control via algorithm). If this ruling is upheld by the Supreme Court, then they will have to choose:

* Either claim the safe harbour protections afforded to common carriers and lose the ability to curate algorithmically

or

* Claim the free speech protections of the First Amendment but be liable for all content as it is their own speech.

Algorithmic libel detectors don’t exist. The second option isn’t possible. The result will be the separation of search and recommendation engines from social media platforms. Since there’s effectively one search company in each national protectionist bloc, the result will be the creation of several new monopolies that hold the power to decide what news is front-page, and what is buried or practically unavailable. In the English-speaking world that right would go to Alphabet.

The second option isn’t really meant for social media anyway. It’s meant for traditional publishers such as newspapers.

If this goes through I don’t think it will be such a big boost for Google search as you suggest. For one thing, it has no effect on OpenAI and other LLM providers. That’s a real problem for Google, as I see a long term trend away from traditional search and towards LLMs for getting questions answered, especially among young people. Also note that YouTube is social media and features a curation algorithm to deliver personalized content feeds.

As for social media, I think we’re better off without it! There’s countless stories in the news about all the damage it’s causing to society. I don’t think we’ll be able to roll all that back but I hope we’ll be able to make things better.

If the ruling was upheld, Google wouldn’t gain any new liability for putting a TikTok-like frontend on video search results; the only reason they’re not doing it now is that all existing platforms (including YouTube) funnel all the recommendation clicks back into themselves. If YouTube had to stop offering recommendations, Google could take over their user experience and spin them off into a hosting company that derived its revenue from AdSense and its traffic from “Google Shorts.”

This ruling is not a ban on algorithms, it’s a ban on the vertical integration between search or recommendation and hosting that today makes it possible for search engines other than Google to see traffic.

I actually don’t think Google search will be protected in its current form. Google doesn’t show you unadulterated search results anymore, they personalize (read: editorialize) the results based on the data they’ve collected on you, the user. This is why two different people entering the same query can see dramatically different results.

If Google wants to preserve their safe harbour protections they’ll need to roll back to a neutral algorithm that delivers the same results to everyone given an identical query. This won’t be the end of the world for Google but it will produce lower quality results (at least in the eyes of normal users who aren’t annoyed by the personalization). Lower quality results will further open the doors to LLMs as a competitor to search.

And newspapers decide every single word they publish, because they’re liable for it. If a newspaper defames someone they can be sued.

This whole case comes down to having your cake and eating it too. Newspapers don’t have that. They have free speech protections but they aren’t absolved of liability for what they publish. They aren’t protected under section 230.

If the ruling is upheld by SCOTUS, Google will have to choose: section 230 (and no editorial control) or first amendment plus liability for everything they publish on SERPs.

Solutions that require everyone to do a thing, and do it well, are doomed to fail.

Yes, it would be great if parents would, universally, parent better, but getting all of them (or a large enough portion of them for it to make a difference) to do so is essentially impossible.

Government controls aren’t a solution either though. The people with critical thinking skills, who can effectively tell others what to think, simply capture the government. Meet the new boss, same as the old boss.

> Yes, the solution is more speech.

I think we’ve reached the point now where there is more speech than any person can consume by a factor of a million. It now comes down to picking what speech you want to hear. This is exactly what content algorithms are doing -> out of the millions of hours of speech produced in a day, they’re giving you your 24 hours of it.

Saying “teach your kids critical thinking” is a solution, but it’s not the solution. At some point, you have to discover content out of those millions of hours a day. It’s impossible to do yourself — it’s always going to be curated.

EDIT: To whomever downvoted this comment, you made my point. You should have replied instead.

I agree with this. Kids are already subject to an agenda; for example, never once in my K-12 education did I learn anything about sex. This was because it was politically controversial at the time (and maybe it still is now), so my school district just avoided the issue entirely.

I remember my mom being so mad about the curriculum in general that she ran for the school board and won. (I believe it was more of a math and science type thing. She was upset with how many coloring assignments I had. Frankly, I completely agreed with her then and I do now.)

I was lucky enough to go to a charter school where my teachers encouraged me to read books like “People’s History of the U.S” and “Lies My Teacher Told Me”. They have an agenda too, but understanding that there’s a whole world of disagreement out there and that I should seek out multiple information sources and triangulate between them has been a huge superpower since. It’s pretty shocking to understand the history of public education and realize that it wasn’t created to benefit the student, but to benefit the future employers of those students.

K so several of the most well-funded tech companies on the planet sink literally billions of dollars into psyops research to reinforce addictive behavior and average parents are expected to successfully compete against it with…a lecture.

We have seen that adults can’t seem to unhook from these dopamine delivery systems and you’re expecting that children can do so?

Sorry. That’s simply disingenuous.

Yes, children and especially teenagers do lots of things even though their parents try to prevent them. But even though children and teenagers still get them, we don’t throw up our hands and sell them tobacco and alcohol anyway.

Open-source the algorithm and have users choose. A marketplace is the best solution to most problems.

It is pretty clear that China already forces a very different TikTok ranking algo for kids within the country vs outside the country. Forcing a single algo is pretty un-American though and can easily be abused; let’s instead open it up.

80% of users will leave things at the default setting, or “choose” whatever the first thing in the list is. They won’t understand the options; they’ll just want to see their news feed.

I’m not so sure, the feed is quite important and users understand that. Look at how many people switched between X and Threads given their political view. People switched off Reddit or cancelled their FB account at times in the past also.

I’m pretty sure going from X to Threads had very little to do with the feed algorithm for most people. It had everything to do with one platform being run by Musk and the other one not.

Unfortunately, the biases of newspapers and social media sites are only diverse if they are not all under the strong influence of the wealthy.

Even if they may have different skews on some issues, under a system where all such entities are operated entirely for-profit, they will tend to converge on other issues, largely related to maintaining the rights of capital over labor and over government.

Yeah, pretty much. What’s not clear to me though is how non-targeted content curation, like simply “trending videos” or “related videos” on YouTube, is impacted. IMO that’s not nearly as problematic and can be useful.

> In a very general sense, this ruling could be seen as a form of net neutrality

In reality this will not be the case; instead it will introduce the bias of regulators to replace the bias companies want there to be. Even with the companies’ motivation to sell users’ attention, I cannot see this as an improvement. No, the result will probably be worse.

HN also has an algorithm.

I’ll have to read the Third Circuit’s ruling in detail to figure out whether they are trying to draw a line in the sand on whether an algorithm satisfies the requirements for Section 230 protection or falls outside of it. If that’s what they’re doing, I wouldn’t assume a priori that a site like Hacker News won’t also fall afoul of the law.

> I think the ultimate problem is that social media is not unbiased — it curates what people are shown.

It is not only biased but also biased for maximum engagement.

People come to these services for various reasons but then have this specifically biased stuff jammed down their throats in a way to induce specific behavior.

I personally don’t understand why we don’t hammer these social media sites for conducting psychological experiments without consent.

This is a much needed regulation. If anything it will probably spur innovation to solve safety in algorithms.

I think of this more along the lines of preventing a factory from polluting a water supply or requiring a bank to have minimum reserves.

Refusal to moderate, though, is also a bias. It produces a bias where the actors who post the most have their posts seen the most. Usually these posts are Nigerian princes, Viagra vendors, and the like. Nowadays they’ll also include massive quantities of LLM-generated cryptofascist propaganda (but not cryptomarxist propaganda because cryptomarxists are incompetent at propaganda). If you moderate the spam, you’re biasing the site away from these groups.

You can’t just pick anything and call it a “bias” – absolutely unmoderated content may not (will not) represent the median viewpoint, but it’s not the hosting provider “bias” doing so. Moderating spam is also not “bias” as long as you’re applying content-neutral rules for how you do that.

These are some interesting mental gymnastics. Zuckerberg literally publicly admitted the other day that he was forced by the government to censor things without a legal basis. Musk disclosed a whole trove of emails about the same at Twitter. And you’re still “not so sure”? What would it take for you to gain more certainty in such an outcome?

Haven’t looked into the Zuckerberg thing yet but everything I’ve seen of the “Twitter Files” has done more to convince me that nothing inappropriate or bad was happening, than that it was. And if those selective-releases were supposed to be the worst of it? Doubly so. Where’s the bad bit (that doesn’t immediately stop looking bad if you read the surrounding context whoever’s saying it’s bad left out)?

Means you haven’t really looked into the Twitter files. They were literally holding meetings with the government officials and were told what to censor and who to ban. That’s plainly unconstitutional and heads should roll for this.

> How did the government force Facebook to comply

By asking.

The government asking you to do something is like a dangerous schoolyard bully asking for your lunch money. Except the gov has the ability to kill, imprison, and destroy. Doesn’t matter if you’re an average Joe or a Zuckerberg.

So it’s categorically impossible for the government to make any non-coercive request or report for anything because it’s the government?

I don’t think that’s settled law.

For example, suppose the US Postal Service opens a new location, and Google Maps has the pushpin on the wrong place or the hours are incorrect. A USPS employee submits a report/correction through normal channels. How is that trampling on Google’s first-amendment rights?

This is obviously not a real question, so instead of answering I propose we conduct a thought experiment. The year is 2028, and Zuck has had a change of heart and fully switched sides. Facebook, Threads, and Instagram now block the news of Barron Trump’s drug use and of his lavishly compensated seat on the board of Russia’s Gazprom, and ban the dominant electoral candidate off social media. In addition they allow the spread of a made-up dossier (funded by the RNC) about Kamala Harris’ embarrassing behavior with male escorts in China.

What you should ask yourself is this: irrespective of whether compliance is voluntary or not, is political censorship on social media OK? And what kind of a logical knot one must contort one’s mind into to suggest that this is the second coming of net neutrality? Personally I think the mere fact that the government is able to lean on a private company like that is damning AF.

You’re grouping lots of unrelated things.

All large sites have terms of service. If you violate them, you might be removed, even if you’re “the dominant electoral candidate”. Remember, no one is above the law, or in this case, the rules that a site wishes to enforce.

I’m not a fan of political censorship (unless that means enforcing the same ToS that everyone else is held to, in which case, go for it). Neither am I for the radical notion of legislation telling a private organization that they must host content that they don’t wish to.

This has zero to do with net neutrality. Nothing. Nada.

Is there evidence that the government leaned on a private company instead of meeting with them and asking them to do a thing? Did Facebook feel coerced into taking actions they wouldn’t have willingly done otherwise?

This turns on what TikTok “knew”:

“But by the time Nylah viewed these videos, TikTok knew that: 1) “the deadly Blackout Challenge was spreading through its app,” 2) “its algorithm was specifically feeding the Blackout Challenge to children,” and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31–32. Yet TikTok “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their (For You Pages).” App. 32–33. Instead, TikTok continued to recommend these videos to children like Nylah.

We need to see another document, “App 31-32”, to see what TikTok “knew”. Could someone find that, please? A Pacer account may be required. Did they ignore an abuse report?

See also Gonzalez v. Google (2023), where a similar issue reached the U.S. Supreme Court.(1) That was about whether recommending videos which encouraged the viewer to support the Islamic State’s jihad led someone to go fight in it, where they were killed. The Court rejected the terrorism claim and declined to address the Section 230 claim.

(1) https://en.wikipedia.org/wiki/Gonzalez_v._Google_LLC

IIRC, TikTok has (had?) a relatively high-touch content moderation pipeline, where any video receiving more than a few thousand views is checked by a human reviewer.

Their review process was developed to hit the much more stringent speech standards of the Chinese market, but it opens them up to even more liability here.

I unfortunately can’t find the source articles for this any more, they’re buried under “how to make your video go viral” flowcharts that elide the “when things get banned” decisions.

> Their review process was developed to hit the much more stringent speech standards of the Chinese market

TikTok isn’t available in China. They have a separate app called Douyin.

  TikTok, Inc., via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.” One video depicted the “Blackout Challenge,” which encourages viewers to record themselves engaging in acts of self-asphyxiation. After watching the video, Nylah attempted the conduct depicted in the challenge and unintentionally hanged herself. -- https://cases.justia.com/federal/appellate-courts/ca3/22-3061/22-3061-2024-08-27.pdf?ts=1724792413

An algorithm accidentally enticed a child to hang herself. I’ve got code running on dozens of websites that recommends articles to read based on user demographics. There’s nothing in that code that would or could prevent an article about self-asphyxiation being recommended to a child. It just depends on the clients that use the software not posting that kind of content, people with similar demographics to the child not reading it, and a child who gets the recommendation not reading it and acting it out. If those assumptions fail should I or my employer be liable?
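For context, the kind of recommender being described is often no more content-aware than something like this (a simplified, hypothetical sketch, not the actual code in question):

  # Simplified sketch of a demographic-similarity recommender. Nothing here
  # inspects what the articles are about: if readers "like" you clicked on
  # something dangerous, it can be recommended to you too.
  from collections import Counter

  def recommend(user_demographics, click_log, top_n=5):
      """click_log entries look like {"demographics": {...}, "article_id": "..."}."""
      def similarity(other):
          return sum(1 for k, v in user_demographics.items() if other.get(k) == v)

      scores = Counter()
      for event in click_log:
          scores[event["article_id"]] += similarity(event["demographics"])
      return [article for article, _ in scores.most_common(top_n)]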

Yes.

Or do you do things that give you rewards, not caring what else they cause, but want to be automatically saved from any responsibility for the consequences just because an algorithm is involved?

Enjoying the benefits while running away from responsibility is a cowardly and childish act. Childish acts need supervision from adults.

You seem to be overlooking the fact that the late plaintiff was 10 years old. The case turns on whether TikTok knowingly served children content encouraging them to attempt life-threatening activities.

You want to bake cookies yet refuse to take responsibility for the possibility of somebody choking on them, or sell cars without making crashes impossible!

Impossible goals are an asinine standard and “responsibility” and “accountability” are the favorite weasel words of those who want absolute discretion to abuse power.

If the Mercedes infotainment screen had shown you a curated recommendation that you run them over, prior to you doing so, they very possibly would (and should) be liable.

If it is operating mechanically, then it is following a process chosen by the developers who wrote the code. They work for the company, so the consequences are still the company’s responsibility.

What happened to that child is on the parents, not some programmer who coded an optimization algorithm. It’s really as simple as that. No 10 year old should be on TikTok, and I’m not sure anyone under 18 should be, given the garbage, dangerous misinformation, intentional disinformation, and lack of any ability to control what your child sees.

Do you feel the same way about the sale of alcohol? I do see the argument for parental responsibility, but I’m not sure how parents will enforce that if the law allows people to sell kids alcohol free from liability.

We regulate the sale of all sorts of things that can do damage but also have other uses. You can’t buy large amounts of certain cold medicines, and you need to be an adult to do so. You can’t buy fireworks if you are a minor in most places. In some countries they won’t even sell you a set of steak knives if you are underage.

Someone else’s response was that a 10 year old should not be on TikTok. Well then, how did they get past the age restrictions? (I’m guessing it’s a checkbox at best.) So it’s inadequately gated. But really, I don’t think it’s the sort of thing that needs an age gate.

They are responsible for a product that is actively targeting harmful behavior at children and adults. It’s not ok in either situation. You cannot allow your platform to be hijacked for content like this. Full stop.

These ‘services’ need better ways to moderate content. If that is more controls that allow them to delete certain posts and videos or some other method to contain videos like this. You cannot just allow users to upload and share whatever they want. And further, have your own systems promote these videos.

Everyone who makes a product(especially for mass consumption), has a responsibility to make sure their product is safe. If your product is so complicated that you can’t control it, then you need to step back and re-evaluate how it’s functioning. Not just plow ahead, making money, letting it harm people.

Alcohol (the consumable form) serves only one purpose: to get you buzzed. Unlike algorithms and hammers, which are generic and serve many purposes, some of which are positive, especially when used correctly. You can’t sue the people who make hammers if someone kills another person with one.

> Alcohol (the consumption form) serves only one purpose to get you buzzed.

Since consumable alcohol has other legitimate uses besides getting a buzz on, I don’t think this point stands. For example, it’s used quite often in cooking and (most of the time?) no intoxicating effects remain in the final product.

You said sue the hammer manufacturer. Why didn’t you say to sue the newspaper that ran the ads? The fact that you couldn’t keep that straight in your analogy undermines your argument significantly imo.

We’re not talking about “all algorithms” any more than the alcohol example is talking about “all liquids”. Social media algorithms have one purpose: to manipulate people into more engagement, to manoeuvre them into forgoing other activities in favour of more screen time, in the service of showing them more ads.

You would think so, wouldn’t you?

Except right now YouTube has a self-advertisement in the middle of the page warning people not to trust the content on YouTube. A company warning people not to trust the product they built and the videos they choose to show you… we need to rethink 230. We’ve gone seriously awry.

It’s more nuanced than that. If I sent a hateful letter through the mail and someone gets hurt by it (even physically), who is responsible, me or the post office?

I know youtube is different in important ways than the post, but it’s also different in important ways from e.g. somebody who builds a building that falls down.

If the post office opened your letter, read it, and then decided to copy it and send it to a bunch of kids, you would be responsible for your part in creating it, and they would be responsible for their part in disseminating it.

The Post Office just delivers your mail, it doesn’t do any curation.

YouTube, TikTok, etc. differ by applying an algorithm to “decide” what to show you. Those algorithms have all sorts of weights and measures, but they’re ultimately personalized to you. And if they’re making personalized recommendations that include “how to kill yourself”… I think we have a problem?

It’s simply not just a FIFO of content in, content out, and in many cases (Facebook & Instagram especially) the user barely gets a choice in what is shown in the feed…

Contrast with e.g. Mastodon where there is no algorithm and it only shows you what you explicitly followed, and in the exact linear order it was posted.

(Which is actually how Facebook used to be)

It’d be more akin to buying a hammer and then the hammer starts morphing into a screwdriver without you noticing.

Then when you accidentally hit your hand with the hammer, you actually stabbed yourself. And that’s when you realized your hammer is now a screwdriver.

Yes, I thought that’s what I said – no one knows the shape of the danger social media currently poses.

It’s like trying to draw a tiger when you’ve never seen an animal. We only have the faintest clue what social media is right now. It will change in the next 25+ years as well.

Sure we know some dangers but… I think we need more time to know them all.

It sounds like your algorithm targets children with unmoderated content. That feels like a dangerous position with potential for strong arguments in either direction. I think the only reasonable advice here is to keep close tabs on this case.

Does it specifically target children or does it simply target people and children happen to be some of the people using it?

If a child searches Google for “boobs”, it’s not fair to accuse Google of showing naked women to children, and definitely not fair to even say Google was targeting children.

Part of the claim is that TikTok knew about this content being promoted and other cases where children had died as a result.

> But by the time Nylah viewed these videos, TikTok knew that: 1) “the deadly Blackout Challenge was spreading through its app,” 2) “its algorithm was specifically feeding the Blackout Challenge to children,” and 3) several children had died while attempting the Blackout Challenge after viewing videos of the Challenge on their For You Pages. App. 31-32. Yet TikTok “took no and/or completely inadequate action to extinguish and prevent the spread of the Blackout Challenge and specifically to prevent the Blackout Challenge from being shown to children on their (For You Pages).” App. 32-33. Instead, TikTok continued to recommend these videos to children like Nylah.

Do you think this should be legal? Would you do nothing if you knew children were dying directly because of the content you were feeding them?

Yes, if a product actively contributes to child fatalities then the manufacturer should be liable.

Then again, I guess your platform is about article recommendation and not about recording yourself doing popular trends. And perhaps children are not your target audience, or an audience at all. In many ways the situation was different for TikTok.

I think it depends on some technical specifics, like which meta data was associated with that content, and the degree to which that content was surfaced to users that fit the demographic profile of a ten year old child.

If your algorithm decides that things in the 90th percentile of shock value will boost engagement to a user profile that can also include users who are ten years old then you maybe have built a negligent algorithm. Maybe that’s not the case in this particular instance but it could be possible.
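As a hypothetical sketch of that distinction (the function names and the 90th-percentile threshold are made up for illustration), the difference between a purely engagement-driven ranker and one that gates high-shock content when a minor could be in the audience is small in code but large in effect:

  # Hypothetical contrast: pure engagement optimisation vs. the same ranking
  # with a guard that keeps high-shock content out of feeds that may reach minors.
  def rank_naive(candidates, predicted_engagement):
      # Shock value tends to win; the audience's age is never consulted.
      return sorted(candidates, key=predicted_engagement, reverse=True)

  def rank_guarded(candidates, predicted_engagement, shock_score,
                   profile_may_include_minor, shock_threshold=0.9):
      if profile_may_include_minor:
          candidates = [c for c in candidates if shock_score(c) < shock_threshold]
      return sorted(candidates, key=predicted_engagement, reverse=True)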

“I have a catapult that launches loosely demo-targeted things, without me checking what is being loaded into it. I only intend for harmless things to be loaded. Should I be liable if someone loads a boulder and it hurts someone?”

Right?

Like if I’m a cement company, and I build a sidewalk that’s really good and stable, stable enough for a person to plant a milk crate on it, and stand on that milk crate, and hold up a big sign that gives clear instructions on self-asphyxiation, and a child reads that sign, tries it out and dies, am I going to get sued? All I did was build the foundation for a platform.

That’s not a fair analogy though. To be fairer, you’d have to monitor said footpath 24/7 and have a robot and/or a number of people removing milk crate signs that you deemed inappropriate for your footpath. They’d also move various milk crate signs in front of people as they walked and hide others.

If you were indeed monitoring the footpath for milk crate signs and moving them, then yes, you may be liable for showing one to someone it wouldn’t be appropriate for, or for not removing it.

That’s a good point, and actually the heart of the issue, and what I missed.

In my analogy the stable sidewalk that can hold the milk crate is both the platform and the optimization algorithm. But to your point there’s actually a lot more going on with the optimization than just building a place where any rando can market self-asphyxiation. It’s about how they willfully targeted people with that content.

Of course you should be. Just because an algorithm gave you an output doesn’t absolve you of responsibility for using it. It’s not some magical, mystical thing. It’s something you created, and you are 100% responsible for what you do with its output.

For anyone making claims about what the authors of Section 230 intended or the extent to which Section 230 applies to targeted recommendations by algorithms, the authors of Section 230 (Ron Wyden and Chris Cox) wrote an amicus brief (1) for Gonzalez v. Google (2023). Here is an excerpt from the corresponding press release (2) by Wyden:

> “Section 230 protects targeted recommendations to the same extent that it protects other forms of content presentation,” the members wrote. “That interpretation enables Section 230 to fulfill Congress’s purpose of encouraging innovation in content presentation and moderation. The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable Internet users and platforms alike. Section 230’s protection remains as essential today as it was when the provision was enacted.”

(1)(PDF) https://www.wyden.senate.gov/download/wyden-cox-amicus-brief…

(2) https://www.wyden.senate.gov/news/press-releases/sen-wyden-a…

This statement from Wyden’s press release seems to be in contrast to Chris Cox’s reasoning in his journal article (1) (linked in the amicus).

  It is now firmly established in the case law that Section 230 cannot act as a shield whenever a website is in any way complicit in the creation or development of illegal content.

  ...

  In FTC v. Accusearch,(69) the Tenth Circuit Court of Appeals held that a website’s mere posting of content that it had no role whatsoever in creating — telephone records of private individuals — constituted “development” of that information, and so deprived it of Section 230 immunity. Even though the content was wholly created by others, the website knowingly transformed what had previously been private information into a publicly available commodity. Such complicity in illegality is what defines “development” of content, as distinguished from its creation.

He goes on to list multiple similar cases and how they fit the original intent of the law. Then further clarifies that it’s not just about illegal content, but all legal obligations:

  In writing Section 230, Rep. Wyden and I, and ultimately the entire Congress, decided that these legal rules should continue to apply on the internet just as in the offline world. Every business, whether operating through its online facility or through a brick-and-mortar facility, would continue to be responsible for all of its own legal obligations.

Though, ultimately the original reasoning matters little in this case, as the courts are the ones to interpret the law. In fact Section 230 is one part of the larger Communications Decency Act that was mostly struck down by the Supreme Court.

EDIT: Added quote about additional legal obligations.

(1): https://jolt.richmond.edu/2020/08/27/the-origins-and-origina…

The Accusearch case was a situation in which the very act of reselling a specific kind of private information would’ve been illegal under the FTC Act if you temporarily ignore Section 230. If you add Section 230 into consideration, then you have to consider knowledge, but the knowledge analysis is trivial. Accusearch should’ve known that reselling any 1 phone number was illegal, so it doesn’t matter whether Accusearch knew the actual phone numbers it sold. Similarly, a social media site that only allows blackout challenge posts would be illegal regardless of whether the site employees know whether post #123 is actually a blackout challenge post. In contrast, most of the posts on TikTok are legal, and TikTok is designed for an indeterminate range of legal posts. Knowledge of specific posts matters.

Whether an intermediary has knowledge of specific content that is illegal to redistribute is very different from whether the intermediary has “knowledge” that the algorithm it designed to rank legally distributable content can “sometimes” produce a high ranking to “some” content that’s illegal to distribute. The latter case can be split further into specific illegal content that the intermediary has knowledge of and illegal content that the intermediary lacks knowledge of. Unless a law such as KOSA passes (which it shouldn’t (1)), the intermediary has no legal obligation to search for the illegal content that it isn’t yet aware of. The intermediary need only respond to reports, and depending on the volume of reports the intermediary isn’t obligated to respond within a “short” time period (except in “intellectual property cases”, which are explicitly exempt from Section 230). “TikTok knows that TikTok has blackout challenge posts” is not knowledge of post PQR. “TikTok knows that post PQR on TikTok is a blackout challenge post” is knowledge of post PQR.

Was TikTok aware that specific users were being recommended specific “blackout challenge” posts? If so, then TikTok should’ve deleted those posts. Afterward, TikTok employees should’ve known that its algorithm was recommending some blackout challenge posts to some users. Suppose that TikTok employees are already aware of post PQR. Then TikTok has an obligation to delete PQR. If in a week blackout challenge post HIJ shows up in the recommendations for user @abc and @xyz, then TikTok shouldn’t be liable for recommendations of HIJ until TikTok employees read a report about it and then confirm that HIJ is a blackout challenge post. Outwardly, @abc and @xyz will think that TikTok has done nothing or “not enough” even though TikTok removed PQR and isn’t yet aware of HIJ until a second week passes. The algorithm doesn’t create knowledge of HIJ no matter how high the algorithm ranks HIJ for user @abc. The algorithm may be TikTok’s first-party speech, but the content that is being recommended is still third-party speech.

Suppose that @abc sues TikTok for failing to prevent HIJ from being recommended to @abc during the first elapsed week. The First Amendment would prevent TikTok from being held liable for HIJ (third-party speech that TikTok lacked knowledge of during the first week). As a statute that provides an immunity (as opposed to a defense) in situations involving redistribution of third-party speech, Section 230 would allow TikTok to dismiss the case early; early dismissals save time and court fees.

Does the featured ruling by the Third Circuit mean that Section 230 wouldn’t apply to TikTok’s recommendation of HIJ to @abc in the first elapsed week? Because if so, then I really don’t think that the Third Circuit is reading Section 230 correctly. At the very least, the Third Circuit’s ruling will create a chilling effect on complex algorithms in violation of social media websites’ First Amendment freedom of expression. And I don’t believe that Ron Wyden and Chris Cox intended for websites to only sort user posts by chronological order (like multiple commenters on this post are hoping will happen as a result of the ruling) when they wrote Section 230.
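To make the “knowledge of specific posts” distinction concrete, a toy model (hypothetical names throughout, not a claim about how any platform actually works) might look like this:

  # Toy model of the argument above: the platform only "knows" about a specific
  # post once a report has been reviewed and confirmed, no matter how highly the
  # ranking algorithm scores that post in the meantime.
  class Platform:
      def __init__(self):
          self.known_violations = set()   # posts the platform has confirmed
          self.pending_reports = []       # user reports not yet reviewed

      def report(self, post_id):
          self.pending_reports.append(post_id)

      def review_reports(self, is_violation):
          # Knowledge of a specific post is created here, not by the ranker.
          while self.pending_reports:
              post_id = self.pending_reports.pop()
              if is_violation(post_id):
                  self.known_violations.add(post_id)

      def may_recommend(self, post_id):
          return post_id not in self.known_violations

Under that reading, liability would attach to recommending a post already in known_violations, not to the ranker surfacing a post the platform hasn’t reviewed yet.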

(1) https://reason.com/2024/08/20/censoring-the-internet-wont-pr…

I’m skeptical that Ron Wyden anticipated algorithmic social media feeds in 1996. But I’m pretty sure he gets a decent amount of lobbying cash from interested parties.
