Arrest of Pavel Durov, Telegram CEO, charges of terrorism, fraud, child porn

This seems like the Kim Dotcom situation again.

Why are these service providers being punished for what their users do? Specifically, these service providers? Because Google, Discord, Reddit, etc. all contain some amount of CSAM (and other illegal content), yet I don’t see Pichai, Citron, or Huffman getting indicted for anything.

Hell, then there’s the actual infrastructure providers too. This seems like a slippery slope with no defined boundaries, which the government can arbitrarily use to pin the blame on people they don’t like. Because ultimately, almost every platform with user-provided content will host some quantity of illegal material.

But maybe I’m just being naive?

Dotcom got extradited (which was declared legal much later). Durov landed in a country that had an arrest warrant out for him.

I hope his situation isn’t similar to Dotcom’s, as Dotcom was shown to be complicit in the crimes he was being prosecuted for. Convicting the Megaupload people would’ve been a LOT harder if they hadn’t been uploading and curating illegal content on their platform themselves.

As a service provider, you’re not responsible for what your users post as long as you take appropriate action after being informed of illegal content. That’s where they’re trying to get Telegram, because Telegram is known to ignore law enforcement as much as possible (to the point of doing so illegally and getting fined for it).

https://restoreprivacy.com/telegram-sharing-user-data/

> the operators of the messenger app Telegram have released user data to the Federal Criminal Police Office (BKA) in several cases. According to SPIEGEL information, this was data from suspects in the areas of child abuse and terrorism. In the case of violations of other criminal offenses, it is still difficult for German investigators to obtain information from Telegram, according to security circles.

https://threema.ch/en/blog/posts/chat-apps-government-ties-a…

> two popular chat services have accused each other of having undisclosed government ties. According to Signal president Meredith Whittaker, Telegram is not only “notoriously insecure” but also “routinely cooperates with governments behind the scenes.” Telegram founder Pavel Durov, on the other hand, claims that “the US government spent $3 million to build Signal’s encryption” and Signal’s current leaders are “activists used by the US state department for regime change abroad.”

According to the more detailed news sources I can find about this, it seems he knew the French were looking for him. I don’t know if he knew about the contents of the warrant, but it does seem he knew the authorities were planning to arrest him.

From what I can tell the warrant has been out for longer, but he was arrested when the airport police noticed his name was on a list. There’s not a lot of information out there, with neither the French authorities nor Telegram providing any official statements to the media.

The Sud-Ouest article must have been updated, because the version currently online does not mention that at all. Quite the opposite: the article quotes an official who was surprised that Durov would come to Paris even though he knew he was under an arrest warrant in France, and another source says he may have decided to come to France anyway because he believed he’d never be held accountable.

Fuck around and find out. If he legitimately ignored legal French documents forcing him to share information, as the French have declared, he’s got got.

You don’t set foot in a country with an extradition treaty, let alone the country itself, when you’re flouting its warrants for your company’s data.

That inconvenient Bill of Rights keeps us a step or two behind the rest of the anglosphere in the descent into tyranny, but only for so long. It just takes a handful of dishonest judges to claim some right actually means something entirely different.

Watch his interview with Tucker Carlson and you’ll see. He doesn’t acquiesce to government requests for moderation control, censorship, or sharing private user data, so they target him. He refuses to implement backdoors as well. In stark contrast to western social media companies.

> He refuses to implement backdoors as well.

We have no way to know this, and (unlike Signal), Telegram doesn’t give us best-effort assurances by doing things like open-sourcing its code.

Russian govt officials are protesting his arrest.

When an authoritarian govt is calling for the release of someone who runs a “private” messenger, it suggests they have a back door. Otherwise they tend to oppose all private messaging.

No, there is no logical link between the two events. Russian govt can protest that for propaganda reasons: to make a point that Western governments are restricting freedom of speech.

They’re hitting that Uno Reverse card. Tbf, the US does a LOT of the stuff that we openly criticize Russia and China for. I would hope people have enough insight to recognize that this is a bad thing across the board. The only people who get hurt and face consequences from this kind of thing are the citizens.

I don’t completely buy that he was arrested because he didn’t cooperate with authorities. Police forces around the world have a history of infiltrating criminal groups and gaining their trust; planting backdoors isn’t the only way they can investigate people.
Also, this way they’re yelling loudly to these people: “hurry! pick another platform!”

And then, he is also on Putin’s wanted list; his arrest could one day turn him into a valuable bargaining chip.

While it’s amazing for them to keep maintaining it, as the person mentioned down the thread, it’s hard to know what they are actually running, right? And it’s not a lot of work to patch this or clone/branch as necessary before deploying. Oh well, I’ve already resigned myself to the fact that a part of my life will be run by someone else.

Not sure what part of my comment amused you so much.

An IM platform server can be open sourced. Just like any kind of software.

It’s just a matter of publishing your code and, preferably, making it possible to verify that the service your users are connecting to is built from the same published code.
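A minimal sketch of that verification step, assuming the build is reproducible (i.e. the published source compiles to byte-identical output); the function names here are illustrative, not any real project’s API:

```python
import hashlib


def sha256_of(path: str) -> str:
    """Return the hex SHA-256 digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def builds_match(reproduced_build: str, deployed_build: str) -> bool:
    """True if a locally reproduced build is byte-identical to the artifact
    the service actually ships."""
    return sha256_of(reproduced_build) == sha256_of(deployed_build)
```

In practice the hard part isn’t comparing digests, it’s making the build deterministic at all (timestamps, file paths, toolchain versions all sneak into binaries).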

If you bothered to look, you would find that both of the examples given are open-source servers. You might then deduce that you misunderstood the comment to which you replied.

This distinction gets lost in these discussions all of the time. A company that makes an effort to comply with laws is in a completely different category than a company that makes the fact that they’ll look the other way one of their core selling points.

Years ago there was a case where someone built a business out of making hidden compartments in cars. He did an amazing job of making James Bond style hidden compartments that perfectly blended into the interior. He was later arrested because drug dealers used his hidden compartment business to help their drug trade.

There was an uproar about the fact that he wasn’t doing the drug crimes himself. He was only making hidden compartments which could be used for anything. How was he supposed to know that the hidden compartments were being used for illegal activities rather than keeping people’s valuables safe during a break-in?

Yet when the details of the case came out, IIRC, it was clear that he was leaning into the illegal trades and marketing his services to those people. He lost his plausible deniability after even a cursory look at how he was operating.

I don’t know what, if any, parts of that case apply to Pavel Durov. I do like to share it as an example of how intent matters and how one can become complicit in other crimes by operating in a manner where one of your selling points is that you’ll help anyone out even when their intent is to break the law. It’s also why smart corporate criminals will shut down and walk away when it becomes too obvious that they’re losing plausible deniability in a criminal enterprise.

Try to deposit 10k to your bank account and then, when they call you and ask the obvious question, answer that you sold some meth or robbed someone. They will totally be fine with this answer, as they are just a platform for providing money services and well, you can always just pay for everything in cash.

If you deposit more than 10k the IRS simply gets automatically notified. No one calls you to ask where you got the money.

The IRS actually expects you to report income earned from illegal activities, they _explicitly_ state this in Publication 17.

If this were really true for banks there would be a large number of bankers in jail. That number being close to zero, I guess the courts are very lax at charging bankers for crimes.

> and you knew about the illegal behavior

Your analogy is terrible and doesn’t make sense.

If you provide a service that is used for illegal behavior AND you know it’s being used that way AND you explicitly market your services to users behaving illegally AND the majority of your product is used for illegal deeds THEN you’re gonna have a bad time.

If one out of ten thousand people use your product for illegal deeds you’re fine. If it’s 9 out of 10 you probably aren’t.

> If one out of ten thousand people use your product for illegal deeds you’re fine.

This logic clearly makes the imprisonment of someone like the owner of Telegram difficult to justify, since 99.999% of messages on Telegram are completely legal.

If you know your services are going to be used to commit a crime, then yes, that makes you an accessory and basically all jurisdictions (I know basically nothing about French criminal law) can prosecute you for that. Crime is, y’know, illegal.

I’m appalled that you would argue in good faith that a tool for communicating in secret can be reasonably described as a service used to commit a crime.

Why aren’t all gun manufacturers in jail then? They must know a percentage of their products are going to be used to commit crimes. A much larger percentage than those using Telegram to commit one.

The answer to this charade is that to “prove” that you’re not doing anything wrong you need to secretly provide all data from anyone that the government doesn’t like. Otherwise you go to jail.

Yes.

If you are directly aiding and abetting without any plausible attempt to minimize bad actors from using your services then absolutely.

For example, CP absolutely exists on platforms like FB or IG, but Meta will absolutely try to moderate it away to the best of their ability and cooperate with law enforcement when it is brought to their attention.

And like I have mentioned a couple times before, Telegram was only allowed to exist because the UAE allowed them to, and both the UAE and Russia gained ownership stakes in Telegram by 2021. Also, messaging apps can only legally operate in the UAE if they provide decryption keys to the UAE govt because all instant messaging apps are treated as VoIP under their Telco regulation laws.

There have been plenty of cases where anti-Russian govt content was moderated away during the 2021 protests – https://www.wired.com/story/the-kremlin-has-entered-the-chat…

If you are a criminal lawyer who is providing a defense, that is acceptable, because everyone is entitled to a fair trial and a defense.

If you are a criminal lawyer who is directly abetting in criminal behavior (eg. a Saul Goodman type) you absolutely will lose your Bar License and open yourself up to criminal penalties.

If you are a criminal lawyer who is in a situation where your client wants you to abet their criminal behavior, then you are expected to drop the client and potentially notify law enforcement.

> If you are a criminal lawyer who is directly abetting in criminal behavior

Not a lawyer myself but I believe this is not a correct representation of the issue.

A lawyer abetting in criminal behaviour is committing a crime, but the crime is not offering his services to criminals, which is completely legal.

When offering their services to criminals, law firms or individual lawyers in most cases are not required to report crimes they have been made aware of under attorney-client privilege, and are not required to take steps to minimize bad actors’ use of their services.

In short: unless they are committing crimes themselves, criminal lawyers are not required to steer clear of criminals; usually the opposite is true.

Again, the presumption of innocence does exist.

Yep. Your explanation is basically what I was getting at.

In this case, Telegram showed bad faith moderation. They are not a lawyer, and don’t operate with the same constraints.

What do you mean “look the other way?” Does the phone company “look the other way” when they don’t listen in to your calls? Does the post office “look the other way” when they don’t read your mail?

That guy who built the hidden compartments should absolutely not have gone to jail. The government needs to be put in check. This has gotten ridiculous.

If the police tell them illegal activity is happening and give them a warrant to wiretap and they are capable of doing so but refuse then yeah they’re looking the other way. That’s not even getting into things like PRISM.

> operating in a manner where one of your selling points is that you’ll help anyone out even when their intent is to break the law

is it what happened here?

in my view Durov is the owner renting his apartment and not caring what people do inside it, which is not illegal; someone could go as far as to say that it is morally reprehensible, but it’s not illegal in any way.

It would be different if Durov knew but did not report it.

Which, again, doesn’t seem what happened here and it must be proven in a court anyway, I believe everyone in our western legal systems still has the right to the presumption of innocence.

Telegram not spying on its users is the same thing as Mullvad not spying on its users and not saving the logs. I consider it a feature not a bug, for sure not complicity in any crime whatsoever.

> the owner renting his apartment and not caring what people do inside it, which is not illegal

Problem is if you know what these people do inside it and you don’t do anything about it.

As far as I can see, CP is probably the fastest way to get a channel and its related account wiped on Telegram, in a very short time. As a Telegram group manager, I often see automated purges of CP-related ads/content, or auto-lockouts for managers to clean up the channel/group. Saying Telegram isn’t managing CP problems is just absurd. I really feel like they just manufactured a reason for other purposes.

Read the founder’s exit letter. WhatsApp is definitely not E2E encrypted for all features.

You leak basic metadata (who talked to whom at what time).

You leak 100% of messages with “business account”, which are another way to say “e2e you->meta and then meta relays the message e2e to N recipients handling that business account”.

Then there are all the links and images, which are sent e2e you->meta; meta stores the image/link once, sends you back a hash, and you send that hash e2e to your contact.
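That media flow amounts to content-addressed storage: the server keeps one copy per unique blob, and clients only exchange the reference. A toy sketch of the idea (not WhatsApp’s actual protocol; `MediaStore` and its methods are made up for illustration):

```python
import hashlib


class MediaStore:
    """Toy content-addressed store: one copy per unique blob, keyed by hash."""

    def __init__(self):
        self._blobs: dict = {}

    def upload(self, blob: bytes) -> str:
        """Store the blob (deduplicated) and return its content hash."""
        ref = hashlib.sha256(blob).hexdigest()
        self._blobs[ref] = blob  # identical blobs map to the same key
        return ref

    def fetch(self, ref: str) -> bytes:
        """Any client holding the reference can retrieve the blob."""
        return self._blobs[ref]


# Sender uploads the image once, then forwards only the small reference
# through the (encrypted) message channel; the recipient fetches it by hash.
store = MediaStore()
ref = store.upload(b"cat.jpg bytes")
assert store.fetch(ref) == b"cat.jpg bytes"
```

In a real deployment the blob would typically be encrypted with a key bundled into the reference, but deduplication still lets the server see which clients share the same blob, which is the kind of leak being described here.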

There are so many leaks it’s not even fun to poke fun at them.

And I pity anyone who is fool enough to think meta products are e2e anything.

> with “business account”, which are another way to say “e2e you->meta and then meta relays

Actually, it’s a nominated endpoint, and from there it’s up to the business. It works out better for Meta, because they aren’t liable for the content if something goes wrong (i.e. a secret is leaked, or PII gets out). Great for GDPR, because as they aren’t acting as a processor of PII they are less likely to be taken to court.

WhatsApp has about the same level of practical “privacy” (encryption is a loaded word here) as iMessage. The difference is, there are many more easy ways to report nasty content in WhatsApp, which reported ~1 million cases of CSAM a year vs Apple’s 267. (Not 200k, just 267. That’s the whole of Apple. https://www.missingkids.org/content/dam/missingkids/pdfs/202…)

Getting the content of normal messages is pretty hard, getting the content of a link, much easier.

It’s not Signal, but then it was never meant to be.

Telegram is for the most part not end-to-end encrypted, one to one chats can be but aren’t by default, and groups/channels are never E2EE. That means Telegram is privy to a large amount of the criminal activity happening on their platform but allegedly chooses to turn a blind eye to it, unlike Signal or WhatsApp, who can’t see what their users are doing by design.

Not to say that deliberately making yourself blind to what’s happening on your platform will always be a bulletproof way to avoid liability, but it’s a much more defensible position than being able to see the illegal activity on your platform and not doing anything about it. Especially in the case of seriously serious crimes like CSAM, terrorism, etc.

If law enforcement asked them nicely for access I bet they wouldn’t refuse. Why take responsibility for something if you can just offload it to law enforcement?

The issue is law enforcement doesn’t want that kind of access. Because they have no manpower to go after criminals. This would increase their caseload hundredfold within a month. So they prefer to punish the entity that created this honeypot. So it goes away and along with it the crime will go back underground where police can pretend it doesn’t happen.

Telegram is basically punished for existing and not doing law enforcement job for them.

Maybe they didn’t ask nicely. Or they asked for something else. There’s literally zero drawback for service provider to provide secret access to the raw data that they hold to law enforcement. You’d be criminally dumb if you didn’t do it. Literally criminally.

I bet that if they really asked, they pretty much asked Telegram to build them one click creator that would print them court ready documents about criminals on their platform so that law enforcement can just click a button and yell “we got one!” to the judge.

The chats are encrypted but the backup saved in the cloud isn’t. So if someone gets access to your Google Drive he can read your WhatsApp chats. You can opt-in to encrypt the backup but it doesn’t work well.

Nope, it didn’t even arrive on their end, it prevented me from sending the message and said I wasn’t allowed to send that. So they are pre screening your messages before you send them.

If a service provider can see plain text for a messaging app between the END users, that is NOT end-to-end encryption, by any valid definition. Service providers do not get to be one of the ends in E2EE, no matter what 2019 Zoom was claiming in their marketing. That’s just lying.

Answering law enforcement letters, even if it’s just to say that data cannot be provided, is some 80% of cooperation needed.

Meta can provide conversation and account metadata (Twitter does the same, or used to at least), or suspend accounts.

What has E2EE got to do with it? If you catch someone who sent CP you can open their phone and read their messages. Then you can tell Meta which ones to delete and they can do it from the metadata alone.

> Isn’t Whatsapp supposed to be end-to-end encrypted?

It is supposedly end-to-end encrypted. And in a shallow way. Also the app is closed source and you can’t develop your own.

It’s basically end-to-end-trust-me-bro-level encrypted.

I’m more disturbed by the fact that on HN we have zero devs confirming or denying this thing about FB’s internals wrt encryption. We know there are many devs who work there who are also HN users, but I’ve yet to see one of them chime in on this discussion.

That should scare a lot of us.

I find it pretty ridiculous to assume that any dev would comment on the inner workings of their employers software in any way beyond what is publicly available anyway. I certainly wouldn’t.

For Reddit it is somewhat documented how some power-mods used to flood subreddits with child porn to get them taken down. It was seemingly done with the administration’s blessing. Not sure if it’s still going on, but some of these people are certainly still around, in the same positions.

That’s disgusting but certainly effective to take down something very quickly.

I was very disappointed to hear that UFO related subreddits take down and block UFO sightings. What’s the whole point of the sub if they censor the relevant content.

I’ve actually given up trying to post on Reddit for this reason. Whenever I’ve tried to join a discussion in some relevant subreddit (e.g. r/chess), my post has been auto-removed by a bot because my karma is too low or my account is “too new”. Well, how can I get any karma if all my posts are deleted?

Even those who farm accounts know the simple answer to your question. You have to spend a little time being civil in other subreddits before you reveal the real you. Just takes a few weeks.

The comments I made were quite serious and civil. Not sure what you mean. They were autodeleted by a bot. I wasn’t trolling or anything.

I’m not particularly interested in spending a lot of time posting on reddit. But very occasionally I’ll come across a thread I can contribute meaningfully to and want to comment. Even if allowed I’d probably just make a couple comments a year or something. But I guess the site isn’t set up for that, so fuck it.

Sounds like you glossed over the phrase “in other subreddits”, which is the secret sauce. The point of my phrasing was not to suggest that you aim to be uncivil, but to highlight that the above works even for those who do aim to. So, surely, it should work for you, too.

Comment in subreddits without those restrictions for a bit. E.g. this list: https://www.reddit.com/r/NewToReddit/wiki/index/newusersubs/

I can see how it’s frustrating, but the communities you’re trying to post in are essentially offloading their moderation burden onto the big popular subreddits with low requirements — if you can prove you’re capable of posting there without getting downvoted into oblivion, you’re probably going to be less hassle for the smaller moderator teams.

Actually, HN has a much better system. Comments from new accounts, like your throwaway, are dead by default, but any user can opt in to seeing dead posts, and any user with a small amount of karma can vouch those posts, reviving them. Like I just did to your post.

It’s simpler, the US wants to control the narrative everywhere and in everything, just like in the 90s and 00s. Things like Telegram and Tiktok and to some extent RT, stand in the way of that.

Because the telcos _cooperate_ with law enforcement.

It’s not whether the platform is being used for illegal activity (all platforms are to some extent, as your facile comment shows). It’s whether the operator of a platform actively avoids cooperating with LE to stop that activity once found.

I know. That’s obviously true, but I hate that it happens and it makes no sense to me why more people aren’t upset by it. What I’m trying to get at is that complying with rules that are stupid, ineffective, and unfair is not a good thing and anyone who thinks these goals are reasonable should apply them to equivalent services to realize they’re bad. Cooperation with law enforcement is morally neutral and not important.

The real goal is hurting anyone that’s not aligned with people in power regardless of who is getting helped or harmed. Everyone knows this but so many people in this thread are lying about it.

> anyone who thinks these goals are reasonable should apply them to equivalent services to realize they’re bad

AFAIK these goals _are_ applied to equivalent services. It’s just that twitter, FB, Instagram, WhatsApp, and all the others _do_ put in the marginal amount of effort required to remove/prohibit illicit activity on their platform.

Free speech is one thing, refusing to take down CSAM or drug dealing operating in the open is always going to land you in hot water.

Because those services don’t get shown reports of CSAM and then turn a blind eye to it and do nothing about it.

A person witnessing a crime by itself is not a crime. However, a person witnessing a crime and choosing not to report it is a crime.

> I don’t think it’s a crime not to report a crime

That heavily depends on the jurisdiction. It’s explicitly a crime in Germany, for example: https://www.gesetze-im-internet.de/stgb/__138.html

On top of that, if you can be shown to benefit from the crime (e.g. by knowingly taking payment for providing services to those that commit it), that presumably makes you more than just a bystander in most jurisdictions anyway.

That link you posted is 1) about very specific crimes (treason, murder, manslaughter, genocide etc.) and 2) it applies only when you hear about a crime that is being planned but which has not been committed yet (and can still be prevented).

You’re technically right (I think). However, if you witness a murder and know the murderer, and the police ask you “Do you know anything about X murder?”, then I believe you’re legally required to tell the truth here.

If someone says “I need a cab for after I rob a bank” and you wait and give them a ride afterwards, then you’re almost certainly an accessory. If they flag a random cab off the street, then not.

The English common law tradition has a crime called “misprision”. Misprision of treason is the felony of knowing someone has committed or is about to commit treason but failing to report it to the authorities.

It still exists in many jurisdictions, including the UK, the US (it is a federal crime under 18 U.S. Code § 2382, and also a state crime in most states), Australia, Canada, New Zealand and Ireland.

Related was the crime of “misprision of felony”, which was failure to report a felony (historically treason was not classed as a felony, but as a separate, more serious category of crime). Most common law jurisdictions have abolished it, in large part due to the abolition of the felony-misdemeanour distinction. In the US (which retains that distinction), it remains a federal crime (18 U.S. Code § 4), though case law has apparently narrowed the offence to require active concealment rather than merely passive failure to report (which was its original historical meaning).

Many of the jurisdictions which have abolished misprision of felony still have laws making it a crime not to report certain categories of crime, such as terrorism or child sexual abuse

As a suspect. At least in court, as a completely non-involved bystander you have no right of refusal to testify in most jurisdictions.

Not sure whether that extends to police questioning though.

It doesn’t extend to police questioning; I also pointed out it’s a different thing when you are in court.
For the police, an innocent bystander can turn into a suspect real fast.

That only applies if you’re the defendant.

If you’re the witness to a murder and you’re subpoena’d to court and refuse to testify then you are committing contempt of court. There was a guy in Illinois who got 20 years (reduced to 6 on appeal) for refusing to testify in a murder.

https://illinoiscaselaw.com/clecourses/contempt-of-court-max…

Contempt of court usually has no boundaries on the punishment, nor any jury trials. A judge can just order you to be executed on the spot if you say, fall asleep in his courtroom. Sheriffs in Illinois have the same unbridled power over jail detainees.

As far as I know all western judicial systems, both civil and common law. But as I said, there are exceptions for certain professions, and situations.

I have my dead creepy uncle’s phone in my drawer right now, and can give you soft core child porn from his instagram. His algorithm was even tuned to keep giving endless supply of children dancing in lingerie, naked women breastfeeding children while said children play with her private part, prostitutes of unknown age sharing their number on the screen, and porn frames hidden in videos.

Nobody’s arresting Zuckerberg for that.

> A person witnessing a crime by itself is not a crime. However, a person witnessing a crime and choosing not to report it is a crime.

That’s generally not true, at least in the Anglo legal system.

YouTube ignored reports for CSAM links in comments of “family videos” of children bathing for years until a channel that made a large report on it went viral.

Who you are definitely determines how the law handles you. If you’re Google execs, you don’t have to worry about the courts of the peasantry.

IANAL and not that familiar with the legal situation, but if we assume that running a platform of this type requires you, by law, to moderate such a platform and he fails to do that, idk what we are talking about. Yes, he would clearly be breaking the law. Why would that not get prosecuted in the completely normal, boring way that I would hope all law breaking will eventually be prosecuted?

If you are alleging that there are comparable, specific and actual legal infringements on the part of Meta/Google that somehow go uninvestigated and unpunished, feel free to point to them.

You’re missing the key part: Telegram doesn’t have E2EE enabled by default. Group chats and channels aren’t encrypted at all.

The only E2EE in Telegram is called “secret chats” and they’re 1-on-1.

Frankly, even with unencrypted chats, any law/precedent requiring that platform providers scale moderation linearly with the number of users (which is effectively what this is saying) sounds like really bad policy (and probably further prevents the EU from building actual competitors to American tech companies).

Discord has hundreds of content moderators; Telegram is made by a team of 30 people.

I don’t think messaging startups should be required to employ hundreds of people to read messages.

Isn’t this a consequence of Telegram’s actions?

It was their decision to become something bigger than a simple messaging app by adding channels and group chats with tons of participants.
It was also their decision to understaff content moderation team.

Sometimes the consequence is a legal action, like the one we’re seeing right now. All this could have been easily avoided if they had E2EE or enough people to review reported content and remove it when necessary.

That’s not the case here though. Most of the communication on Telegram is not E2E Encrypted.

Even E2EE messaging service providers have to cooperate in terms of providing communication metadata and responding to takedown requests. Ignoring law enforcement lands you in a lot of shit everywhere, in Russia you’ll just be landing out of a window.

These laws have applied for decades in some shape or form in pretty much all countries, so it shouldn’t come as a surprise.

Have you used Telegram before making this comment? It is moderated. You really think this is about the company, the platform, not about politics? Well you should think again.

It is much less aggressively moderated and censored than Facebook, and pleasant to use. Source: first-hand experience.

But I have no idea if it truly has more or less crime than other platforms, so we can’t really tell if he’s being messed with because he can’t stand up for himself the way Microsoft or Musk can, or whether it is truly a criminal problem.

Telegram is an absolute hive of criminality but, more importantly, Telegram will simply not cooperate with law enforcement.

That is why he’s been lifted. Google et al will cooperate, even if that’s by way of an onerous bureaucratic procedure involving MLATs.

It’s incorrect to say that they weren’t cooperating with authorities at all.

In the EU, Telegram blocked access to certain channels that the EU deemed to be Russian disinformation, for example.

As far as I’ve heard, they did that only under threat of getting kicked out of the Apple and Google app stores. Supposedly, the non-app-store versions don’t have these blocks.

In other words, Apple and Google are the only authorities they recognize (see also (1)). I’m not surprised this doesn’t sit well with many governments.

(1) https://news.ycombinator.com/item?id=41348666

It’s also the government’s role to take measures against harmful actions. Personal rights end where they start to harm others, or harm society in general. They are not an absolute, and always have to be balanced against other concerns.

However, my GP comment was against the claim that “The state has no business judging the truth”. That claim as stated is absurd, because judging what is true is necessary for a state being able to function in the interest of its people. The commenter likely didn’t mean what they wrote.

One can argue what is harmful and what isn’t, and I certainly don’t agree with many things that are being over-moderated. But please discuss things on that level, and don’t absolutize “free speech”, or argue that authorities shouldn’t care about what is true or not.

One of those was @rtnews which is definitely state-sponsored propaganda and remains inaccessible to this day.

They cooperated to some degree, but I’ll go out on a limb to say that the authorities wanted Telegram to be fully subservient to western government interests.

Don’t get me wrong, if you really want to watch it, I think you should be allowed to.

Personally I’m undecided about whether these channels should be publicly available on e.g. free TV channels, but that’s getting off topic.

I think there is a societal interest in unsnoopable messaging.

There is other low-hanging fruit EU governments could pick to address crime; NL has basically become a narcostate and they are just sitting by and watching. Telegram is not the problem.

>Eliminating child pornography and organised crime is a societal rather than ‘government’ interest.

Empirically speaking, governments have had absolutely zero success at this, but their attempts to do so have gotten them the kind of legal power over your life that organised crime could only dream about.

Huh? The traditional mafia is almost non-existent in the US today. RICO and its application has been highly successful at taking down the mafia.

You could certainly argue that RICO was too powerful and is often misapplied, but I’ve never before seen anyone argue that it has been ineffective.

In this instance (RT being banned), it’s Russia’s quite candid strategy to undermine social cohesion in their enemies’ societies, using disinformation. Margarita Simonyan and Vladislav Surkov have each bragged about its success. So yes, for social cohesion, when there’s a malign external actor poisoning public discourse with the intention of splitting societies, a responsible government ought to tackle it.

I think your subtle arguments are wasted on EU’s decision to stop the spread of misinformation and manipulation. It’s that simple for them. Black and white. Us vs them. Don’t think too much, you are taken care of by your “representatives” …

I believe both cases come down to how much effort the leaders put into identifying and purging the bad activities on their platforms.

One would hope that there is clear evidence to support a claim that they’re well aware what they’re profiting off and aren’t aggressively shutting it down.

To use Reddit as an example: in the early days it was the Wild West, and there were some absolutely legally gray subreddits. They eventually booted those, and more recently even seem to ban subreddits just because The Verge wrote an article about how people say bad things there.

Because these countries are hypocrites. Because politics, because these guys are from Russia and China. You can so obviously see there’s discrimination against companies from those countries. Can you imagine France doing this to a US company?

> Because these countries are hypocrites.

Rhetorical question: for what reason should a country be anything other than a hypocrite when it comes to situations such as this? Nations prioritize their own self-interests and that of their allies, even if that makes them appear hypocritical from an outside, or indeed, even an inside perspective. But that doesn’t mean there’s no legitimacy to what they do.

Let’s just say I encrypt illegal content prior to uploading it to Platform A, and share the decryption key separately via Platform B. Maybe even refer Platform A users to a private forum on Platform B to obtain the keys. Are both platforms now on the wrong side of the law?
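The split-channel scheme described above is trivial to implement. Here is a minimal illustrative sketch (stdlib only) using a one-time pad, where the ciphertext blob would go to Platform A and the key to Platform B; a real implementation would use an authenticated cipher like AES-GCM, but the point is how little either platform can see on its own:

```python
import secrets

def otp_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    # One-time pad: the key is random, exactly as long as the
    # message, and must never be reused.
    key = secrets.token_bytes(len(plaintext))
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def otp_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    # XOR with the same key recovers the plaintext.
    return bytes(c ^ k for c, k in zip(ciphertext, key))

# Hypothetical workflow: upload `blob` to Platform A,
# send `key` to recipients via Platform B.
key, blob = otp_encrypt(b"meet at noon")
assert otp_decrypt(key, blob) == b"meet at noon"
```

Platform A holds only uniformly random-looking bytes, and Platform B holds only a key with no ciphertext, so neither platform alone can inspect the content even if it wanted to moderate it.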

> Specifically, these service providers

I’m not a fan of this arrest and I don’t believe service providers have a duty to contravene their security promises so as to monitor their users.

But it seems pretty obvious that governments find the monitoring that Google / Reddit / etc do acceptable, and do not find operation of unmonitorable services acceptable.

All right, what about logless VPN providers like Mullvad?

> do not find operation of unmonitorable services acceptable.

Sounds like something straight out of a dystopian surveillance state novel, very bad outlook if true.

VPNs don’t pose an obstacle to monitoring any specific activity, and as many VPN-using criminals have found, even their ability to stop law enforcement from identifying you is limited. So they’ve been less of an issue. Having said that, I would note that Mullvad was forced to remove port forwarding in response to law enforcement interest, and I don’t think it would be too surprising (or too dystopian) if in the future “connection laundering” is a crime just like money laundering.

> the warrant was issued because of his alleged failure to cooperate with the French authorities.

That would seem to be the key bit. Makes one wonder what level of cooperation is required to not be charged with a slew of the worst crimes imaginable. Is there a French law requiring that messaging providers give up encryption keys that he is known to be in violation of?

> Why are these service providers being punished for what their users do?

Because they crossed the line from common carrier to editor – an entirely different set of obligations.

Also, even common carriers such as telcos must abide by state injunctions against their users.

The difference is that this is not an isolated case on Telegram (you said it yourself: “some amount”, which implies “limited”). You can literally open up the app and, with zero effort, find everything they are accusing them of: drugs, terrorist organizations, public decapitations, you name it. They also provide the ability to search for people and groups around you, and I am literally seeing a group where people are buying and selling “800 meters away” from me, and another one for prostitution, which is also illegal in my country. Meanwhile, see their TOS (1). They have not complied with any of the reports or requests from users (and governments, by the looks of it) to crack down on them. While 1:1 chats are theoretically private and encrypted (full disclosure: I do not trust Telegram or any of the people behind it), Telegram’s security for public channels and groups is absolutely appalling, and they are well aware of it; they just chose to look the other way and hope they’d get away with it. You could have given them the benefit of the doubt if those were isolated (“some”) instances, sure. But just as in the case of Kim Dot-I-support-genocide-com, these are not isolated cases, and saying that they had no idea is an obvious lie.

Directive 2000/31/EC (2) states that providers are generally not liable for the content they host IF they do not have actual knowledge of illegal activity or content AND, upon obtaining such knowledge, they act to remove or disable access to that content (Telegram has been ignoring such notices). Service providers have no general obligation to monitor, but they need to provide notice-and-takedown mechanisms. Assuming that their statements are correct and they had no idea, they should be in the clear. Telegram does provide a notice-and-takedown mechanism. But saying that there are channels with 500k+ subscribers filled with people celebrating a 4-year-old girl with a blown-off leg in Ukraine, and that no one reported them in the two and a half years since they were created, is indeed naive.

(1) https://telegram.org/tos/eu

(2) https://eur-lex.europa.eu/eli/dir/2000/31/oj

> Why are these service providers being punished for what their users do? Specifically, these service providers?

https://xkcd.com/538/

Someone wants the service to stop, and has the influence to make it happen, the users are not a concern.

Now that Telegram is compromised, what’s the next chat app people trust?

It’s better that this is not the Kim Dotcom situation; that would mean Durov encouraged the illegal use of Telegram, the way Megaupload rewarded file uploads that generated heavy download traffic.

If that were the case, he would be at least an accomplice, if not the initiator, of criminal activities.

Otherwise it would just be an abuse of his service by criminals.

It seems there has been a misunderstanding; laws for service providers never exempted them from having to cooperate and provide data available to them when ordered.

>> Why are these service providers being punished for what their users do?

Are we 100% certain that this is only about Telegram? I want to see the allegations, not the vague charges, before pontificating about ISP liability. These charges might be more straightforward.

> Why are these service providers being punished for what their users do?

There is a legal distinction here between what happens on your platform despite your best efforts (what you might call “incidental” use) vs what your platform is designed specifically to do or enable.

Megaupload is a perfect example. It was used to pirate movies. Everyone knew it. The founders knew it. You can’t really argue it’s incidental or unintended or simply that small amount that gets past moderation.

Telegram, the authorities will argue, fails to moderate CSAM and other illegal activity to the point that it enables it and profits from it, which is legally indistinguishable from specifically designing your platform for it.

Many tech people fall into a binary mode of thinking because that’s how tech usually works. Either your code works or it doesn’t. You see it in arguments about pirated IP being traced back to a customer’s connection. Tech people will argue “you can’t prove it’s me”. While technically true, that’s not the legal standard.

Legal standards rely on tests. In the ISP case, authorities will look at what was pirated, whether it was found on your hard drive, whether the activity happened while you were home, and so on, to establish a balance of probabilities. Is it more likely that all this evidence adds up to your guilt, or that an increasingly unlikely set of circumstances explains it while you’re innocent?

In the early days of Bitcoin I stayed away (to my detriment) because I could see the obvious use case of it being used for illegal stuff, which it is. The authorities don’t currently care. Bitcoin, however, is the means that enables ransomware. When someone decides this is a national security issue, Bitcoin is in for a bad time.

Telegram had (for the French at least) risen to the point where they considered it a serious enough issue to warrant their attention and the full force of the government may be brought to bear on it.

> Why are these service providers being punished for what their users do?

I think this is simplified. Certainly yes, if “all” Telegram was doing was operating a neutral/unmoderated anonymized chat service, then it’s hard to see criminal culpability for the reasons you list.

But as people are pointing out, that doesn’t seem to be technically correct. Telegram isn’t completely anonymous, does have access to important customer data, and is widely suspected of complying with third party requests for that data for law enforcement and regulatory reasons.

So… IF they are doing that, and they’re doing it in a non-neutral/non-anonymized way, then they’re very plausibly subject to prosecution. Say, if you get a report of terrorist activity and provide data on the terrorists, then a month later get notified that your service is being used to distribute CSAM, and you refuse to cooperate, then it’s not that far a reach to label you an accessory to the crime.

> Why are these service providers being punished for what their users do?

Because they let their users do it and benefited from it. Try doing the same thing as a bank 🙂 Or a newspaper 🙂

Internet cannot be anarchy forever. Every anarchy ends up as oligarchy. It needs regulation and fast.

I strongly suspect there’s more to it than just running a chat system used by criminals. If that were the issue then tons of services would be under indictment.

We’ll have to wait and see, but I suspect some kind of more direct participation or explicit provable look-the-other-way at CSAM etc.

> Why are these service providers being punished for what their users do? Specifically, these service providers? Because Google, Discord, Reddit, etc. all contain some amount of CSAM (and other illegal content), yet I don’t see Pichai, Citron, or Huffman getting indicted for anything.

WORSE, you get banned for reporting CSAM to Discord, and I guarantee that if you report it to the proper authorities (FBI), they tell them to bug off and get a warrant. Can we please be consistent? If we’re going to hold these companies liable for anything, let’s be much more consistent. Worse yet, Discord doesn’t even have end-to-end encryption, and the number of child abuse scandals on that platform is insane. People build up communities where the admins (users, not Discord employees) have perceived power, and users (children) want to partake in such things. It’s essentially the Roblox issue all over again: devs taking advantage of easily impressionable minors.

They had a scandal where they allowed the furry equivalent of child porn, and quietly banned that type of porn from the platform later on. I assume due to legal requirements.

Edit:

I think the lack of bulk reporting is a pain too. They used to ask for more context. One time I reported a literal Nazi admin (swastika posting, racial slurs, and what have you), but the post was “months old” and they essentially told me to “go screw myself”; they basically asked why I was in the server.

  Why are these service providers being punished for what their users do
  (...)
  maybe I'm just being naive?

In this case, the comment does strike me as naive.

Back in the 1990s the tech community convinced itself (myself included) that Napster had zero ethical responsibility for the mass piracy it enabled. In reality, law in a society is supposed to serve that society. The tech community talked itself into believing that the only valid arguments were for Napster. In hindsight, it’s less cut-and-dry.

I have never believed E2EE to be viable, in the real world, without a back-door. It makes several horrendous kinds of crime too difficult to combat. It also has some upsides, but including a back-door, in practice, won’t erase the upsides for most users.

It is naive to think people (and government) will ignore E2EE; a feature that facilitates child porn, human trafficking, organized crime, murder-for-hire, foreign spying, etc etc. The decision about whether the good attributes justify the bad ones is too impactful on society to defer to chat app CEOs.

You can go ahead and encrypt messages yourself, without explicit E2E support on the platform. In fact, choosing your own secure channel for communicating the key would probably be more secure than anything in-band.

I doubt that will upset the public the way Signal and Telegram eventually will. Most people, including criminals, struggle with tech. If they want E2EE badly enough and it’s built into one of the big messaging GUI apps, they can succeed. If they can only get it via less user-friendly software, they’ll need help or to do research, and will likely leave a trail behind them. That is more useful to law enforcement than if they had simply downloaded one of the most popular App Store apps. It’s hard for a news story about a CLI utility to gain traction.

Yes, that is my position. E2EE back-doors might not affect my communications or yours, but have serious and undesirable repercussions for some journalists and whistleblowers. The thing is, regular people aren’t going to tolerate a sustained parade of news stories in which E2EE helps the world’s worst people to evade justice.

That’s how most law works. I have to give up my right to murder someone in order to enjoy a society where it’s illegal for everyone.

If you believe privacy not inspectable by law enforcement is wrong, the prerequisite is being willing to have the law apply to you as well.

I believe that privacy not inspectable by law enforcement is a fundamental right. I’m willing to accept that aids some crimes but also willing to change my mind if the latter becomes too much of a problem. It doesn’t seem to be the case at all ATM.

This should be obvious to everyone here, but it’s pretty much inevitable that if a backdoor exists, criminals will eventually find their way through it. Not to mention the “legitimate” access by corrupt and oppressive governments that can put people in mortal danger for daring to disagree.

No doubt that is true, and presumably Cory Doctorow has written some article making that seem like the only concern. The alternative makes it difficult to enforce all kinds of laws, though.

This comment can itself be said to take for granted a naive view of what law is.

Law is a way to enforce a policy at massive scale, sure. But there is no guarantee that it enforces things aimed at the best equilibrium of everyone flourishing in society. And even when it does, laws are made by humans, so unless they result from a highly dynamic process that gathers feedback from those to whom they apply and strives to improve over time, there is almost no chance laws can meet such an ambitious goal.

What if Napster was a symptom, but not of ill behavior? The supposition that unconditionally sharing cultural heritage is basically the sane way to go can be backed by solid anthropological evidence spanning several hundred millennia.

What if information monopolies are the massive ethical atrocity, enforced by corrupted governments hijacked by various sociopaths whose chief goal is to siphon as many resources as possible from societies?

Horrendous crimes, yes, there are many out there, often commissioned by governments who will shamelessly throw outrageous lies at their citizens to turn them into cannon fodder, among other atrocities.

Regarding fair compensation of most artists out there, we would certainly be better served by a universal unconditional income for everyone. The current fame lottery is about as fair a way to make a decent career as a national bingo.
